id: stringlengths (30-34)
text: stringlengths (0-75.5k)
industry_type: stringclasses (1 value)
2015-48/1889/en_head.json.gz/13376
Ubuntu Forums (Ubuntu Linux Support) » Other Forums » Download Free E-Books (Technical and Non-Technical Books) » Linux Device Drivers, 2nd Edition -- Free 525 Page eBook
Topic: Linux Device Drivers, 2nd Edition -- Free 525 Page eBook
Posted by UbuntuGeek
Where The Kernel Meets The Hardware
This book is for anyone who wants to support computer peripherals under the Linux operating system or who wants to develop new hardware and run it under Linux. Linux is the fastest-growing segment of the Unix market, is winning over enthusiastic adherents in many application areas, and is increasingly viewed as a good platform for embedded systems. Linux Device Drivers, already a classic in its second edition, reveals how to write drivers for a wide range of devices, information that has heretofore been shared only by word of mouth or in cryptic source code comments.
Version 2.4 of the Linux kernel includes significant changes to device drivers, simplifying many activities but providing subtle new features that can make a driver both more efficient and more flexible. The second edition of this book thoroughly covers these changes, as well as new processors and buses. You don't have to be a kernel hacker to understand and enjoy this book; all you need is an understanding of C and some background in Unix system calls. You'll learn how to write drivers for character devices, block devices, and network interfaces, guided by full-featured examples that you can compile and run without special hardware. Major changes in the second edition include discussions of symmetric multiprocessing (SMP) and locking, new CPUs, and recently supported buses. For those who are curious about how an operating system does its job, this book provides insights into address spaces, asynchronous events, and I/O.
Portability is a major concern in the text. The book is centered on version 2.4 but includes information for kernels back to 2.0 where feasible. Linux Device Drivers also shows how to maximize portability among hardware platforms; examples were tested on IA32 (PC) and IA64, PowerPC, SPARC and SPARC64, Alpha, ARM, and MIPS.
Linux Device Drivers, 2nd Edition -- Free 525 Page eBook
Download from Here
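The character-device chapters build up exactly this kind of minimal module. As a rough sketch of the shape such a driver takes (this is illustrative only, not code from the book: it uses the current cdev API and a made-up "demo" device name rather than the 2.4-era register_chrdev() interface the book documents), a skeleton might look like this:

/* Minimal character-device skeleton, in the spirit of the book's examples.
 * Assumptions: modern kernel API (cdev, alloc_chrdev_region) and a
 * hypothetical device name "demo". */
#include <linux/module.h>
#include <linux/fs.h>
#include <linux/cdev.h>

static dev_t devno;                 /* dynamically allocated major/minor */
static struct cdev demo_cdev;
static const char msg[] = "hello from the demo driver\n";

/* read(): hand a fixed message back to user space */
static ssize_t demo_read(struct file *f, char __user *buf,
                         size_t count, loff_t *ppos)
{
    return simple_read_from_buffer(buf, count, ppos, msg, sizeof(msg) - 1);
}

static const struct file_operations demo_fops = {
    .owner = THIS_MODULE,
    .read  = demo_read,
};

static int __init demo_init(void)
{
    int err = alloc_chrdev_region(&devno, 0, 1, "demo");
    if (err)
        return err;
    cdev_init(&demo_cdev, &demo_fops);
    err = cdev_add(&demo_cdev, devno, 1);
    if (err)
        unregister_chrdev_region(devno, 1);
    return err;
}

static void __exit demo_exit(void)
{
    cdev_del(&demo_cdev);
    unregister_chrdev_region(devno, 1);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Skeleton character driver");

On a real system you would still build the module against your kernel headers and create the corresponding /dev node; the book walks through all of that, and through block and network drivers, in far more depth.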
计算机
2015-48/1889/en_head.json.gz/13380
Apps etc > Android > Instructions
Instructions: settings
The settings are reached by tapping the Android menu symbol (usually three dots in a vertical line) at the top right-hand corner of any Universalis page and picking "Settings" from the menu that appears. Some manufacturers have changed the standard Android menu symbol, so press whatever you do see at the top right-hand corner. If you don't see anything at all, you must be using a device with a physical menu button: in which case, press that button. Here is a list of the settings and what they mean.
Local calendar
There is a General Calendar, shared by the whole Church, and then there are local calendars which have saints and celebrations of more local interest. For example, Saint Benedict is celebrated with a memorial in the universal Church but with a feast in Europe, while Saint Willibrord, who isn't in the General Calendar at all, is celebrated with an optional memorial in some English dioceses and a solemnity in the Netherlands. Not all local calendars are included in Universalis, but an increasing number are. Pick the one that looks best for you.
Liturgy of the Hours
Invitatory Psalm: four different psalms may be used as the Invitatory Psalm, although Psalm 94 (95) is the traditional one. Universalis lets you choose whether to rotate between the permitted options ("Different each day") or stick to Psalm 94 (95) permanently ("Same every day").
Psalm translation: the Grail translation is the one used in most English versions of the Liturgy of the Hours worldwide. For copyright reasons we have to use a version of our own in our web pages, so we offer our own version here as an alternative in case you have got used to it.
Readings at Mass
Readings & Psalms: in the English-speaking world, the most usual translation is the Jerusalem Bible for the Scripture readings and the Grail version of the psalms. In the USA, the New American Bible is used. Universalis lets you choose either. We apologize to Canada and South Africa: we are still trying to negotiate with the owners of the NRSV, which you are using at Mass.
Prayers and Antiphons: If you are using Universalis as a private spiritual resource, the Mass readings of the day are probably all that you want. If you are taking it to Mass with you, you may want the Entrance Antiphon and the other prayers and antiphons from the printed missals. This option lets you choose.
Order of Mass
Priest's Private Prayers: You can choose whether to include in the Order of Mass (and in Mass Today) the prayers that are said silently or quietly by the priest.
Extra languages
Gospel at Mass: You can choose whether to view the original Greek text of the Gospel alongside the English. This may not work on your device: some manufacturers provide the correct font on their Android devices and others do not, and the app has no way of knowing. (The feature to ask about is "polytonic Greek".)
Order of Mass: You can view Latin or one of a number of other European languages in parallel with the English text of the Order of Mass. This is intended to help you follow Mass when you are abroad. The Mass Today page will also show you the parallel texts, but it will display the daily content (prayers, psalms and readings) in English only.
Liturgy of the Hours: You can choose whether to view the Latin text of the Hours alongside the English.
Web-based services
Set up daily emails: If you like, our web site can send you daily emails with all the Hours or just a selection of them. Press this button to set it up.
计算机
2015-48/1889/en_head.json.gz/14617
xTuple ERP 3.0 Wins "Best Business Application" At LinuxWorld Conference & Expo 2008
Multi-Functional Open Source Product Lauded in LinuxWorld Product Excellence Awards For Its Strong Features, Active and Collaborative Global User Community
xTuple, the leader in open source enterprise resource planning software, today announced its xTuple ERP 3.0 application has been named "Best Business Application" for 2008 in the prestigious LinuxWorld Product Excellence Awards competition at LinuxWorld® Conference & Expo 2008 in San Francisco.
The LinuxWorld Product Excellence Awards, judged by a group of respected industry experts and given out by IDG World Expo, producer of LinuxWorld, recognize major areas of innovation in the Linux and open source community. Among the criteria for the award are the vitality and contributions made by a product's public user community, the continued support shown by the product's host developer, and innovation in the features created by both the developer and the community.
xTuple ERP is available in three versions — the PostBooks®, Standard, and OpenMFG Editions. All three versions use the same client software, run equally well on Windows, Linux, and Mac, and are fully internationalized (multi-currency, support for multiple tax structures, and multilingual translation packs maintained by xTuple's global community). The PostBooks® Edition is xTuple's popular open source accounting, ERP (Enterprise Resource Planning) and CRM (Customer Relationship Management) software, available at no cost to businesses of all kinds. It includes full financials, robust CRM, sales and purchasing, ad hoc reporting capabilities, and lightweight inventory, manufacturing and distribution tools. The Standard and OpenMFG Editions add enterprise functionality in the areas of distribution, retail, and manufacturing.
Version 3.0 of xTuple ERP, which debuted in June, is the result of significant contributions by both xTuple and the vibrant xTuple ERP user community. Its extensive list of new features includes the world's only open source assemble-to-order Product Configurator; a Screen Builder for designing customized system dashboards; advanced warranty tracking; internationalization and localization upgrades; and a user-specific roles and groups capability.
"xTuple ERP is fortunate to have a highly supportive, energetic and enthusiastic community of users from around the world," said Ned Lilly, president and CEO of xTuple. "We share this award with them, and look forward to continued success in working together with our multi-product strategy to advance what we believe is the world's most advanced open source ERP."
xTuple's "Best Business Application" plaque was awarded at a ceremony August 5 during LinuxWorld Conference & Expo 2008, held in San Francisco's Moscone Center. LinuxWorld is the premier global event focused on Linux and open source solutions, attracting nearly 10,000 attendees and 200 exhibitors from around the world.
xTuple ERP: PostBooks® Edition is available for free download at http://sourceforge.net/projects/postbooks. More information on xTuple ERP, including public discussion forums, a documentation wiki, blogs, demonstration videos, and the community issue/bug tracker, can be found at www.xtuple.org and www.xtuple.com.
About xTuple, the world's #1 open source ERP
Award-winning xTuple, maker of the world's leading suite of open source accounting, Customer Relationship Management (CRM) and Enterprise Resource Planning (ERP) software, provides supply chain management software that growing businesses use to control their operations and profitability. xTuple integrates all critical functional areas in one modular system: sales, financials and operations — including customer and supplier management, inventory control, manufacturing and distribution — the powerful tools to Grow Your World®. As a commercial open source company, xTuple works with a global community of tens of thousands of professional users. xTuple gives customers the ability to tailor solutions with multi-platform support for Windows, Mac, Linux and mobile as well as flexible licensing and pricing options. Connect with the company at www.xTuple.com, and join the innovation conversation with the open source community at www.xTuple.org.
计算机
2015-48/1890/en_head.json.gz/41
Windows 8 on the desktop—an awkward hybrid
Windows 8 has a new tablet-friendly UI, but how is it on the desktop? - Apr 25, 2012 1:00 am UTC
Illustration by Aurich Lawson
Windows 8's new user interface has proven nothing short of polarizing. The hybrid operating system pairs a new GUI concept, the touch-friendly Metro interface, with the traditional windows, icons, menus, and pointer concept that Windows users have depended on for decades. In so doing, it removes Windows mainstays such as the Start button and Start menu. While few are concerned about Windows 8's usability as a tablet operating system, desktop users remain wary. Will the new operating system take a huge step back in terms of both productivity and usability?
Specific concerns voiced in our forums have included the mandated fullscreen view and a lack of resizable windows, the tight restrictions on what applications are permitted to do, and the automatic termination of background applications. The good news is that these specific criticisms are largely off-base. Windows 8 includes a full desktop with all the applications and behavior that you expect a Windows desktop to include. This means full multitasking (no background suspension or termination), full system access (to the extent that your user permissions allow), resizable non-maximized windows, Aero snap, pinned taskbar icons, alt-tab—it's all still there and it all still works.
The bad news is that the various pieces of the operating system do not in fact mesh together smoothly; the seams, especially between the Metro and legacy interfaces, remain obvious and jarring. For desktop users, the experience remains decidedly mixed. Let's run through the most common interface elements and see how Windows 8 fixes old problems—and creates new ones of its own.
A "Start menu" for the tablet age
The Start screen in all its glory
The behavior of Windows 8 when running and switching between applications has not changed much. When it comes to launching applications, however, the changes are unavoidably in-your-face. Instead of the Start menu, we have the Start screen. Depending on your screen resolution, the old Start menu occupied somewhere between a hefty chunk (about a third or so) and a small portion of the screen. If you use Windows 8, you can't help but use the Start screen; the system even shows it immediately after you log in. (This is configurable; you can elect to show the desktop instead, and Windows Server 2012 boots to the desktop by default).
The Start screen and Start menu exist to do essentially the same thing. Although their presentation differs, they both launch programs. Both split those programs into two "kinds," a limited selection of pinned or promoted programs and a comprehensive "all programs" view that contains all applications installed.
The traditional Windows 7 Start menu
The traditional Start menu doesn't just depend on pinned programs. It devotes much of its space to programs that you use regularly, using some algorithm to determine which applications make the grade and in which order they appear. On the one hand, this means that the Start menu is automatically populated with icons for programs that you use often. On the other hand, its appearance becomes unpredictable—applications can bubble up onto the list or drop off out of sight. The pinned area, controlled by the user (applications can't pin themselves on installation), remains more predictable.
The Windows 7 Start menu with a bunch of settings changed
The Start screen has no recently used feature, but it allows a much greater number of applications to be pinned and positioned in 2D space. This makes it more predictable—the system won't discard a program icon unexpectedly because it thinks you use something else more often—but it also means that more care must be taken to actually organize the page the way you want. Applications also get pinned automatically when installed. This can be a little unfortunate when installing large desktop applications, as many of them dump a whole load of icons onto the Start screen. Given the way that the Start screen's layout is both personal and customizable, this is a little offensive; I would prefer to see a kind of optional synthetic group that contains most-recently-used applications and that picked up newly installed programs in the way that the Start menu does. Everything outside this group would retain the position and layout that I specified, restricting the randomness and icon spraying to one specific section of the Start screen.
Organizing the pinned icons is also a little weird. Although it doesn't immediately look this way, each group on the screen is internally made out of columns. Each column is the width of a wide tile. You have to fill up one column with a mix of narrow and wide tiles before you can move on to the next column. This can make organizing the tiles the way you want a bit harder than anticipated. It absolutely prevents certain layouts from being constructed, too—a wide tile can't span two of the invisible columns. Microsoft may have some secret rationale for doing things this way, but if it does, I don't know what it is.
For programs that aren't accessible from the main screen, both the Start menu and Start screen make you delve into "All Programs."
The Windows XP (and below) approach to All Programs wasn't much good either.
All Programs has never been much fun. In Windows XP and below, the regular flyout menus scaled atrociously. Systems with lots of software installed would have menus so tall that they filled the entire screen and then some, spilling into multiple columns. This was particularly exciting when you had so many entries that they ran into the right-hand edge of the screen, at which point they started opening to the left. Windows Vista and Windows 7 took a step backwards with a weird "in-place" scrollable menu that prevented you from even seeing the whole set of installed programs at once. It also pulled awesome stunts like "not letting you see the name of the program you want to open."
Windows 7's Start Menu had plenty of flaws. I have no idea which of those icons is what.
The Start screen's All Programs—or rather, All Apps, as every program is now an "app" these days—takes some steps forward and some back. It makes better use of space than the menus in Windows Vista and Windows 7, especially as the Start screen supports "semantic zoom." But it can still be unwieldy if you have an enormous number of programs installed. At least it doesn't degrade into backwards-opening menus the way Windows XP could, but it will unfortunately still truncate program names.
The All Apps view is really pretty decent, compared to Microsoft's prior efforts in this area.
Overall, it works quite well, but it has one peculiar drawback that might just be an oversight: there's apparently no way to dismiss it.
If you decide that you want to back out and return to the regular start screen, there's no "back" button (as in the Windows Vista and Windows 7 Start menu) or other provision to revert. You can use the standard system features to bring up the Start screen—the charms or the Windows key or the hot corner—but to me, at least, this feels unnatural. All Apps is, logically, a part of the Start screen, just a sort of different mode. I don't intuitively expect the charms to undo that mode switch.
The solution that Microsoft provided to the way the Start menu breaks down with large numbers of applications is search. Start menu search works in Windows 8 in much the same way as it does in Windows Vista and Windows 7—on the Start screen, you just start typing. There are some differences, of course. The searchable Start menu was configurable, so you could enable integrated whole-system search results and use it as your one-stop shop for computer searching. I did this and grew to depend on it. Not so with Windows 8, which has three filtered search views ("apps," which is the default, "settings," and "files"). If you just type in the name of a Control Panel item, you won't find it, because Windows 8 only searches apps by default. If you really want to search files rather than apps, you have to type your search term, hit tab to switch the focus to the right-hand bar, then down arrow to select files, then tab again to move to the file selector. Sure, you could use the mouse, but the great joy of searchable Start is that you don't have to lift your hands from the keyboard. The lack of customizability is a great shame; I would much prefer to have a "search all" that grouped its results into apps, settings, and files.
You have to explicitly pick Settings from the sidebar. This is a step backward.
As different as the Start screen looks, it performs the same function as the Start menu, and it performs it in substantially the same way. Some problems it solves a little better, some a little worse, but what it doesn't do is require any great shift in how programs are launched.
The fullscreen issue
Operationally, the Start screen and Start menu are similar, with the major functional areas of the Start menu having direct counterparts in the Start screen. One aspect of the Start screen's appearance, however, raises hackles: like everything else Metro, it's fullscreen. When the Start screen is up, you can't see anything else on your primary monitor. This provokes two complaints: it covers up things people want to look at, and it's jarring to have such a major switch. Both of these are true, but how problematic they are is less clear.
The Start menu is already always-on-top and in the foreground, meaning that it covers up a big chunk of the screen anyway. As with any other menu, once you click away it collapses. This greatly limits what can be done while the Start menu is open. You can't, for example, open the Start menu, then click in an e-mail or on a Web page without also collapsing the menu. It might not be possible to simultaneously use an app while the Start screen is open, but the same is in practice true of the Start menu.
As for the "shock" from switching to a fullscreen Start screen, the impact of this will depend greatly on how you use your computer. If your windows are mainly maximized (or at least large) then it's not so very different from simply switching between two windows.
The Start screen might be a little more colorful than a Word document, say, but it's hardly a standout among webpages, media players, or even a lot of e-mails. Those who like lots of smaller windows will find the fullscreen Start screen more unwelcome. It shouldn't change how they use their computers at all, since as soon as they launch an application the Start screen will go away, but it certainly doesn't fit with their preferences.
The fullscreen approach also has implications for usability. In general, targets are easier to hit the larger they are, and they're easier to hit the closer they are to the pointer (that is, small mouse movements are more accurate than large ones); this is the trade-off formalized by Fitts's law (see the note at the end of this article). The Start menu's targets are all relatively close to the mouse, but are relatively small in one direction (wide, but not tall). The Start screen makes its targets bigger, but because they're spread across the full width of the screen, they're further away. Microsoft argues that the new design is, overall, a net win: although you have to move the mouse a little further, this is more than offset by the larger targets, producing targets that are easier to hit. In practice I haven't found it to matter; I could hit the targets on the Start menu easily, and I can hit the targets on the Start screen easily.
Only for tablet users?
The Start screen's design has plainly been influenced by the need to cater to tablet users. Big targets, fullscreen graphics, panning, zooming, and a move away from a traditional menu—these all meet the needs of touch users. Is that a problem for desktop users? I don't think so. The new design may have been driven by the needs of touch, but it doesn't make the mouse any worse, and in some ways makes it better. I think pinned programs work better with the Start screen, and the All Programs view doesn't degrade quite as horribly as the old Start menu when filled with hundreds of icons. It's not all good; in particular, I miss the automatic promotion of recently used items. But it works, and for the most part, it works well.
Windows 7's Start Menu has never been very usable if you delve into "All Programs".
It's not perfect, but neither is the Start menu. Having used Windows 8 quite a bit for a number of weeks (not quite full-time, but routinely, for hours each day), I think the Start screen works at least as well as the Start menu ever did. Once Windows 8 Metro apps become more abundant and my pinned apps thus become more useful, I think the Start screen will be a clear winner.
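A note on the pointing argument above, for readers who want the standard formulation: the article does not cite it, but the size-versus-distance trade-off it describes is the one captured by Fitts's law, which models the time to move a pointer to a target as

T = a + b \log_2\left(1 + \frac{D}{W}\right)

where D is the distance to the target, W is the target's width along the axis of motion, and a and b are empirically fitted constants. Microsoft's "net win" claim amounts to saying that the Start screen increases W at least as fast as it increases D, so the index of difficulty \log_2(1 + D/W) stays the same or drops.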
计算机
2015-48/1890/en_head.json.gz/85
The Open Group Blog | October 18, 2012 · 2:28 PM
SOA Provides Needed Support for Enterprise Architecture in Cloud, Mobile, Big Data, Says Open Group Panel
By Dana Gardner, BriefingsDirect
There’s been a resurgent role for service-oriented architecture (SOA) as a practical and relevant ingredient for effective design and use of Cloud, mobile, and big data technologies. To find out why, The Open Group recently gathered an international panel of experts to explore the concept of “architecture is destiny,” especially when it comes to hybrid services delivery and management. The panel shows how SOA is proving instrumental in allowing the needed advancements over highly distributed services and data, when it comes to scale, heterogeneity support, and governance. The panel consists of Chris Harding, Director of Interoperability at The Open Group, based in the UK; Nikhil Kumar, President of Applied Technology Solutions and Co-Chair of the SOA Reference Architecture Projects within The Open Group, and he’s based in Michigan, and Mats Gejnevall, Enterprise Architect at Capgemini and Co-Chair of The Open Group SOA Work Group, and he’s based in Sweden. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. The full podcast can be found here. Gardner: Why this resurgence in the interest around SOA? Harding: My role in The Open Group is to support the work of our members on SOA, Cloud computing, and other topics. We formed the SOA Work Group back in 2005, when SOA was a real emerging hot topic, and we set up a number of activities and projects. They’re all completed. I was thinking that the SOA Work Group would wind down, move into maintenance mode, and meet once every few months or so, but we still get a fair attendance at our regular web meetings. In fact, we’ve started two new projects and we’re about to start a third one. So, it’s very clear that there is still an interest, and indeed a renewed interest, in SOA from the IT community within The Open Group. Larger trends Gardner: Nikhil, do you believe that this has to do with some of the larger trends we’re seeing in the field, like Cloud Software as a Service (SaaS)? What’s driving this renewal? Kumar: What I see driving it is three things. One is the advent of the Cloud and mobile, which requires a lot of cross-platform delivery of consistent services. The second is emerging technologies, mobile, big data, and the need to be able to look at data across multiple contexts. The third thing that’s driving it is legacy modernization. A lot of organizations are now a lot more comfortable with SOA concepts. I see it in a number of our customers. I’ve just been running a large Enterprise Architecture initiative in a Fortune 500 customer. At each stage, and at almost every point in that, they’re now comfortable. They feel that SOA can provide the ability to rationalize multiple platforms. They’re restructuring organizational structures, delivery organizations, as well as targeting their goals around a service-based platform capability. So legacy modernization is a back-to-the-future kind of thing that has come back and is getting adoption.
The way it’s being implemented is using RESTful services, as well as SOAP services, which is different from traditional SOA, say from the last version, which was mostly SOAP-driven. Gardner: Mats, do you think that what’s happened is that the marketplace and the requirements have changed and that’s made SOA more relevant? Or has SOA changed to better fit the market? Or perhaps some combination? Gejnevall: I think that the Cloud is really a service delivery platform. Companies discover that to be able to use the Cloud services, the SaaS things, they need to look at SOA as their internal development way of doing things as well. They understand they need to do the architecture internally, and if they’re going to use lots of external Cloud services, you might as well use SOA to do that. Also, if you look at the Cloud suppliers, they also need to do their architecture in some way and SOA probably is a good vehicle for them. They can use that paradigm and also deliver what the customer wants in a well-designed SOA environment. Gardner: Let’s drill down on the requirements around the Cloud and some of the key components of SOA. We’re certainly seeing, as you mentioned, the need for cross support for legacy, Cloud types of services, and using a variety of protocol, transports, and integration types. We already heard about REST for lightweight approaches and, of course, there will still be the need for object brokering and some of the more traditional enterprise integration approaches. This really does sound like the job for an Enterprise Service Bus (ESB). So let’s go around the panel and look at this notion of an ESB. Some people, a few years back, didn’t think it was necessary or a requirement for SOA, but it certainly sounds like it’s the right type of functionality for the job. Loosely coupled Harding: I believe so, but maybe we ought to consider that in the Cloud context, you’re not just talking about within a single enterprise. You’re talking about a much more loosely coupled, distributed environment, and the ESB concept needs to take account of that in the Cloud context. Gardner: Nikhil, any thoughts about how to manage this integration requirement around the modern SOA environment and whether ESBs are more or less relevant as a result? Kumar: In the context of a Cloud we really see SOA and the concept of service contracts coming to the fore. In that scenario, ESBs play a role as a broker within the enterprise. When we talk about the interaction across Cloud-service providers and Cloud consumers, what we’re seeing is that the service provider has his own concept of an ESB within its own internal context. If you want your Cloud services to be really reusable, the concept of the ESB then becomes more for the routing and the mediation of those services, once they’re provided to the consumer. There’s a kind of separation of concerns between the concept of a traditional ESB and a Cloud ESB, if you want to call it that. The Cloud context involves more of the need to be able to support, enforce, and apply governance concepts and audit concepts, the capabilities to ensure that the interaction meets quality of service guarantees. That’s a little different from the concept that drove traditional ESBs. That’s why you’re seeing API management platforms like Layer 7, Mashery, or Apigee and other kind of product lines. They’re also coming into the picture, driven by the need to be able to support the way Cloud providers are provisioning their services. 
As Chris put it, you’re looking beyond the enterprise. Who owns it? That’s where the role of the ESB is different from the traditional concept. Most Cloud platforms have cost factors associated with locality. If you have truly global enterprises and services, you need to factor in the ability to deal with safe harbor issues and you need to factor in variations and law in terms of security governance. The platforms that are evolving are starting to provide this out of the box. The service consumer or a service provider needs to be able to support those. That’s going to become the role of their ESB in the future, to be able to consume a service, to be able to assert this quality-of-service guarantee, and manage constraints or data-in-flight and data-at-rest. Gardner: Mats, are there other aspects of the concept of ESB that are now relevant to the Cloud? Entire stack Gejnevall: One of the reasons SOA didn’t really take off in many organizations three, four, or five years ago was the need to buy the entire stack of SOA products that all the consultancies were asking companies to buy, wanting them to buy an ESB, governance tools, business process management tools, and a lot of sort of quite large investments to just get your foot into the door of doing SOA. These days you can buy that kind of stuff. You can buy the entire stack in the Cloud and start playing with it. I did some searches on it today and I found a company that you can play with the entire stack, including business tools and everything like that, for zero dollars. Then you can grow and use more and more of it in your business, but you can start to see if this is something for you. In the past, the suppliers or the consultants told you that you could do it. You couldn’t really try it out yourself. You needed both the software and the hardware in place. The money to get started is much lower today. That’s another reason people might be thinking about it these days. Gardner: It sounds as if there’s a new type of on-ramp to SOA values, and the componentry that supports SOA is now being delivered as a service. On top of that, you’re also able to consume it in a pay-as-you-go manner. Harding: That’s a very good point, but there are two contradictory trends we are seeing here. One is the kind of trend that Mats is describing, where the technology you need to handle a complex stack is becoming readily available in the Cloud. And the other is the trend that Nikhil mentioned: to go for a simpler style, which a lot of people term REST, for accessing services. It will be interesting to see how those two tendencies play out against each other. Kumar: I’d like to make a comment on that. The approach for the on-ramp is really one of the key differentiators of the Cloud, because you have the agility and the lack of capital investment (CAPEX) required to test things out. But as we are evolving with Cloud platforms, I’m also seeing with a lot of Platform-as-a-Service (PaaS) vendor scenarios that they’re trying the ESB in the stack itself. They’re providing it in their Cloud fabric. A couple of large players have already done that. For example, Azure provides that in the forward-looking vision. I am sure IBM and Oracle have already started down that path. A lot of the players are going to provide it as a core capability. Pre-integrated environment Gejnevall: Another interesting thing is that they could get a whole environment that’s pre-integrated. Usually, when you buy these things from a vendor, a lot of times they don’t fit together that well. 
Now, there’s an effort to make them work together. But some people put these open-source tools together. Some people have done that and put them out on the Cloud, which gives them a pretty cheap platform for themselves. Then, they can sell it at a reasonable price, because of the integration of all these things. Gardner: The Cloud model may be evolving toward an all-inclusive offering. But SOA, by its definition, advances interoperability, to plug and play across existing, current, and future sets of service possibilities. Are we talking about SOA being an important element of keeping Clouds dynamic and flexible — even open? Kumar: We can think about the OSI 7 Layer Model. We’re evolving in terms of complexity, right? So from an interoperability perspective, we may talk SOAP or REST, for example, but the interaction with AWS, Salesforce, SmartCloud, or Azure would involve using APIs that each of these platforms provide for interaction. Lock-in So you could have an AMI, which is an image on the Amazon Web Services environment, for example, and that could support a lab stack or an open source stack. How you interact with it, how you monitor it, how you cluster it, all of those aspects now start factoring in specific APIs, and so that’s the lock-in. From an architect’s perspective, I look at it as we need to support proper separation of concerns, and that’s part of [The Open Group] SOA Reference Architecture. That’s what we tried to do, to be able to support implementation architectures that support that separation of concerns. There’s another factor that we need to understand from the context of the Cloud, especially for mid-to-large sized organizations, and that is that the Cloud service providers, especially the large ones — Amazon, Microsoft, IBM — encapsulate infrastructure. If you were to go to Amazon, Microsoft, or IBM and use their IaaS networking capabilities, you’d have one of the largest WAN networks in the world, and you wouldn’t have to pay a dime to establish that infrastructure. Not in terms of the cost of the infrastructure, not in terms of the capabilities required, nothing. So that’s an advantage that the Cloud is bringing, which I think is going to be very compelling. The other thing is that, from an SOA context, you’re now able to look at it and say, “Well, I’m dealing with the Cloud, and what all these providers are doing is make it seamless, whether you’re dealing with the Cloud or on-premise.” That’s an important concept. Now, each of these providers and different aspects of their stacks are at significantly different levels of maturity. Many of these providers may find that their stacks do not interoperate with themselves either, within their own stacks, just because they’re using different run times, different implementations, etc. That’s another factor to take in. From an SOA perspective, the Cloud has become very compelling, because I’m dealing, let’s say, with a Salesforce.com and I want to use that same service within the enterprise, let’s say, an insurance capability for Microsoft Dynamics or for SugarCRM. If that capability is exposed to one source of truth in the enterprise, you’ve now reduced the complexity and have the ability to adopt different Cloud platforms. What we are going to start seeing is that the Cloud is going to shift from being just one à-la-carte solution for everybody. It’s going to become something similar to what we used to deal with in the enterprise context. 
You had multiple applications, which you service-enabled to reduce complexity and provide one service-based capability, instead of an application-centered approach. You’re now going to move the context to the Cloud, to your multiple Cloud solutions, and maybe many implementations in a nontrivial environment for the same business capability, but they are now exposed to services in the enterprise SOA. You could have Salesforce. You could have Amazon. You could have an IBM implementation. And you could pick and choose the source of truth and share it. So a lot of the core SOA concepts will still apply and are still applying. Another on-ramp Gardner: Perhaps yet another on-ramp to the use of SOA is the app store, which allows for discovery, socialization of services, but at the same time provides overnance and control? Kumar: We’re seeing that with a lot of our customers, typically the vendors who support PaaS solution associate app store models along with their platform as a mechanism to gain market share. The issue that you run into with that is, it’s okay if it’s on your cellphone or on your iPad, your tablet PC, or whatever, but once you start having managed apps, for example Salesforce, or if you have applications which are being deployed on an Azure or on a SmartCloud context, you have high risk scenario. You don’t know how well architected that application is. It’s just like going and buying an enterprise application. When you deploy it in the Cloud, you really need to understand the Cloud PaaS platform for that particular platform to understand the implications in terms of dependencies and cross-dependencies across apps that you have installed. They have real practical implications in terms of maintainability and performance. We’ve seen that with at least two platforms in the last six months. Governance becomes extremely important. Because of the low CAPEX implications to the business, the business is very comfortable with going and buying these applications and saying, “We can install X, Y, or Z and it will cost us two months and a few million dollars and we are all set.” Or maybe it’s a few hundred thousand dollars. They don’t realize the implications in terms of interoperability, performance, and standard architectural quality attributes that can occur. There is a governance aspect from the context of the Cloud provisioning of these applications. There is another aspect to it, which is governance in terms of the run-time, more classic SOA governance, to measure, assert, and to view the cost of these applications in terms of performance to your infrastructural resources, to your security constraints. Also, are there scenarios where the application itself has a dependency on a daisy chain, multiple external applications, to trace the data? In terms of the context of app stores, they’re almost like SaaS with a particular platform in mind. They provide the buyer with certain commitments from the platform manager or the platform provider, such as security. When you buy an app from Apple, there is at least a reputational expectation of security from the vendor. What you do not always know is if that security is really being provided. There’s a risk there for organizations who are exposing mission-critical data to that. The second thing is there is still very much a place for the classic SOA registries and repositories in the Cloud. Only the place is for a different purpose. 
Those registries and repositories are used either by service providers or by consumers to maintain the list of services they’re using internally. Different paradigms There are two different paradigms. The app store is a place where I can go and I know that the gas I am going to get is 85 percent ethanol, versus I also have to maintain some basic set of goods at home to make that I have my dinner on time. These are different kind of roles and different kind of purposes they’re serving. Above all, I think the thing that’s going to become more and more important in the context of the Cloud is that the functionality will be provided by the Cloud platform or the app you buy, but the governance will be a major IT responsibility, right from the time of picking the app, to the time of delivering it, to the time of monitoring it. Gardner: How is The Open Group allowing architects to better exercise SOA principles, as they’re grappling with some of these issues around governance, hybrid services delivery and management, and the use and demand in their organizations to start consuming more Cloud services? Harding: The architect’s primary concern, of course, has to be to meet the needs of the client and to do so in a way that is most effective and that is cost-effective. Cloud gives the architect a usability to go out and get different components much more easily than hitherto. There is a problem, of course, with integrating them and putting them together. SOA can provide part of the solution to that problem, in that it gives a principle of loosely coupled services. If you didn’t have that when you were trying to integrate different functionality from different places, you would be in a real mess. What The Open Group contributes is a set of artifacts that enable the architect to think through how to meet the client’s needs in the best way when working with SOA and Cloud. For example, the SOA Reference Architecture helps the architect understand what components might be brought into the solution. We have the SOA TOGAF Practical Guide, which helps the architect understand how to use TOGAF® in the SOA context. We’re working further on artifacts in the Cloud space, the Cloud Computing Reference Architecture, a notational language for enabling people to describe Cloud ecosystems on recommendations for Cloud interoperability and portability. We’re also working on recommendations for Cloud governance to complement the recommendations for SOA governance, the SOA Governance Framework Standards that we have already produced, and a number of other artifacts. The Open Group’s real role is to support the architect and help the architect to better meet the needs of the architect client. From the very early days, SOA was seen as bringing a closer connection between the business and technology. A lot of those promises that were made about SOA seven or eight years ago are only now becoming possible to fulfill, and that business front is what that project is looking at. We’re also producing an update to the SOA Reference Architectures. We have input the SOA Reference Architecture for consideration by the ISO Group that is looking at an International Standard Reference Architecture for SOA and also to the IEEE Group that is looking at an IEEE Standard Reference Architecture. We hope that both of those groups will want to work along the principles of our SOA Reference Architecture and we intend to produce a new version that incorporates the kind of ideas that they want to bring into the picture. 
We’re also thinking of setting up an SOA project to look specifically at assistance to architects building SOA into enterprise solutions. So those are three new initiatives that should result in new Open Group standards and guides to complement, as I have described already, the SOA Reference Architecture, the SOA Governance Framework, the Practical Guides to using TOGAF for SOA. We also have the Service Integration Maturity Model that we need to assess the SOA maturity. We have a standard on service orientation applied to Cloud infrastructure, and we have a formal SOA Ontology. Those are the things The Open Group has in place at present to assist the architect, and we are and will be working on three new things: version 2 of the Reference Architecture for SOA, SOA for business technology, and I believe shortly we’ll start on assistance to architects in developing SOA solutions.
Dana Gardner is the Principal Analyst at Interarbor Solutions, which identifies and interprets the trends in Services-Oriented Architecture (SOA) and enterprise software infrastructure markets. Interarbor Solutions creates in-depth Web content and distributes it via BriefingsDirect™ blogs, podcasts and video-podcasts to support conversational education about SOA, software infrastructure, Enterprise 2.0, and application development and deployment strategies.
计算机
2015-48/1890/en_head.json.gz/275
Ian Murdock
Linux old timer. Debian founder. Sun alum. Salesforce ExactTarget exec.
November 9, 2005
One of the big Debian stories of the week is that a company called Nexenta Systems has made a version of Ubuntu that’s based on OpenSolaris rather than the Linux kernel. Personally, I find the emergence of a Debian-based OpenSolaris distribution exciting, as it promises to vastly improve Solaris installation, packaging, and overall usability. Solaris is great technology with an incredible pedigree and some very compelling features (DTrace, in particular, sounds like a godsend, as I’m sure anyone who’s debugged kernel code via endless iterations of inserting printfs at strategic places would agree), not to mention that it’s now open source. However, when a Linux developer eager to have a look at all this neat new open source stuff boots up Solaris for the first time, it’s a bit of a throwback to an earlier time (not to mention the fact that apt-get is a hard habit to break..).
And, so, I’m more than a little embarrassed at how certain members of the Debian community reacted to Nexenta’s work. The vitriol surprised even me, knowing as much as I know about how, uh, strongly the Debian community feels about certain issues. The issue in this case: Nexenta links GPL-licensed programs (including dpkg) with the Sun C library, which is licensed under the GPL-incompatible but still free software/open source CDDL license. Granted, Nexenta didn’t go about introducing themselves to the Debian community in the best way, and there may (may) be issues around whether or not what they are doing is permitted by the GPL, but couldn’t we at least engage them in a more constructive manner?
In terms of the actual issue being discussed here, am I the only one who doesn’t get it? It seems to me that the argument that linking a GPL application to a CDDL library somehow makes the library a derivative work of the application is, to say the least, a stretch—not to mention the fact that we’re talking about libc here, a library with a highly standard interface that’s been implemented any number of times and, heck, that’s even older than the GPL itself. It’s interpretations like this, folks, that give the GPL its reputation of being viral, and I know how much Richard Stallman hates that word. It’s one thing to ensure that actual derivative works of GPL code are themselves licensed under similar terms; it’s quite another to try to apply the same argument to code that clearly isn’t a derivative work in an attempt to spread free software at any cost. I’ve been a big GPL advocate for a long time, but that just strikes me as wrong.
25 comments on “No good deed goes unpunished”
Decklin Foster, November 9, 2005 at 6:15pm
How quickly we forget history.
Joe Bowman, November 9, 2005 at 6:32pm
I don’t know. Having looked over a lot of the mailing list traffic surrounding Nexenta and GNU/Solaris, I think this is less of an embarrassment to the Debian community and more a shining example of how *not* to try to develop open source software for use by others (slam two licenses together without bothering to carefully examine how they interact? Sheer folly.).
There are diehard GPL advocates and enforcers in every OSS-related community, and I think it’s a bit much to say that the Debian community’s response wasn’t constructive, especially in light of the fact that Nexenta went from “Hey, here’s this neat thing we want to do!” to *shipping* a potentially-infringing product within two weeks of the initial announcement. Building a truly constructive community response takes time, and the effort was certainly made, but Nexenta didn’t want to wait (or listen, from what I’ve seen). They rushed ahead and reaped the ensuing backlash people were warning them about. tinus November 9, 2005 at 6:56pm From the license: ‘However, as a special exception, the source code distributed need not include anything that is normally distributed (in either source or binary form) with the major components (compiler, kernel, and so on) of the operating system on which the executable runs, unless that component itself accompanies the executable.’ So there’s no problem at all. Otherwise you would never have been able to use GPL
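Murdock’s point about libc’s “highly standard interface” is easy to see in code. The sketch below is purely illustrative (it is not taken from dpkg or any of the programs at issue): a GPL-style program that touches the C library only through interfaces specified by ISO C and POSIX, so the same source builds unchanged against glibc on Debian GNU/Linux or against the CDDL-licensed libc on an OpenSolaris-based system.

/* Illustrative only: a trivial program that uses nothing but standard
 * C library interfaces, so it compiles and links unchanged against glibc
 * or against the (CDDL-licensed) OpenSolaris libc. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* getenv() and printf() are specified by ISO C / POSIX; the program
     * depends on the interface, not on any particular implementation. */
    const char *home = getenv("HOME");
    printf("hello from a portable program (HOME=%s)\n",
           home ? home : "unset");
    return EXIT_SUCCESS;
}

Whether the link step creates a “derivative work” is the legal question the thread argues about; the source code itself is identical either way, which is the substance of the standard-interface argument and of the GPL’s system-library exception quoted in the last comment above.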
计算机
2015-48/1890/en_head.json.gz/283
Two of WDI's senior Imagineers were just given their walking papers, leaving a tsunami of questions in their wake.
Tim Delaney had been with Imagineering since 1976. As Executive Designer, Vice President, his high points were easily Discoveryland at Disneyland Paris, the centerpiece of which is the incredible re-imagining of Space Mountain, and The Living Seas at Epcot. On the low end Tim brought us California Adventure's inaugural entryway and Paradise Pier. Just two days before California Adventure opened, Tim defended the park with a ferocious tenacity not seen since the Queen Mother Alien fought off Ellen Ripley.
Doobie Moseley - Laughing Place: Have you been confident this whole time that this park (California Adventure) would be able to please Disney guests?
Tim Delaney: Absolutely, no question in my mind. Absolutely. The reason is because of the combination of the way it's laid out and the art direction, everything about it... They're going to love it and this is how I felt about this entire California project from the very beginning.
Dubious taste aside, Tim was still an old school champion of quality at Imagineering and always fought for the better show. It could easily be argued that getting even the most basic elements of quality green-lit for an Eisner-era project whose very manifesto was about cheaper than cheap meant a fight to the finish, something Tim hinted at in the same interview.
Tim: I like Paradise Pier. I knew it would be challenging but I knew we could do it. I knew that there was something there so I had to fight. It's a fight.
It was that very spirit of holding firm to one's ideals that very well may have been Tim's undoing. In fact, most recently Tim fought hard for a truly first class version of Pirates of the Caribbean for Hong Kong Disneyland, but Jay Rasulo squashed the idea and sent him back to his room without supper.
Perhaps even more bewildering is the dismissal of Valerie Edwards, WDI's head sculptor, who had been with the company for 21 years and was a featured guest artist on the D23 webzine as recently as this August. She oversaw the creation of character sculptures for Disney parks throughout the world and just recently finished the sculpt of Barack Obama for The Magic Kingdom's Hall of Presidents.
As with Tim Delaney, she was known as a fearless champion of quality at Disney, something her mentors, master sculptor Blaine Gibson and Imagineering legend John Hench, and her father, animation artist George Edwards, would have been proud of. Judging by the emotional fallout over at WDI these past few days, her colleagues were equally proud.
Unfortunately current management saw her tenure a bit differently. Where previous management saw her value, today's leadership saw her as 'difficult'. Seems Valerie read John Lasseter's "Quality is a great business plan" memo too literally.
For the creative professionals who remain at WDI the message is both clear and ominous. Along with their feelings of loss and sadness comes a creeping fear that the company will continue to jettison those who fight for quality in order to promote those who just say, 'yes'.
Certainly filling Imagineering's halls with yes men is a bad idea, but as stated in the article we really have no idea why these two were let go. It's kind of unfair to immediately jump to this conclusion. I'm just upset two very talented and tenured people have been let go. Even in spite of Delaney's epic failure at DCA.
October 18, 2009 at 8:35:00 AM PDT /bsdb
Unfortunately along with the feelings of loss and sadness at 1401 Flower comes a creeping fear that the company will continue to jettison those who fight for quality in order to promote those who just say, 'yes'. I believe it's a combination of two things: standing up for quality projects (i.e., those which cost more in time and financial outlay) coupled with "exorbitant compensation" as defined by Burbank management (which, of course, is anything above entry-level slave wages).
Team Rasulo has been overly focused on the short-term bottom line since taking the reins from Pressler, almost seven years ago. Jay doesn't seem to understand the importance of quality in the theme parks, which shouldn't be too surprising, since he personally despises the very idea of spending time in them and cannot comprehend why so many generations the world over have readily flocked to them. Say what you will about Michael Eisner, but at least he genuinely enjoyed the theme parks and was captivated by the process of creating them. The same cannot be said of Rasulo.
The "second generation" Imagineers will continue to be shown the curb, one by one, until none of them remain. The Legacy of Walt is nothing but a marketing tool leveraged by Burbank to extract more and more disposable income from the wallets of the fanboys. There is no genuine admiration for what preceded the current regime, as there is at the newly opened Walt Disney Family Museum in San Francisco, of which I'm a proud Founding Member. Every single molecule in that establishment carries the love and respect for the man and his life, his creativity, ingenuity, and continuously displays the willingness to take risk after risk after risk to make his dreams become our reality. The only reality Iger and Rasulo seem to care about are the quarterly numbers which help determine the stock price, and as such, the worth of their options.
It makes me sick to see such major talent wasted like this. Who will train the up and coming Imagineers, and help them to become the best designers possible? Who will care for Walt's Legacy outside the borders of the new museum? This is the core of the gold mine that's maintained The Walt Disney Company's success, year in and year out. But the only thing that seems to matter to Burbank these days is to chip away at the mountain top, to extract that gold until none remains.
At the end of the day, as with any other large multinational corporate conglomerate, it all comes down to money.
INNOVIZ
Is there anyone left at this point who really thinks that Imagineering is heading for another "Golden Age?" The Disney brand used to be about quality shows. This is what set Disney apart from the other parks. This has slowly and consistently been dwindling for some time. I would point to all the instances of off the shelf solutions that were bought, and then had a Disney "overlay" applied to them. I would equate it to buying a product, removing the trademark, putting your own on it and pawning it off as yours, like a certain sea life artist did for years. This was not what Imagineering was about until this generation.
Apparently, gone are the days of true innovative designs. We are witnessing the Generation of the "Catalog Imagineers." In fairness, there are exceptions, but only a few. That is what is so frustrating to me, being a designer: witnessing the lackluster performance of this generation of Imagineers.
Sad, but it does not have to be this way.
October 18, 2009 at 10:27:00 AM PDT

Tim's failures outnumber his successes. DCA, the entrance to Disneyland's Tomorrowland. Come on, he was an expensive embarrassment to good design. Valerie, on the other hand, is hard to understand. Hopefully they gave her a consultant contract so she can continue to sculpt for the company the way Blaine did after retiring.

October 18, 2009 at 3:42:00 PM PDT

"Tim's failures outnumber his successes. DCA, the entrance to Disneyland's Tomorrowland. Come on, he was an expensive embarrassment to good design."

Anyone who would make this statement is certainly ignorant of Tim's entire Imagineering career. You're citing only two of hundreds of designs that Tim created during his three decades plus at WED/WDI. I highly recommend Disneyland Paris: From Sketch to Reality by Alain Littaye and Didier Ghez to see examples of Tim's best work (IMHO) as the show producer for Discoveryland. This book is a must-have for any serious Imagineering fan.
Adobe Launches Digital Media Store
Dec 19, 2003

Adobe Systems Incorporated has opened the Adobe Digital Media Store, an online retail site that offers Adobe Portable Document Format (PDF) digital content. Titles for immediate download from the Adobe Digital Media Store include Adobe PDF eBooks, links to digital magazines and newspapers, maps, research reports, and other documents in PDF. Consumers can find the store online at www.digitalmediastore.adobe.com, or by clicking the 'Get eBooks Online' button in Adobe Reader or Acrobat 6.0.

Adobe and its technology partner OverDrive are working with publishers including HarperCollins, Simon & Schuster, Random House, Time Warner, John Wiley, and McGraw-Hill, as well as independent authors and small publishers, to create an online store that delivers digital content for viewing on any device that supports the Adobe Reader. Newspapers and magazines available through Adobe partners on the site include BusinessWeek, Popular Science, The New York Times, U.S. News and World Report, and USA Today.

The Adobe Digital Media Store leverages the eBook capabilities of Adobe Reader 6.0. Adobe Reader offers a digital rights management (DRM) system that is designed to enable users to view content on multiple computers and PDAs featuring the Palm OS®. It also enables transparent activation and delivers electronic pages in Adobe PDF format that capture the look and feel of an actual printed page, including fonts, illustrations, and design layouts. With eBooks, digital magazines, digital newspapers, and other documents published in Adobe PDF, readers can search and scale documents to fit the viewing areas of their computers or hand-held devices.

(http://www.adobe.com), (http://www.digitalmediastore.adobe.com)
GDC 2016 | March 14-18, 2016 | Moscone Convention Center | San Francisco, California

CONFERENCE | FREE TO PLAY
Monday, March 14 & Tuesday, March 15, 2016

Over the past few years, the free-to-play (F2P) model has revolutionized the games industry. The web and mobile games ecosystems are now utterly dominated by F2P games, and highly successful F2P games have begun to sprout up on Steam and on consoles as well. Though F2P games are no longer a novelty, designers and business leaders alike are still grappling with the creative implications of a world in which gameplay and monetization are intimately intertwined, and in which games are supposed to never end. Are you struggling to design games that can stand up to the brutal competition in this space without compromising your principles? Do you despair at the mention of ever-climbing user acquisition costs? Then come to the Free to Play Summit, where leading professionals from the F2P world can offer stress relief and provide answers.

Search for all Free to Play Summit sessions | View Summit Sponsorship Opportunities

2016 HIGHLIGHTED SESSIONS

How to Get Your F2P Game Greenlit in 2016
Demetri Detsaridis (Experiment 7)
In this hour-long talk and Q&A session, veteran game developer and consultant Demetri Detsaridis (who's spent the past year on both sides of the F2P greenlighting process) will discuss what it takes to get F2P games approved and funded by studio management, publishers, and/or investors in today's environment. With examples drawn from the mobile, PC, and console spaces from the past year, we'll look at what's expected as part of an F2P game pitch, what needs to be part of a modern prototype, and how to craft a succinct and successful plan of action for this rapidly evolving market.

Social Impact: Leveraging Community for Monetization, UA and Design
Dmitri Williams (Ninja Metrics)
This session will recap the concept of the social impact of players on each other (Social Value), and then proceed to spell out which genres, mechanics and platforms are driving more or less of it. The data is presented in anonymized benchmarks from 500m+ players across all genres and platforms. Also benchmarked are acquisition sources, broken down by publisher. Attendees will see which channels and appeals bring in quality social players. The session is designed for marketing and UA teams, community managers and designers.

You're Not Alone: A Backend Technology Solution Survey
James Gwertzman (PlayFab, Inc.)
Developers no longer need to build the backend for an F2P game from scratch; nowadays there are hundreds of vendors in categories such as attribution tracking, advertising, customer service, localization, predictive analytics, A/B testing, tax calculations, profanity filtering, payment processing, and more.
This session will survey this ever-growing ecosystem, reviewing the leaders, and suggesting common scenarios that require linking technologies together. This is a "pedal meets the metal" talk on the reality of implementing an actual backend to support high-level concepts that other speakers in this summit will talk about, like boosting monetization, doing effective user acquisition, or using analytics effectively. FREE TO PLAY SUMMIT ADVISORS Steve Meretzky GSN Games It's hard to have a serious conversation about gaming without mention of Steve. It's also hard to have a humorous conversation about gaming without mention of him. Steve's contributions to the industry began in 1981 at the legendary adventure game company Infocom, where his titles included Planetfall, The Hitchhiker's Guide to the Galaxy (in collaboration with Douglas Adams), Leather Goddesses of Phobos and Zork Zero. Steve is currently vice president of creative for GSN Games, the cross-platform games division of GSN (Game Show Network). Prior to joining GSN Games, Steve was vice president of game design for Playdom, a division of Disney Interactive. Previously, Steve co-founded Boffo Games and held senior creative posts at Blue Fang, Floodgate Entertainment and WorldWinner.com, GSN's cash tournaments site. Over his prolific career, Steve also consulted with teams at Activision, Blizzard, EA, Harmonix and Hasbro, to name a few. A former board member of the IGDA, Steve organizes the Free to Play Summit at the GDC, as well as the annual Game Designers Workshop. Steve holds a Bachelor of Science in construction project management from MIT, but otherwise assures us that he did not waste his four years there. Frank Cartwright Reloaded Games Inc. As COO of Reloaded Games Inc., Frank Cartwright brings over a decade of experience in online, social, mobile, and free-to-play gaming in both the core and casual categories. Most recently, he served as SVP of Product and Platform Development at K2 Network Inc. (Gamerfirst), where he oversaw the product development of the free-to-play publishing portal GamersFirst.com, as well as the publishing platform Gamersfirst LIVE!. Prior to K2, he served as vice president of Online Entertainment at the GameShowNetwork (GSN), where he oversaw the product development, technology implementation, and administration of GSN.com, the network's emerging casual games site. Prior to GSN, he served as vice president of product development and engineering at Global Gaming League (GGL), a worldwide leader in organized, competitive online and live video game tournaments and events (V-Sports). Prior to GGL, he served as vice president of production for SkillJam Technologies, where he played an instrumental role in the company's growth and globalization efforts, handling all technical development and production. Frank began his career in gaming in 1995 at Dreamers Guild, as lead software engineer on Turner Interactive's Dinotopia PC game. From there he joined Disney Internet Group (DIG), creating dozens of Java, Flash, and Shockwave online games and building Disney's online, multiplayer gaming platform. Frank has a BS degree in Business Management. David Edery Spry Fox David Edery is the CEO of Spry Fox, the studio behind titles such as Triple Town, Steambirds, Realm of the Mad God and Road Not Taken. 
Previously, David was worldwide games portfolio manager for Microsoft's Xbox LIVE service, where he was responsible for content strategy and selecting the games that would be accepted for distribution by the LIVE service. David co-authored, "Changing the Game: How Video Games are Transforming the Future of Business" a book that explores the ways that games can be leveraged by businesses for serious purposes. Making Fun John Welch is a veteran leader of the online games industry and one of the early pioneers of digital distribution of entertainment. He is president and CEO of Making Fun, Inc., a game developer and publisher he founded in 2009 to create mobile and social titles for digital platforms. Making Fun's top titles include Mage & Minions, Dominion Online, Hidden Express and BloodRealm. These titles are produced with a combination of San Francisco and Argentina-based internal talent and partnerships with passionate game studios around the world. Prior to forming Making Fun, John was co-founder and CEO of PlayFirst, the casual games publisher famous for the best-selling "Diner Dash" brand that has touched hundreds of millions of consumers. John was responsible for the vision, financing, strategic direction and operations of the company for its first five years, as well as the production of the first Diner Dash title. PlayFirst was an innovator in bringing the full-service publishing model to casual online and downloadable games. Prior to forming PlayFirst, he spent nearly five years building Shockwave.com into one of the Internet's top game portals as the company's vice president of games and product. He was a key member of the Dreamcast Network leadership team at SEGA from 1998 to 1999. Previously, John spent several years running a small consultancy he co-founded. He began his career at Andersen Consulting. In addition to his operating role, John is a board member of Secret Builders, a creative online virtual world and network of mobile games heavily endorsed by parents and teachers. John has served in leadership roles with the International Game Developers Association, and he is a regular advisor and faculty member of the annual Game Developers' Conference. John is an active member and officer of the Golden Gate chapter of the Young Presidents' Organization (YPO). He holds degrees in mathematics and computer science from MIT and the University of Massachusetts at Amherst. Bus., Mktg. & Mgmt.
Original URL: http://www.psxextreme.com/scripts/previews6/preview.asp?prevID=12 Diablo III: Ultimate Evil Edition Scheduled release date: TBA 2014 Publisher: Developer: Genre: Number Of Players: Diablo is a well-known name in the video game industry. And after very long wait, the third installment hit the PC last year. Despite a ton of backlash from the DRM fiasco, the game received great reviews, and it eventually hit the PlayStation 3 in early September. Now, however, it’s time to look forward to the PS4 iteration, which will be called Diablo III: Ultimate Evil Edition and will include the upcoming expansion, Reaper of Souls. Boasting six distinct classes – Barbarian, Wizard, Demon Hunter, Monk, Crusader and Witch Doctor – the game spans five Acts. The world of Sanctuary is under attack and it’s your job to save it; select your class and let the old-school dungeon-crawling goodness begin! The PS4 version will utilize the touchpad functionality and various social features granted by the new Share button, and it’ll also run at 1080p and 60 frames per second. Further, we'll get the benefit of a few tweaks and bonuses that weren’t included in the original Diablo III. As for the use of the touchpad, you can swipe left and right when managing your inventory. The Reaper of Souls expansion presents players with the following storyline: Malthael, the former Angel of Wisdom, has become the Angel of Death, and he wants humanity to fall. The expansion also adds the aforementioned Crusader class, along with the new Adventure Mode and Nephalem Rifts (Loot Runs). But of course, the stories in Diablo have always been second to the gameplay, which has historically involved a whole lot of action/RPG goodness. With a traditional three-quarter view, players maneuver through a series of enemy-infested areas, taking down countless foes and getting stronger. There’s a crapload of items and equipment to find, and the diverse abilities should keep the game fresh for a very long time. The question is whether or not console gamers will take to this style of gameplay, which has been designed for the mouse/keyboard control since the franchise’s inception. Based on what we’ve heard from those who have gone hands-on with the console iterations, Diablo III actually works really well. It should be even better on the PS4 and with all the extra content included, it’ll be the definitive edition available. Also, don’t forget that the PS4 iteration will support Remote Play, so you’ll be able to keep playing on the PlayStation Vita. Maybe this is exactly the kind of game that would be great to play on a portable unit. Oh, and don’t worry if you’ve already picked up the PS3 version of the game. It was recently announced that you’ll be able to fully transfer your characters and save data to the PS4 version. Just be warned— The door does not swing both ways. Once your character has been transferred over, there’s no coming back. I’m not really sure why you’d want to return to the older version; I’m just reporting the facts. Diablo III should be great fun on the PS4, and the special additions and refinements will make it all the more appealing. There’s still room for the old-fashioned gameplay formats that continue to work beautifully, don’t you think? 11/13/2013 Ben Dutka
“...That was the original idea from day one - the elimination of the tweening process. But it is certainly not the only feature of Synfig that makes it unique. In addition to eliminating the tweening process, I also wanted Synfig to be used for pretty much every part of production except story-boarding and editing.” Robert Quattlebaum - OSNews News ArchiveFeaturesHistoryGalleryPressSupport the Development History Old Professional Bio The following is a rather old professional bio of Synfig's lead engineer, but it is a good overview of how Synfig came to be. Robert Quattlebaum (darco) was Synfig's lead software engineer. He has invested three years of his life and a substantial portion of his net worth into the software and the company he founded, Voria Studios. Robert has always had a passion for computers and a talent for engineering. While in middle school, Robert taught himself not only how to use them effectively but also how to program them. In high school, Robert purchased the Sony® Net Yaroze hobbyist PlayStation® development kit, and began developing a handful of PlayStation® games, including a multi-player 3D mech battle game called Blaze of Glory. After he graduated high school he attended the DigiPen Institute of Technology, a video game programming and design school located in Redmond, Washington. During his attendance, he was widely considered to be one of the best engineers of his class by his peers and was widely respected for his ability to engineer strong, clean code. DigiPen exposed Robert to a multitude of new ideas and experiences, not all of which were directly related to software engineering or video games. Watching and enjoying anime became an enjoyable pastime. Toward the end of his sophomore year, Robert began to ponder what kind of animation software would be used for the production of anime, and 2D animation in general. When he asked some of his animator friends how such software actually worked, he was surprised to find out how clumsy it was. This got him to thinking about how he would do it differently. Robert came up with an idea for how he thought such software should work—the ideal solution. After explaining the concepts to his animator friends and a handful of teachers, he concluded that the development of the software might be a worthwhile venture. Having already completed his requirements for his Associates degree, Robert left DigiPen to begin full-time development on what would later become Synfig. After a year and a half of full-time software development, Robert founded Voria Studios, LLC, an animation studio that would utilize the tools he had created to give it a competitive edge in animation production. The company's first production, Prologue, was demonstrated at AnimeExpo 2004 and ComicCon 2004. Even though Prologue was a fairly primitive animation, the response received was quite positive. However, burdened with the tasks of software development, business management, marketing, and business networking, Robert was stretched thin. Despite some valiant attempts to get clients, Voria Studios, LLC shut down it's full time operations on December 10th, 2004. Nevertheless, this was not the end of Voria nor Synfig. Unlike many other companies in similar positions, Robert realized that Voria was unique in that it had a product—the animation software which he had been developing over the past two and a half years. It has really been the company's strongest asset all along. 
Robert has few regrets over the past 3 years, and considers it to have been an extensive real-world education which far exceeds what he would have received if he had continued working on his bachelors degree. Robert ended up licensing Synfig under the GNU GPL and turning it over to the free software community to develop and use. Origins of the Synfig Logo For information about how the Synfig logo (previously the Voria logo) was created, please read darco's blog entry entitled "Making the Voria Logo". Origins of the Name It was originally called SINFG, a recursive acronym for "SINFG Is Not A Fractal Generator", referring to the fact that the software was capable of generating some stunning fractal imagery in addition to animation. I named it this obscure name because I find it exceedingly difficult to work on a project without a name, and I really wanted to just get started—it was the first thing that came to mind. In late 2004 it was starting to become obvious that our company was running out of runway. I came to grips with the fact that our most valuable asset wasn't our animation production capabilities but rather our software, so I started trying to come up with a new more marketable name. My favorite name I came up was "Revolic", but this was effectively vetoed by our lead animator, who insisted that it sounded like the name of a libido drug. She was always very adamant about liking the name it had always had, so I came up with a compromise: I'll make it sound the same and only change the spelling. She agreed that this was acceptable, and that's why it's now known as Synfig Studio instead of Revolic Studio. As for all of you who thought it stood for "Synthetic Figure", well, now you know better. © 2015 Synfig Studio Development Team. All rights reserved. Sign In to Edit this Site
Original URL: http://www.theregister.co.uk/2010/12/01/gnu_savannah_hacked/

Free software repository brought down in hack attack
Got root?
1st December 2010 01:55 GMT

The main source-code repository for the Free Software Foundation has been taken down following an attack that compromised some of the website's account passwords and may have gained unfettered administrative access.

The SQL-injection attacks on GNU Savannah exploited holes in Savane, the open-source software hosting application that was spun off from SourceForge, Matt Lee, a campaigns manager for the Free Software Foundation, told The Register. The attackers were then able to obtain the entire database of usernames and hashed passwords, some of which were decrypted using brute-force techniques.

Project managers took GNU Savannah offline on Saturday, more than 48 hours after the attack occurred. They expect to bring the site back online on Wednesday, although they're not guaranteeing it will be fully functional. Out of an abundance of caution, restored data will come from a backup made on November 24, prior to the compromise. Lee said there's no reason to believe any of the source code hosted on the site was affected by the breach.

“Version control systems that are in place for these projects actually would show a red flag in terms of any changes that they made, and we've not seen that, so we believe there's no issue there,” he explained. What's more, there's no indication that the FTP server used to actually transfer source code was compromised, he said.

The attackers used their access to add a hidden static HTML file to a CVS repository and a webpage that defaced the gnu.org home page. After finding a directory that was erroneously set to execute PHP scripts, the hackers also committed a PHP reverse shell script.

“They then proceeded to try a ton of root kits on the gnu.org webserver,” according to a time line provided by Lee. “We don't think they succeeded in getting root, but they may have.”

Project managers spent much of the weekend restoring the GNU website to its original state. Even after those steps were begun, the members discovered “that the cracking activity had resumed on www.gnu.org through PHP reverse shells running as user www-cvs,” the timeline said. “Realizing that the problem was much worse than we assumed at first, we immediately isolated the Savannah cluster and the GNU website from the network and start[ed] a deeper analysis.”

Managers said that all unsalted MD5 passwords stored on Savannah should be considered compromised and will have to be reset before the accounts can be re-enabled. The encrypted password scheme will also be upgraded to Crypt-MD5 (/etc/shadow's), and user password strength will be checked.

Lee said that Savane was already in the midst of an overhaul before the attack. It being open-source software that anyone can audit, one might have expected the SQL injection vulnerability to have been discovered and fixed long ago. To be fair, GNU.org is by no means the only popular open-source project to have been ransacked by hackers. Over the past 13 months, the heavily fortified website for the Apache Software Foundation has been breached twice. ®
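For readers unfamiliar with the distinction the Savannah admins are drawing, here is a minimal illustrative sketch, not Savannah's actual code, of why a leaked database of unsalted MD5 hashes falls quickly to brute force and what moving to salted Crypt-MD5 ("$1$..." hashes, as in /etc/shadow) changes. The passwords and wordlist are made up, and the example relies on Python's Unix-only crypt module, which is deprecated in recent Python releases.

```python
import crypt      # Unix-only; deprecated in recent Python releases
import hashlib

def hash_unsalted_md5(password):
    # Every account with the same password gets the same hash, so a single
    # precomputed dictionary or rainbow table cracks all of them at once.
    return hashlib.md5(password.encode("utf-8")).hexdigest()

def hash_crypt_md5(password):
    # Crypt-MD5 ("$1$...") mixes in a random salt, so identical passwords
    # hash differently and each hash must be attacked separately.
    return crypt.crypt(password, crypt.mksalt(crypt.METHOD_MD5))

def brute_force_unsalted(target_hash, wordlist):
    # Dictionary attack against an unsalted hash: one cheap MD5 per guess.
    for guess in wordlist:
        if hash_unsalted_md5(guess) == target_hash:
            return guess
    return None

if __name__ == "__main__":
    leaked = hash_unsalted_md5("savannah123")      # hypothetical leaked hash
    print(brute_force_unsalted(leaked, ["gnu", "hurd", "savannah123"]))
    print(hash_crypt_md5("savannah123"))           # e.g. "$1$<salt>$<digest>"
```

Salting does not make a single weak password safe, but it does prevent an attacker from cracking an entire user table with one pass over a precomputed table, which is the scenario the Savannah team is guarding against.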
Ten Years of Instapundit

Everybody has their story of how they discovered the Blogosphere; for lots of people, it was via Instapundit.com, which turned ten years old this week. Here's my take, a visit to the Jurassic days of the early Blogosphere.

Ten years ago, when I was making my living as a freelance writer, and writing four to six articles a month for magazines in various fields -- back then mostly "on dead tree," I had only just started to write for political Websites. I had submitted an article on the Mies van der Rohe exhibition then ongoing at New York's Museum of Modern Art to National Review Online, and then followed up with an article on the Computer History Museum, then at Moffett Field in northern California. I was always doing Google vanity searches on my name, to see who was linking to my articles online.

Shortly after the piece on the Computer History Museum went up at NRO, I found it had been linked to by something or someone called "Instapundit." I had seen Weblogs before, but they were always of the "I went to the mall and bought a great pair of Nikes" or "I had a really great date at Applebee's last night" variety of daily diaries.

And I had seen self-published e-zines, in the form of Virginia Postrel's Dynamist.com, KausFiles, and maybe Andrew Sullivan in whatever incarnation he was then currently in, plus of course the self-published Drudge Report, and had thought about launching a Website of my own, but these looked like they were beyond my then-meager Web skills. Designing a page template? FTP'ing up new pages every day? I didn't know of any programs that automated that sort of thing.

But what set Instapundit apart, at the time, was that it was on Blogger. In fact, as Glenn Reynolds mentions in his new video at PJTV celebrating the tenth anniversary of his pioneering blog, his original URL was indeed instapundit.blogspot.com.

That little Blogger Button in the corner of Glenn's Weblog made all the difference. It suddenly became obvious that the platform of Blogger.com and the content it held were two very different things. While the vast majority of blogs on Blogger.com's Blogspot hosting site were daily diaries, in reality, a blog could be anything.

And it helped that Glenn picked a catchy name for his nascent enterprise. As marketing gurus Al Ries and Jack Trout once wrote, there's a reason why we remember Apple as the first personal computer, and not the Altair 8800 or the IMSAI 8080. Because Apple had the name that made computing sound simple, easy to learn, and reliable, and not something you needed Wernher von Braun and Stanley Kubrick to walk you through. Similarly, the name Instapundit instantly explained the purpose of this new Website. Want news? Want opinion? Want it fast? Who doesn't, in the age of the World Wide Web? Well, this is your Website.

Once I saw the short "hit and run" style of Instapundit, the light bulb went off for me, as it did for hundreds, possibly thousands of other would-be bloggers back then: you could point readers to a story, and interject a short comment, but you needn't hold yourself out as an expert on a particular topic. You were essentially an Internet traffic cop, directing traffic to the hot story of the moment, and blowing the whistle on those stories where the journalist got it wrong.
And unlike a magazine article, which typically is of a fixed word count to fit into an existing page space in-between advertisements, a blog post could be any length, as we've seen from Glenn's short one-sentence (occasionally even one-word) posts, to the 5,000-word essays that Steven Den Beste routinely used to post in the first half of the previous decade. Or a blog could be devoted primarily to photos or video.

In other words, it was immediately obvious there was a whole new freeform style that had opened up, when I clicked on Instapundit around September 3rd or 4th of 2001.

And then the next week, the world changed. As Bryan Preston writes at the Tatler:

It's hard to believe it's been 10 years since Glenn Reynolds started InstaPundit.com. His blog was the first I ran across in the chaos of 9-11, and I was instantly hooked by his calm, reasonable, patriotic and liberty-focused take on the horrors of that day, and the way and speed with which he assembled opinion and reaction from all over the world. The way he dissected and destroyed media memes was a lifeline to sanity. InstaPundit was a revelation to me. Later I would start my own blog, JunkYardBlog, inspired and led by Glenn's work. Thousands of other bloggers out there have been similarly impacted and inspired by Glenn Reynolds, and millions of readers have too. Glenn Reynolds is the blogfather to the blogosphere itself, among the right and libertarian blogs.

Right from the start, Glenn's list of permalinked Weblogs was worth clicking on in and of itself, just to see who was out there in this new world of journalism.

In early 2002, as I was planning to launch Ed Driscoll.com, originally simply to promote my magazine articles, I decided to use the Blogger.com interface to allow for easy access of the site, but with a different color scheme to differentiate myself from Glenn. (The hat design, based on a Trilby I had picked up in London in the summer of 2000, and swanky '50s font came a couple of years later, when I commissioned Stacy Tabb to update my Weblog.)

Around that time, I wrote an article for a high-tech libertarian-themed Website called Spintech on the birth of the Blogosphere, which was later republished by Catholic Exchange, whom I discovered when they republished my NRO articles as part of their reciprocal arrangement with National Review. Hopefully it gives you a sense of the early freewheelin' days of the Blogosphere:

Ground Zero for all of these textual shenanigans is Blogger.com, the most well known of several providers of free software that allows even the technically and artistically incompetent to present their ideas in a pleasing and easy to follow format. It also provides instructions, encouragement and its own awards. It's like a film school, a camera store and the Academy of Motion Picture Arts and Science all rolled up in one place…for bloggers.

When the Web log concept first debuted, it was largely used for on-line personal diaries. Lots of "day in the life" stuff; lots of updates of family information; lots of photographs of nature and birthday parties; lots of nice pretty, stopping and smelling the flowers commentary by assorted emotional exhibitionists. And this is still a common use for Web logs.

Then September 11th happened.

One interesting byproduct of that awful day was that the servers on most major news sites (CNN, The New York Times, etc.) were blown out from over capacity.
Since a big chunk of America either didn't go into work, or left early that day, they went home, turned the TV on, fired up the computer, and wanted to know just…what…the…heck…was…going…on.

But with the Web sites of news biggies jammed to capacity, some people started going to alternative sites. Little funky one-man or one-woman sites. And some of those men and women, such as Virginia Postrel on her page, The Scene, and Glenn Reynolds at Instapundit.com, spent the day keeping the nation, hell, the world, just as informed as the traditional news sites people couldn't get into.

Then, as the dust settled, that hoary old standby — media bias — started rearing its ugly head again, especially in newspapers, where the reporters seemed to pull out style guides left over from the Tet Offensive. Quagmire! Failure! Evil imperialism! The brutal Afghan winter! Remember the Soviets!

Seeking opinions and news that didn't seem to be outtakes from the Johnson years, many, many people stuck with the bloggers. And sometimes it seems that just as many people saw how much fun the bloggers were having and decided to get into the act themselves.

"Sgt. Stryker" (complete with a photo of John Wayne in full leatherneck regalia) is the nom de blog of a U.S. Air Force Mechanic ("to prevent being 'called onto the carpet' by anyone in my immediate chain of command."). He says, "I stumbled across InstaPundit and thought to myself, 'Hey, I can do this!' I followed the Blogger link on InstaPundit's site and set up my own weblog, thus killing two birds with one stone. I had a website I could point my friends to, and I could 'talk back' to the news in a more quiet manner which helped ensure domestic tranquility," with his wife, who by then was sick of the Sarge talking back to the news on TV.

One reason Sgt. Stryker may have been so eager to give his views about September 11th and our efforts at payback is "the impression the press tends to give of the military is of a monolithic and impersonal force, but if somebody stumbles upon my site, perhaps they can see that there are real, normal human beings who are doing all this stuff. When you read my site, you get a good idea of what some of us think and say when there are no reporters or Public Affairs Officers around."

In contrast, Joanne Jacobs is an ex-San Jose Mercury columnist who left the paper in late 2000 to write a book about a charter school in San Jose. She started her Web log after being inspired by Mickey Kaus, Andrew Sullivan and Virginia Postrel (all three of whom were part of the first generation of bloggers, dating back to the Jurassic blogging days of the late 1990s). Most of her blogging was on the state of America's education system, until September 11th. Then a good bit of her coverage shifted, not surprisingly, to the terrorists and our response to them. "I never meant to do a warblog," she says. "I simply had strong feelings that my country had been attacked and should be defended — militarily and in the field of ideas."

The Blogosphere has become much more crowded since then, and much slicker. Old media, which at first routinely trashed blogs (for self-described "progressives," progress somehow always seems to throw them), now incorporates them into their Web sites, both to allow faster publishing of news, and as a place for opinion (read: liberal bias) beyond the homepage.
And blogging now incorporates audio, video, photography, Twitter, and whatever new form of media comes along next week.

But first, somebody had to be the revolution -- congrats to Ten Years of Instapundit: http://pjmedia.com/eddriscoll/2011/8/12/ten-years-of-instapundit
Jeremy Sigmon
AP-BDC Certified, AP-Homes Certified
Director, Technical Policy, [email protected]

Since early 2007, Jeremy has been working with USGBC and its community of advocates on advancing green building policy at the state and local levels. In that time he has invested much of his time elevating USGBC's focus and profile on greening the building codes. In his work with building science and regulatory experts across the country, Jeremy has worked strategically on green building code development, adoption and implementation in local, state and national forums. His work was critical to the landmark partnership between a host of building industry organizations on the International Green Construction Code and Standard 189.1. Jeremy works closely with many well-established building energy code advocacy organizations, continually advancing a more integrated approach to sustainability through building regulatory reform. He has charted USGBC's work on greening the building codes, was a key contributor to EPA's Sustainable Design and Green Building Toolkit, is an active contributor to the USGBC blog, and is published in Living Architecture Monitor, EDC Magazine, ACEEE and the ASHRAE Journal.

Jeremy manages USGBC's grassroots advocacy network across all 50 states and nearly 80 chapter organizations to advance USGBC's mission of green buildings for all within a generation. In his work in state capitals and countless local jurisdictions, Jeremy makes connections where they matter most – between lawmakers and green building experts in their communities.

Prior to joining USGBC, Jeremy was a project manager for altPOWER, Inc., a renewable energy contracting firm based in New York City and London, where he interfaced with developers, contractors, building officials and inspectors, and building owners, and oversaw the installation of over 1.5 MW of photovoltaic projects. He also taught English in French Guiana, and French at his alma mater in St. Louis. Jeremy earned his bachelor of arts in political science and French from Washington University in St. Louis.

Expert topic: Advocacy
Marti Hearst | Cambridge University Press | 2009

From the book Search User Interfaces, published by Cambridge University Press. Copyright © 2009 by Marti A. Hearst.

Ch. 6: Query Reformulation

And as noted in Chapter 3, a common search strategy is for the user to first issue a general query, then look at a few results, and if the desired information is not found, to make changes to the query in an attempt to improve the results. This cycle is repeated until the user is satisfied, or gives up. The previous two chapters discussed interfaces for query specification and presentation of search results. This chapter discusses the query reformulation step.

6.1: The Need for Reformulation

Examination of search engine query logs suggests a high frequency of query reformulation. One study by Jansen et al., 2005 analyzed 3 million records from a 24 hour snapshot of Web logs taken in 2002 from the AltaVista search engine. (The search activity was partitioned into sessions separated by periods of inactivity, and no effort was made to determine if users searched for more than one topic during a session. 72% of the sessions were less than five minutes long, and so one-topic-per-session is a reasonable, if noisy, estimate.) The analysis found that the proportion of users who modified queries was 52%, with 32% issuing 3 or more queries within the session. Other studies show similar proportions of refinements, thus supporting the assertion that query reformulation is a common part of the search process.

Good tools are needed to aid in the query formulation process. At times, when a searcher chooses a way to express an information need that does not successfully match relevant documents, the searcher becomes reluctant to radically modify their original query and stays stuck on the original formulation. Hertzum and Frokjaer, 1996 note that at this point “the user is subject to what psychologists call anchoring, i.e., the tendency to make insufficient adjustments to initial values when judging under uncertainty”. This can lead to “thrashing” on small variations of the same query. Russell, 2006 remarks on this kind of behavior in Google query logs. For example, for a task of “Find out how many people have bought the new Harry Potter book so far”, he observes the following sequence of queries for one user session:

Harry Potter and the Half-Blood Prince sales
Harry Potter and the Half-Blood Prince amount sales
Harry Potter and the Half-Blood Prince quantity sales
Harry Potter and the Half-Blood Prince actual quantity sales
Harry Potter and the Half-Blood Prince sales actual quantity
Harry Potter and the Half-Blood Prince all sales actual quantity
all sales Harry Potter and the Half-Blood Prince
worldwide sales Harry Potter and the Half-Blood Prince

In order to show users helpful alternatives, researchers have developed several techniques to try to aid in the query reformulation process (although existing tools may not be sophisticated enough to aid the user with the information need shown above.) This chapter describes interface technologies to support query reformulation, and the ways in which users interact with them.

6.2: Spelling Suggestions and Corrections

Search logs suggest that from 10-15% of queries contain spelling or typographical errors (Cucerzan and Brill, 2004). Fittingly, one important query reformulation tool is spelling suggestions or corrections.
Web search engines have developed highly effective algorithms for detecting potential spelling errors (Cucerzan and Brill, 2004, Li et al., 2006). Before the web, spelling correction software was seen mainly in word processing programs. Most spelling correction software compared the author's words to those found in a pre-defined dictionary (Kukich, 1992), and did not allow for word substitution. With the enormous usage of Web search engines, it became clear that query spelling correction was a harder problem than traditional spelling correction, because of the prevalence of proper names, company names, neologisms, multi-word phrases, and very short contexts (some spelling correction algorithms make use of the sentential structure of text). Most dictionaries do not contain words like blog, shrek, and nsync. But with the greater difficulty also came the benefit of huge amounts of user behavior data. Web spelling suggestions are produced with the realization that queries should be compared to other queries, because queries tend to have special characteristics, and there is a lot of commonality in the kinds of spelling errors that searchers make. A key insight for improving spelling suggestions on the Web was that query logs often show not only the misspelling, but also the corrections that users make in subsequent queries. For example, if a searcher first types schwartzeneger and then corrects this to schwartzenegger, if the latter spelling is correct, an algorithm can make use of this pair for guessing the intended word. Experiments on algorithms that derive spelling corrections from query logs achieve results in the range of 88-90% accuracy for coverage of about 50% of misspellings (Cucerzan and Brill, 2004, Li et al., 2006). For Web search engine interfaces, one alternative spelling is typically shown beneath the original query but above the retrieval results. The suggestion is also repeated at the bottom of the results page in case the user does not notice the error until they have scrolled through all of the suggested hits. As noted in Chapter 1, in most cases the interface offers the choice to the user without forcing an acceptance of an alternative spelling, in case the system's correction does not match the user's intent. But in the case of a blatantly incorrect typographical error, a user may prefer the correction to be made automatically to avoid the need for an extra click. To balance this tradeoff, some search engines show some hits with their guess of the correct spelling interwoven with others that contain the original, most likely incorrect spelling. There are no published large-scale statistics on user uptake of spelling correction, but a presentation by Russell, 2006 shows that, for those queries that are reformulations, and for which the original query consisted of two words, 33% of the users making reformulations used the spelling correction facility. For three-word query reformulations, 5% of these users used the spelling suggestion. In an in-person study conducted with a statistically representative subject pool of 100 people, Hargittai, 2006 studied the effects of typographical and spelling errors. (Here typographical means that the participant knows the correct spelling but made a typing mistake, whereas spelling error means the participant does not know the correct spelling.) Hargittai, 2006 found that 63% of the participants made a mistake of some kind, and among these, 35% made only one mistake, but 17% made four or more errors during their entire session. 
As might be predicted, lower education predicted higher number of spelling errors, but an interesting finding was that the higher the participant's income, the more likely they were to make a typographical error. Older participants were more also likely to make spelling errors. The most surprising result, however, was that of the 37 participants who made an error while using Google search, none of them clicked on the spelling corrections link. This would seem to contradict the statistics from Russell, 2006. It may be the case that in Hargittai's data, participants made errors on longer queries exclusively, or that those from a broader demographic do not regularly make use of this kind of search aid, or that the pool was too small to observe the full range of user behavior. 6.3: Automated Term Suggestions The second important class of query reformulation aids are automatically suggested term refinements and expansions. Spelling correction suggestions are also query reformulation aids, but the phrase term expansion is usually applied to tools that suggest alternative words and phrases. In this usage, the suggested terms are used to either replace or augment the current query. Term suggestions that require no user input can be generated from characteristics of the collection itself (Schütze and Pedersen, 1994), from terms derived from the top-ranked results (Anick, 2003, Bruza and Dennis, 1997), a combination of both (Xu and Croft, 1996), from a hand-built thesaurus (Voorhees, 1994, Sihvonen and Vakkari, 2004), or from query logs (Cui et al., 2003, Cucerzan and Brill, 2005, Jones et al., 2006) or by combining query logs with navigation or other online behavior (Parikh and Sundaresan, 2008). Usability studies are generally positive as to the efficacy of term suggestions when users are not required to make relevance judgements and do not have to choose among too many terms. Some studies have produced negative results, but they seem to stem from problems with the presentation interface. Generally it seems users do not wish to reformulate their queries by selecting multiple terms, but many researchers have presented study participants with multiple-term selection interfaces. For example, in one study by Bruza et al., 2000, 54 participants were exposed to a standard Web search engine, a directory browser, and an experimental interface with query suggestions. This interface showed upwards of 40 suggested terms and hid results listing until after the participant selected terms. (The selected terms were conjoined to those in the original query.) The study found that automatically generated term suggestions resulted in higher average precision than using the Web search engine, but with a slower response time and the penalty of a higher cognitive load (as measured by performance on a distractor task). No subjective responses were recorded. Another study using a similar interface and technology found that users preferred not to use the refinements in favor of going straight to the search results (Dennis et al., 1998), underscoring the search interface design principle that search results should be shown immediately after the initial query, alongside additional search aids. 6.3.1: Prisma Interfaces that allow users to reformulate their query by selecting a single term (usually via a hyperlink) seem to fare better. Anick, 2003 describes the results of a large-scale investigation of the effects of incorporating related term suggestions into a major Web search engine. 
The term suggestion tool, called Prisma, was placed within the AltaVista search engine's results page (see Figure 6.1). The number of feedback terms was limited to 12 to conserve space in the display and minimize cognitive load. Clicking on a hyperlink for a feedback term conjoined the term to the current query and immediately ran a new query. (The chevron ( >>) to the right of the term replaced the query with the term, but its graphic design did not make it clearly clickable, and few searchers used it.) Term suggestions were derived dynamically from an analysis of the top-ranked search results. Figure 6.1: Illustration of Prisma term suggestions from (Anick, 2003) . The study created two test groups by serving different Web pages to different IP addresses (using bucket testing, see Chapter 2). One randomly selected set of users was shown the Prisma terms, and a second randomly selected set of users was shown the standard interface, to act as a control group. Analysis was performed on anonymized search logs, and user sessions were estimated to be bursts of activity separated by 60 minutes of no recorded activity. The Prisma group was shown query term refinements over a period of five days, yielding 15,133 sessions representing 8,006 users. The control group included 7,857 users and 14,595 sessions. Effectiveness of the query suggestions was measured in terms of whether or not a search result was clicked after the use of the mechanism, as well as whether or not the session ended with a result click. In the Prisma group, 56% of sessions involved some form of refinement (which includes manual changes to the query without using the Prisma suggestions), compared to 53% of the control group's sessions, which was a significant difference. In the Prisma condition, of those sessions containing refinements: 25% of the sessions made use of the Prisma suggestions, 16% of the users applied the Prisma feedback mechanism at least once on any given day, When studied over another two weeks, 47% of those users used Prisma again within the two week window, and over that period, the percentage of refinement sessions using the suggestions increased from 25% to 38%. Despite the large degree of uptake, effectiveness when measured in the occurrence of search results clicks did not differ between the baseline group and the Prisma group. However, the percentage of clicks on Prisma suggestions that were followed immediately by results clicks was slightly higher than the percentage of manual query refinements followed immediately by results clicks. This study also examined the frequency of different refinement types. Most common refinements were: Adding or changing a modifier (e.g., changing buckets wholesale to plastic buckets): 25% Elaborating with further information (e.g., jackson pollock replaced by museum of modern art): 24% Adding a linguistic head term (e.g., converting triassic to triassic period): 15% Expressing the same concept in a different way (e.g., converting job listings to job openings): 12% Other modifications (e.g., replacing with hyponyms, morphological variants, syntactic variants, and acronyms): 24%. 6.3.2: Other Studies of Term Suggestions In a more recent study, White et al., 2007 compared a system that makes term suggestions against a standard search engine baseline and two other experimental systems (one of which is discussed in the subsection below on suggesting popular destinations). Query term suggestions were computed using a query log. 
For each query, queries from the log that contained the query terms were retrieved. These were divided into two sets: the 100 most frequent queries containing some of the original terms, and the 100 most frequent of queries that followed the target query in query logs -- that is, user-generated refinements. These candidates were weighted by their frequency in each of the two sets, and the top-scoring six candidates were shown to the user after they issued the target query. Suggestions were shown in a box on the top right hand side of the search results page. White et al., 2007 conducted a usability study with 36 participants, each doing two known-item tasks and two exploratory tasks, and each using the baseline system, the query suggestions, and two other experimental interfaces. For the known-item tasks, the query suggestions scored better than the baseline on all measures (easy, restful, interesting, etc). Participants were also faster using the query suggestions over the baseline on known item tasks (although tied with one experimental system), and made use of the query suggestions 35.7% of the time. For those who preferred this query suggestion interface, they said it was useful for saving typing effort and for coming up with new suggestions. (The experimental system for suggesting destinations was more effective and preferred for exploratory tasks.) In the BioText project, Divoli et al., 2008 experimented with alternative interfaces for terms suggestions in the specialized technical domain of searching over genomics literature. They focused specifically on queries that include gene names, which are commonly used in bioscience searches, and which have many different synonyms and forms of expression. Divoli et al., 2008 first issued a questionnaire in which they asked 38 biologists what kind of information they would like to see in query term suggestions, finding strong support for gene synonyms and homologues. Participants were also interested in seeing information about genes associated with the target gene, and localization information for genes (where they occurs in organisms). It should be noted that a minority of participants were strongly opposed to showing additional information, unless it was shown as an optional link, in order to retain an uncluttered look to the interface. A followup survey was conducted in which 19 participants from biology professions were shown four different interface mock-ups (see Figure 6.2). The first had no term suggestions, while the other three showed term suggestions for gene names, organized into columns labeled by similarity type (synonyms, homologues, parents, and siblings of the gene). Because participants had expressed a desire for reduced clutter, at most three suggestions per columns were shown, with a link to view all choices. (a)(b) Figure 6.2: Term suggestion interface mock-ups from (Divoli et al., 2008) . (a) Design 3 (b) Design 4 (see text for details). Design 2 required selection of the choices by individual hyperlink, with an option to add all terms. Design 3 allowed the user to select individual choices via checkboxes, and Design 4 allowed selecting of all terms within a column with a single hyperlink. Design 3 was most preferred, with one participant suggesting that the checkbox design also include a select all link within each column. Designs 4 and 2 were closely rated with one another, and all were strongly preferred over no synonym suggestions. 
These results suggest that for specialized and technical situations and users, term suggestions can be even more favored than in general Web search. 6.3.3: Query Refinement Suggestions in Web Search Interfaces The results of the Anick, 2003 and the White et al., 2007 studies are generally positive, and currently many Web search engines offer term refinement. For example, the Dogpile.com metasearch engine shows suggested additional terms in a box on the right hand side under the heading “Are you looking for?” (see Figure 6.3). A search on apple yields term suggestions of Apple the Fruit (to distinguish it from the computer company and the recording company), Banana, Facts about Apples, Apple Computers, Red Apple and others. Selecting Apple the Fruit retrieves Web pages that are about that topic, and the refinements change to Apple Varieties, Apple Nutrition, History Fruit Apple, Research on Fruit, Facts about the Fruit Apple, and others. Clicking on Facts about the Fruit Apple retrieves web pages containing lists of facts. The Microsoft search site also shows extensive term suggestions for some queries. For instance, a query on the ambiguous term jets yields related query suggestions including Jet Magazine, Jet Airways, JetBlue, Fighter Jets, Jet Li and Jet Stream (see Figure 5.8 in Chapter 5). Figure 6.3: Illustration of term suggestions from Dogpile.com, 2008 InfoSpace, Inc. All rights reserved. Jansen et al., 2007b studied 2.5M interactions (1.5M of which were queries) from a log taken in 2005 from the Dogpile.com search engine. Using their computed session boundaries (mean length of 2.31 queries per session), they found that more than 46% of users modified their queries, 37% of all queries were parts of reformulations, and 29.4% of sessions contained three or more queries. Within the sessions that contained reformulated queries, they found the following percentage of actions for query modifications (omitting statistics for starting a new topic): Assistance (clicked on a link offered by the question Are you Looking For?, which are term refinements): 22.2% Reformulation (the current query is on the same topic as the searcher's previous query, and shares one or more common terms with it): 22.7% Generalization (same topic, but seeking more general information): 7.2% Specialization (same topic, but seeking more specific information): 16.3% Content Change (identical query, but run on a different collection): 11.8% Specialization with reformulation: 9.9% Generalization with reformulation: 9.8% (Here, collections refer to Web pages versus searching images, videos, or audio data.) Thus, they found that 8.4% of all queries were generated by the reformulation assistant provided by Dogpile (see Figure 6.3), although they do not report on what proportion of queries were offered refinements. This is additional evidence that query term refinement suggestions are a useful reformulation feature. A recent study on Yahoo's search assist feature (Anick and Kantamneni, 2008) found similar results; the feature was used about 6% of the time. 6.4: Suggesting Popular Destinations White et al., 2007 suggested another kind of reformulation information: showing popular destination Web sites. They recorded search activity logs for hundreds of thousands of users over a period of five months in 2005--2006. These logs allowed them to reconstruct the series of actions that users made from going to a search engine page, entering a query, seeing results, following links, and reading web pages. 
They determined when such a session trail ended by looking for a stoppage, such as staying on a page for more than 30 minutes, or a change in activity, such as switching to email or going to a bookmarked page. They distinguished session trails from query trails; the latter had the same stopping conditions as the former, but could also be ended by a return to a search engine page. Thus they were able to "follow" users along as they performed their information-seeking tasks.

Figure 6.4: Query trail destination suggestions, from White et al., 2007.

White et al., 2007 found that users generally browsed far from the search results page (around 5 steps), and that on average, users visited 2 unique domains during the course of a query trail, and just over 4 domains during a session trail. They decided to use the information about which page the users ended up at as a suggestion for a shortcut for a given query. Given a new query, its statistical similarity to previously seen query-destination pairs was computed, and popular final destinations for that query were then shown as a suggested choice (see Figure 6.4). They experimented with suggestions from both query trails and session trails.

In the same study of 36 participants, they compared these two experimental approaches against a standard search engine baseline and a query suggestions interface, testing on both known-item tasks and exploratory tasks. For exploratory tasks, the destination suggestions from the query trails scored better than the other three systems on perceptions of the search process (easy, restful, interesting, etc.) and on usefulness (perceived as producing more useful and relevant results). The task completion time on exploratory tasks was approximately the same for all four interfaces; the destination suggestions were tied in terms of speed with query term suggestions in known-item tasks. In exploratory tasks, query trail destination suggestions were used more often (35.2% of the time) than query term suggestions and session trail destination suggestions.

Participants who preferred the destination suggestions commented that they provided potentially helpful new areas to look at, and allowed them to bypass the need to navigate to pages. They suggested that destinations were selected because they "grabbed their attention," "represented new ideas," or users "couldn't find what they were looking for." Those who did not like the suggestions cited the vagueness of showing only a Web site; presumably augmenting the destination views with query-biased summaries would make them more useful. The destination suggestions produced from session trails were sometimes very good, but were inconsistent in their relevance, a characteristic which is usually perceived negatively by users. The participants did not find the graphical bars indicating site popularity to be useful, mirroring other results of this kind.

6.5: Relevance Feedback

Another major technique to support query reformulation is relevance feedback. In its original form, relevance feedback refers to an interaction cycle in which the user reads retrieved documents and marks those that appear to be relevant, and the system then uses features derived from these selected relevant documents to revise the original query (Ruthven and Lalmas, 2003). In one variation, the system uses information from the marked documents to recalculate the weights for the original query terms, and to introduce new terms.
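As an illustration of this first variation, the sketch below recomputes term weights from judged documents in the spirit of the classic Rocchio formulation. The chapter does not specify which reweighting scheme the systems cited here used, so the vector representation, the alpha/beta/gamma parameters, and the function name are assumptions made for the example rather than a description of any particular system.

```python
from collections import defaultdict

def reweight_query(query_vec, relevant_docs, nonrelevant_docs,
                   alpha=1.0, beta=0.75, gamma=0.15):
    """Revise a query's term-weight vector from judged documents.

    query_vec and each document are dicts mapping term -> weight (e.g., tf-idf).
    Terms that gain weight from the relevant documents are effectively
    "introduced" into the query; terms pushed negative are dropped.
    """
    revised = defaultdict(float)

    # Keep a weighted copy of the original query terms.
    for term, weight in query_vec.items():
        revised[term] += alpha * weight

    # Move the query toward the centroid of the documents marked relevant.
    if relevant_docs:
        for doc in relevant_docs:
            for term, weight in doc.items():
                revised[term] += beta * weight / len(relevant_docs)

    # Optionally move it away from documents judged non-relevant.
    if nonrelevant_docs:
        for doc in nonrelevant_docs:
            for term, weight in doc.items():
                revised[term] -= gamma * weight / len(nonrelevant_docs)

    # Negative weights are usually clipped rather than kept.
    return {term: w for term, w in revised.items() if w > 0}
```

Pseudo-relevance feedback, discussed below, performs the same kind of computation but simply treats the top-ranked documents as if the user had marked them relevant.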
In another variation, the system suggests a list of new terms to the user, who then selects a subset of these to augment the query (Koenemann and Belkin, 1996). The revised query is then executed and a new set of documents is returned. Documents from the original set can appear in the new results list, although they are likely to appear in a different rank order. In some cases the relevance feedback interface displays an indicator, such as a marked checkbox, beside the documents that the user has already judged. For most relevance feedback techniques, a larger number of marked relevant documents yields a better result.

In a method known as pseudo-relevance feedback (also known as blind relevance feedback), rather than relying on the user to choose relevant documents, the system simply assumes that its top-ranked documents are relevant, and uses these documents to augment the query with a relevance feedback ranking algorithm. This procedure has been found to be highly effective in some settings (Thompson et al., 1995, Kwok et al., 1995, Allan, 1995). However, it does not perform reliably when the top-ranked documents are not relevant (Mitra et al., 1998a).

Relevance feedback in its original form has been shown -- in artificial settings -- to be an effective mechanism for improving retrieval results (Salton and Buckley, 1990, Harman, 1992, Buckley et al., 1994, Mitra et al., 1998a). For instance, a study by Kelly et al., 2005 compared carefully elicited user-generated term expansion with relevance feedback based on documents that were pre-determined by an expert to be the most relevant. The results of relevance feedback using the top-ranked documents far outstripped user-generated term expansion. Kelly et al., 2005 used the highly relevant documents as an upper bound on performance, as it could not be expected that ordinary users would identify such documents. This finding is echoed by another study for the TREC HARD track in which an expert was shown the documents pre-determined to be most relevant and spent three minutes per query choosing documents for relevance feedback purposes. The resulting improvement over the baseline run was 60% on the metric used to assess improvements from clarification dialogues (Allan, 2005). With user-generated additional terms, queries that were already performing well improved more than queries that were not performing well originally. This study also found that spending more time in the clarification dialogue did not correlate with improved final results.

Despite its strong showing in artificial or non-interactive search studies, the use of classic relevance feedback in search engine interfaces is still very rare (Croft et al., 2001, Ruthven and Lalmas, 2003), suggesting that in practice it is not a successful technique. There are several possible explanations for this. First, most of the earlier evaluations assumed that recall was important, and relevance feedback's strength mainly comes from its ability to improve recall. High recall is no longer the standard assumption when designing and assessing search results; in more recent studies, the ranking is often assessed on the first 10 search results. Second, relevance feedback results are not consistently beneficial; these techniques help in many cases but hurt results in other cases (Cronen-Townsend et al., 2004, Marchionini and Shneiderman, 1988, Mitra et al., 1998a). Users often respond negatively to techniques that do not produce results of consistent quality.
Third, many of the early studies were conducted on small text collections. The enormous size of the Web makes it more likely that the user will find relevant results with fewer terms than is the case with small collections. And in fact there is evidence that relevance feedback results do not significantly improve over Web search engine results (Teevan et al., 2005b).

But probably the most important reason for the lack of uptake of relevance feedback is that the method requires users to make relevance judgements, which is an effortful task (Croft et al., 2001, Ruthven and Lalmas, 2003). Evidence suggests that users often struggle to make relevance judgements (White et al., 2005), especially when they are unfamiliar with the domain (Vakkari, 2000b, Vakkari and Hakala, 2000, Spink et al., 1998). In addition, when many of the earlier studies were done, system response time was slow and the user was charged a fee for every query, so correct query formulation was much more important than it is for the rapid response cycle of today's search engines. (By contrast, a search engine designed for users in the developing world, in which the round trip for retrieval results can be a day or more, has renewed interest in accurate query formulation (Thies et al., 2002).) The evidence suggests it is more cognitively taxing to mark a series of relevance judgements than to scan a results listing and type in a reformulated query.

Figure 6.5: View similar articles function, from PubMed, published by the U.S. National Library of Medicine.

6.6: Showing Related Articles (More Like This)

To circumvent the need for multiple relevant document selection, Aalbersberg, 1992 introduced an incremental relevance feedback method that requires the user to judge only one document at a time. Similarly, some Web-based search engines have adopted a "one-click" interaction method. In the early days of the Web, the link was usually labeled "More like this", but other terms have been used, such as "Similar pages" or "Related articles" at the biomedical search engine PubMed. (This is not to be confused with "Show more results at this site", which typically re-issues the query within a subdomain.) More recently in PubMed, after a user chooses to view an article, the titles of some related articles are shown along the right-hand side (see Figure 6.5). Related articles are computed in terms of a probabilistic model of how well they match topics (Lin and Wilbur, 2007).

These related articles are relatively heavily used by searchers. Lin et al., 2008 studied a week's worth of query logs from PubMed in June 2007, observing about 2M sessions that included at least one PubMed query and abstract view. Of these, 360,000 sessions (18.5%) included a click on a suggested related article, representing about one fifth of non-trivial search sessions. They also found that as session lengths increased, the likelihood of selecting a related article link grew, and once users started selecting related articles, they were likely to continue doing so, more than 40% of the time. Thus, the evidence suggests that showing similar articles can be useful in literature search, but it is unclear what its utility is for other kinds of search. Related article links act as a "black box" to users, meaning they cannot see why one set of articles is considered related and others are not. Furthermore, they do not have control over the ways in which other articles are related.
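The chapter does not reproduce the details of PubMed's probabilistic topic-matching model, but the general shape of a "more like this" computation can be illustrated with a much simpler, widely used stand-in: ranking other documents by cosine similarity between tf-idf vectors. The sketch below is that stand-in, not the Lin and Wilbur model; the tokenization, weighting, and function names are invented for the example.

```python
import math
import re
from collections import Counter

def tfidf_vectors(docs):
    """Build simple tf-idf vectors (term -> weight) for a list of texts."""
    tokenized = [re.findall(r"[a-z]+", d.lower()) for d in docs]
    doc_freq = Counter()
    for tokens in tokenized:
        doc_freq.update(set(tokens))
    n = len(docs)
    return [
        {t: tf * math.log(n / doc_freq[t]) for t, tf in Counter(tokens).items()}
        for tokens in tokenized
    ]

def cosine(u, v):
    """Cosine similarity between two sparse term-weight vectors."""
    num = sum(u[t] * v[t] for t in set(u) & set(v))
    den = math.sqrt(sum(w * w for w in u.values())) * \
          math.sqrt(sum(w * w for w in v.values()))
    return num / den if den else 0.0

def related_articles(target_index, docs, k=5):
    """Return the indices of the k documents most similar to docs[target_index]."""
    vecs = tfidf_vectors(docs)
    scores = [(i, cosine(vecs[target_index], vecs[i]))
              for i in range(len(docs)) if i != target_index]
    scores.sort(key=lambda pair: pair[1], reverse=True)
    return [i for i, _ in scores[:k]]
```

A production system ranks over millions of abstracts with a far more sophisticated probabilistic weighting, but the user-facing behavior is the same: a ranked list of "related" items displayed next to the article being viewed.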
Interfaces that allow users to select the categories or dimensions along which documents are similar may be more effective at giving searchers this kind of control, as discussed in Chapter 8 on integrating navigation and search.

6.7: Conclusions

When an initial query is unsuccessful, a searcher can have trouble thinking of alternative ways to formulate it. Query reformulation tools can be a highly effective part of the search user interface. Query reformulation is in fact a common search strategy, as evidenced by the statistics presented throughout this chapter -- roughly 50% of search sessions involve some kind of query reformulation. Both spelling suggestions and term expansions are effective reformulation tools -- term suggestion tools are used roughly 35% of the time that they are offered to users. Additionally, showing popular destinations for common queries and showing related articles for research-style queries have both been shown to be effective. However, relevance feedback as traditionally construed has not been proven successful in an interactive context.

Copyright © 2009 by Marti A. Hearst.
计算机
2015-48/1890/en_head.json.gz/2862
Model Behavior... Ok guys, just a quick update for you here... I know many of you have seen pics and video of the model that's on display in the Blue Sky Cellar building. Many of you have commented on it, both pro and con, so I'm here to clear the air a bit.

The model is the original one that the Imagineers used in their pitch. So not everything you see will show up in the final design... they don't call it "Blue Sky" for nothing now. I've seen many people excited about the "Green Army Men Parachute Drop," but that's not going to happen. This is from the original proposal. The current plan is to keep the Maliboomer until sometime in 2010 when the majority of the pier's construction is done. After that, some of the other plans they have for this helix could be approved, economics pending.

The exterior of the carousel is also not a go yet. If you recall an earlier post, I mentioned that WDI wanted to place an exterior that matched the TSMM attraction since the original look just sticks out so badly compared to the proposed theming of the new pier. The approval hasn't been given yet, but depending on the budget, we may still see this or a variation of it.

As for Mickey's Fun Wheel, well, you're going to be seeing what comes of it over the next few months. But when it opens in April, don't expect it to look exactly like the model. That is what the "Princess Palace" will resemble, along with a great deal more detail around the PP area. Oh, and the games remake should start shortly. By the end of the year, there should be quite a collection of walls down in this area.

I'll try and post some detailed photos later next week. TTFN... Honor Hunter
计算机
2015-48/1890/en_head.json.gz/3101
483 projects tagged "Operating Systems"
CD-Based (114), Floppy-Based (20), Rebol (1), SCons (1), i5/OS (1)

System Configuration Collector for Windows collects configuration data from Windows systems and compares the data with the previous run. Differences are added to a logbook, and all data can be sent to the server part of SCC. [GPL, Utilities, Monitoring, Documentation, Systems Administration]

ClearOS is an integrated network server gateway solution for small and distributed organizations. The software provides all the necessary server tools to run an organization including email, anti-virus, anti-spam, file sharing, groupware, VPN, firewall, intrusion detection/prevention, content filtering, bandwidth management, multi-WAN, and more. You can think of it as a next generation small business server. Through the intuitive Web-based management console, an administrator can configure the server software along with integrated cloud-based services. [GPL, Filters, Server, Firewall, Operating Systems]

Tor-ramdisk
Tor-ramdisk is a uClibc-based micro Linux distribution whose only purpose is to host a Tor server in an environment that maximizes security and privacy. Tor is a network of virtual tunnels that allows people and groups to improve their privacy and security on the Internet. Security is enhanced in tor-ramdisk by employing a monolithically compiled GRSEC/PAX patched kernel and hardened system tools. Privacy is enhanced by turning off logging at all levels so that even the Tor operator only has access to minimal information. Finally, since everything runs in ephemeral memory, no information survives a reboot, except for the Tor configuration file and the private RSA key, which may be exported and imported by FTP or SSH. [GPLv3, Internet, Security, Communications, Networking]

ALT Linux
ALT Linux is a set of Linux distributions that are based on Sisyphus, an APT-enabled RPM package repository that aims to achieve feature completeness, usability, and security in a sensible and manageable mixture. [GPL, Security, Linux Distributions, Operating Systems]

BitRock InstallBuilder allows you to create easy-to-use multiplatform installers for Linux (x86/PPC/s390/x86_64/Itanium), Windows, Mac OS X, FreeBSD, OpenBSD, Solaris (x86/Sparc), IRIX, AIX, and HP-UX applications. The generated application installers have a native look-and-feel and no external dependencies, and can be run in GUI, text, and unattended modes. In addition to self-contained installers, the installation tool is also able to generate standalone RPM packages. [Shareware, Software Development, Utilities, Desktop Environment, Systems Administration]

Ubuntu Privacy Remix is a modified live CD based on Ubuntu Linux. UPR is not intended for permanent installation on a hard disk. The goal of Ubuntu Privacy Remix is to provide an isolated working environment where private data can be dealt with safely. The system installed on the computer running UPR remains untouched. It does this by removing support for network devices as well as local hard disks. Ubuntu Privacy Remix includes TrueCrypt and GnuPG for encryption and introduces "extended TrueCrypt volumes". [GPL, Security, Office/Business, Cryptography, Linux Distributions]

amforth is an extendible command interpreter for the Atmel AVR ATmega microcontroller family. It has a turnkey feature for embedded use as well. It does not depend on a host application. The command language is an almost compatible ANS94 forth with extensions. It needs less than 8KB code memory for the base system. It is written in assembly language and forth itself.
[GPLv2, Software Development, Scientific/Engineering, Hardware, Operating System Kernels]

STUBS and Franki/Earlgrey Linux
The STUBS Toolchain and Utility Build Suite is a set of scripts which, together with a set of pre-written configuration files, builds one or more software packages in sequence. STUBS is designed to work in very minimal environments, including those without "make", and URLs are included so source and patches can be downloaded as necessary. Configuration files and scripts are provided which create boot media for Franki/Earlgrey Linux (one of several example busybox- and uClibc-based Linux environments) and the intention is that STUBS should be able to rebuild such an environment from within itself. [GPL, Utilities, Linux Distributions, CD-Based, Operating Systems]

Hashrat
A command-line or HTTP CGI hashing utility.
计算机
2015-48/1890/en_head.json.gz/4159
Dropbox working with Apple to resolve app rejection issue By Josh Ong Tuesday, May 01, 2012, 09:40 pm PT (12:40 am ET) After a number of developers using the Dropbox SDK reported that Apple was rejecting their iOS apps from the App Store because of links to an external purchase option, the cloud storage provider has confirmed that is working with Apple to address the issue. Developers recently took to the Dropbox forums to discuss the rejections, as highlighted by The Next Web. Apple had taken issue with a new version of the Dropbox SDK that included a link to the "Desktop version" of its website on the page for creating accounts that could allow users to purchase additional space outside of the app. Dropbox, which has more than 50 million users across 250 million different devices, released a statement about the issue to AppleInsider on Tuesday. "Apple is rejecting apps that use the Dropbox SDK because we allow users to create accounts. We're working with Apple to come up with a solution that still provides an elegant user experience," the statement read. A Dropbox employee appeared to have issued a temporary solution on the company's forums with a new version of the SDK that removed the offending link. The employee promised to share next week information about a "better solution."Source: Dropbox.com Apple began banning links to out-of-app purchases last year with the introduction of its App Store subscription service. The policy has been controversial and has affected several prominent app publishers, including Amazon, The Wall Street Journal and Barnes & Noble. Dropbox founder Drew Houston revealed last year that Apple co-founder Steve Jobs offered a nine-figure sum for the startup in late 2009. After Houston and his partner declined, Jobs reportedly warned them that Apple would enter their market. Apple went on to unveil its iCloud service last June. Though iCloud and Dropbox are different in many
计算机
2015-48/1890/en_head.json.gz/4173
Windows 8 six months in: 100 million licenses sold, 250 million app downloads There are also six times more apps in the store than at launch. - May 7, 2013 4:01 am UTC More than 100 million copies of Windows 8 have been sold in its first six months on the market, according to a Q&A with Windows division Chief Marketing Officer and Chief Financial Officer Tami Reller. The post confirms that the Windows Blue update will become available later in the year. Among other things, this serves as an opportunity for Microsoft to "respond to the customer feedback" that the company has no doubt been inundated with since Windows 8 was released. The Windows 8 license count wasn't the only number mentioned. The company claims that the number of apps in the Windows Store has increased by six times since launch. There have been 250 million app downloads, and about 90 percent of all apps get downloaded each month. Microsoft's cloud services are also picking up users, with a claimed 250 million SkyDrive users, 400 million active Outlook.com users, and 700 million active Microsoft Accounts. The transition from Hotmail to Outlook.com recently completed, with all users now using the new e-mail platform several weeks ahead of schedule. This is the third time that Microsoft has talked about how many units Windows 8 has shifted. Forty million copies were sold in the first month, rising to 60 million a month later. The sales rate has certainly slowed since then, with just 40 million copies sold in the last four months. This is not in itself unusual; past operating systems have seen an initial surge of sales before leveling off. Good? Bad? Microsoft's detractors will inevitably point out that Windows 7 picked up market share at a quicker rate, and thus Windows 8 is a failure. The company's supporters will point out in turn that Windows 8 is primarily a consumer play, and that businesses are still in the process of migrating from the 11 and a half year old Windows XP to Windows 7. Such slow-moving companies are hardly likely to let the release of a new operating system disrupt their transition plans. Microsoft, for its part, is acting upbeat about the numbers, emphasizing that Windows 8 represents a big change and explaining that big changes take time. Reller also said that the PC is "very much alive," and that it's now part of a broader market of tablets as well as (traditional) PCs. The 100 million figure does suggest that the PC isn't quite dead yet. A rate of 10 million copies per month isn't too shabby. The iPad, which according to its proponents is going to bring about the end of the PC (and hence the end of Windows), sold 6.5 million units a month last quarter. By most metrics, that's lower than the number of Windows 8 licenses sold.
计算机
2015-48/1890/en_head.json.gz/4744
Google Contributes Back to MySQL
Google+MySQL

scot (6695): The IW article notes that "Google uses the MySQL open source relational database internally for some applications that aren't search related." So it's not like this is a huge deal for Google as a whole. Also from the IW story: "its engineers are keen to improve the code by making their improvements publicly available. 'We think MySQL is a fantastic data storage solution, and as our projects push the requirements for the database in certain areas, we've made changes to enhance MySQL itself, mainly in the areas of high availability and manageability,' said Google engineer Mark Callaghan in a blog post." It sounds to me like the engineers working on this wanted to get some publicity (maybe since they seem to be working in one of the non-critical areas of the company), and just about any press release from Google seems to get a lot of attention...

vek (2312): "So it's not like this is a huge deal for Google as a whole." While certainly Google's bread and butter, search isn't their only product though. Groups, gmail, news, maps, etc. All have a substantial amount of data with a substantial amount of users.
计算机
2015-48/1890/en_head.json.gz/5129
Posted Ouya: ‘Over a thousand’ developers want to make Ouya games By Check out our review of the Ouya Android-based gaming console. Even after the relatively cheap, Android-based Ouya console proved a massive success on Kickstarter (the console was able to pull in nearly $8.6 million from investors despite having an initial goal of only $960,000), pundits and prospective owners of the new gaming machine loudly wondered how well it would be able to attract developers who would otherwise be making games for the Xbox 360, iPhone or PC. Assuming you believe official statements made by the people behind the Ouya console, there is nothing to worry about on that front. “Over a thousand” developers have contacted the Ouya creators since the end of their Kickstarter campaign, according to a statement published as part of a recent announcement on who will be filling out the company’s leadership roles now that it is properly established. Likewise, the statement claims that “more than 50” companies “from all around the world” have approached the people behind Ouya to distribute the console once it is ready for its consumer debut at some as-yet-undetermined point in 2013. While this is undoubtedly good news for anyone who’s been crossing their fingers, hoping that the Ouya can make inroads into the normally insular world of console gaming, it should be noted that while these thousand-plus developers may have attempted to reach the Ouya’s creators, the company offers no solid figures on how many of them are officially committed to bringing games to the platform. That “over a thousand” figure means little if every last developer examined the terms of developing for the Ouya and quickly declined the opportunity in favor of more lucrative options. We have no official information on how these developer conversations actually went, so until we hear a more official assessment of how many gaming firms are solidly pledging support to the Ouya platform, we’ll continue to harbor a bit of cynicism over how successful this machine might possibly be. As for the aforementioned personnel acquisitions, though they’re less impressive than the possibility that thousands of firms are already tentatively working on games for the Ouya, they should offer a bit more hope that the company making the console will remain stable, guided by people intimately familiar with the gaming biz. According to the announcement, Ouya has attracted former IGN president (and the first investor in the Ouya project) Roy Bahat to serve as chairman of the Ouya board. Additionally, the company has enlisted former EA development director and senior development director for Trion Worlds’ MMO Rift, Steve Chamberlin, to serve as the company’s head of engineering. Finally, Raffi Bagdasarian, former vice president of product development and operations at Sony Pictures Television has been tapped to lead Ouya’s platform service and software product development division. Though you may be unfamiliar with these three men, trust that they’ve all proven their chops as leaders in their respective gaming-centric fields. Expect to hear more solid information on the Ouya and its games line up as we inch closer to its nebulous 2013 release. Hopefully for the system’s numerous potential buyers, that quip about the massive developer interest the console has attracted proves more tangible than not.
计算机
2015-48/1890/en_head.json.gz/5284
First security vulnerability in Internet Explorer 7 Microsoft has only just released Internet Explorer 7 and already security services provider Secunia has registered the first security vulnerability in the new browser. Surprisingly for a new version of the browser which entailed a significant rewrite, this vulnerability is a carry-over from Internet Explorer 6, described in April 2006. According to Secunia, the vulnerability allows an attacker to scout out confidential information from opened websites. Secunia has also prepared a website to demonstrate the vulnerability, which, after clicking on a link, attempts to read content from news.google.com. This was successful on a heise Security test computer running a fully patched Windows XP SP2 and the final version of Internet Explorer 7 just released. The bug, which affects both Internet Explorer 6 and the new version 7 of Microsoft's web browser, is based on incorrect handling of redirects for mhtml:// URLs. To get around the problem, the security services provider suggests deactivating active scripting. Users who wish to wait and do not want Internet Explorer 7 to be installed on their computer automatically at the start of November will find help at hand in an article on heise Security. See also: Internet Explorer 7 "mhtml:" Redirection Information Disclosure, security advisory from Secunia Demonstration of the security vulnerability from Secunia Preventing the automatic Internet Explorer 7 update, article on heise Security (ehe)
计算机
2015-48/1890/en_head.json.gz/6499
Chrome Tests an Updated New Tab Page Chromium, the open source version of Google Chrome, includes a more customizable new tab page. You can easily pin, remove and reorder thumbnails without having to enter in the edit mode. Pinned items are always displayed in the new tab page, which now shows only 8 thumbnails, even if they're no longer frequently visited.The list of search engines and the recent bookmarks have been removed and there's a new section of recent activities that includes recently-closed tabs and recent downloads. Another new section is called "recommendations", but it's still a work in progress.You can hide the thumbnails, hide the list of recent activities and the recommendations if you don't find them useful.The updated tab page is not yet ready to be released, but you can enable it if you have a recent Chromium build (Windows, Mac, Linux) by editing the desktop shortcut and adding the following flag in the target field: --new-new-tab-page
计算机
2015-48/1890/en_head.json.gz/6604
Platform: x86_64 release date:Nov. 2, 2010 The Fedora Project is a Red Hat sponsored and community-supported open source project. It is also a proving ground for new technology that may eventually make its way into Red Hat products. It is not a supported product of Red Hat, Inc. The goal of The Fedora Project is to work with the Linux community to build a complete, general purpose operating system exclusively from free software. Development will be done in a public forum. The Red Hat engineering team will continue to participate in the building of Fedora and will invite and encourage more outside participation than was possible in Red Hat Linux. By using this more open process, The Fedora Linux project hopes to provide an operating system that uses free software development practices and is more appealing to the open source community. Fedora 14, code name 'Laughlin', is now available for download. What's new? Load and save images faster with libjpeg-turbo; Spice (Simple Protocol for Independent Computing Environments) with an enhanced remote desktop experience; support for D, a systems programming language combining the power and high performance of C and C++ with the programmer productivity of modern languages such as Ruby and Python; GNUStep, a GUI framework based of the Objective-C programming language; easy migration of Xen virtual machines to KVM virtual machines with virt-v2v...." manufacturer website 1 DVD for installation on x86_64 platform back to top
计算机
2015-48/1890/en_head.json.gz/6664
Image: Matthew Cotter

Can computers understand what they see?
Engineer Vijay Narayanan leads effort to develop computer systems that can see better than humans
Krista Weidner
August 19, 2014

UNIVERSITY PARK, Pa. — "Deep learning" is the task at hand for Vijaykrishnan Narayanan and his multidisciplinary team of researchers, who are developing computerized vision systems that can match — and even surpass — the capabilities of human vision. Narayanan, professor of computer science and engineering and electrical engineering, and his team received a $10 million Expeditions in Computing award from the National Science Foundation (NSF) last fall to enhance the ability of computers to not only record imagery, but to actually understand what they are seeing — a concept that Narayanan calls "deep learning."

The human brain performs complex tasks we don't even realize when it comes to processing what we see, says Narayanan, leader of the Expeditions in Computing research team. To offer an example of how the human brain can put images into context, he gestures toward a photographer in the room. "Even though I haven't seen this person in a while and his camera is partially obscuring his face, I know who he is. My brain fills in the gaps based on past experience. Our research goal, then, is to develop computer systems that perceive the world similar to the way a human being does."

Some types of computerized vision systems aren't new. Many digital cameras, for example, can detect human faces and put them into focus. But smart cameras don't take into account complex, cluttered environments. That's part of the team's research goal: to help computerized systems understand and interact with their surroundings and intelligently analyze complex scenes to build a context of what's going on around them. For example, the presence of a computer monitor suggests there's most likely a keyboard nearby and a mouse next to that keyboard.

Narayanan offered another example: "Many facilities have security camera systems in place that can identify people. The camera can see that I am Vijay. What happens if I bring a child with me? The obvious assumption would be, this child has been seen with Vijay so many times, it must be his son. But let's consider the context. What if this security camera is at the YMCA? The YMCA is usually a place that has mentoring programs, so this child might not be my son; in fact, he might be my neighbor. So we're working on holistic systems that can incorporate all kinds of information and build a context of what's happening by processing what's in the scene."

Another research goal is to develop machine vision systems that can process information efficiently, using minimal power. Most current machine vision systems use a lot of power and are designed for one specific application (such as the face recognition feature found in many digital cameras). The researchers want to build low-power devices that can replicate the efficiency of the human visual cortex, which can make sense out of cluttered environments and complete a range of visual tasks using less than 20 watts of power.

Narayanan and other team members are looking at several scenarios for practical applications of smart visual systems, including helping visually impaired people with grocery shopping.
He and colleagues Mary Beth Rosson and John Carroll, professors of information sciences and technology and co-directors of the Computer-Supported Collaboration and Learning Lab, are exploring how artificial vision systems can interact with and help the visually impaired. "We'll be working with collaborators from the Sight Loss Support Group of Centre County to better understand the practices and experiences of visual impairment, and to design mockups, prototypes, and eventually applications to support them in novel and appropriate ways," Carroll said.

Another research priority is using smart visual systems to enhance driver safety. Distracted driving is the cause of more than a quarter million injuries each year, so a device that could warn distracted drivers when they have taken their eyes off the road for too long could greatly reduce serious accidents. These systems would also help draw drivers' attention to objects or movements in the environment they might otherwise miss.

A significant aspect of the research is to redesign the computer chips that are the building blocks for smart visual systems. "To develop a typical computer chip, you give it a series of commands: 'Get this number, get this number, add the two numbers, store the number in this space,'" he says. "We're working on optimizing these commands so that we don't need to keep repeating them. We want to reduce the cost of fetching instructions, which will increase efficiency."

Research team members from Penn State include Rosson and Carroll as well as Chita Das, distinguished professor of computer science and engineering; Dan Kifer, associate professor of computer science and engineering; Suman Datta, professor of electrical engineering; and Lee Giles, professor of information sciences and technology. The Penn State group collaborates with researchers from seven other universities as well as with national labs, nonprofit organizations, and industry. The research team has had several project-wide electronic meetings since January 2014. "We have a lot of synergies and technical exchanges, so we all know what everyone is doing and we've made some exciting progress," Narayanan said. "We have geographically and professionally diverse groups of people working really well together to meet this challenge."

Contact: Vijaykrishnan Narayanan, [email protected], Phone: 8148630392, Professor of Computer Science and Engineering and Electrical Engineering

Last Updated August 22, 2014
计算机
2015-48/1890/en_head.json.gz/7167
Open Enterprise

The Beginning of the End for the ISO?

Glyn Moody

Glyn Moody is a technology writer and blogger for Computerworld UK. He is the author of the book Rebel Code: Linux and the Open Source Revolution (2001). It describes the evolution and significance of the free software and open source movements. He works in London and his writings have appeared in Wired, Computer Weekly, Linux Journal, Ars Technica, The Guardian, Daily Telegraph, New Scientist, The Economist and Financial Times, among others.

Yesterday I was urging people to submit comments on the EU's interoperability framework. I mentioned that one of the important issues in this context was dealing with flawed standards, even – or especially – ones that claimed to be "open". When I wrote that, I was unaware that a rather weightier group of individuals had applied themselves to the same problem, and come up with something that I think will prove, in retrospect, rather significant: the Consegi Declaration:

We, the undersigned representatives of state IT organisations from Brazil, South Africa, Venezuela, Ecuador, Cuba and Paraguay, note with disappointment the press release from ISO/IEC/JTC-1 of 20 August regarding the appeals registered by the national bodies of Brazil, South Africa, India and Venezuela. Our national bodies, together with India, had independently raised a number of serious concerns about the process surrounding the fast track approval of DIS29500 [OOXML]. That those concerns were not properly addressed in the form of a conciliation panel reflects poorly on the integrity of these international standards development institutions.

This is not just any old declaration by a bunch of disaffected hackers in the developing world: the signatories are all top government officials who have responsibility for open source in their respective countries. In other words, this is tantamount to an official, multi-governmental rebuke to the ISO for the way it has handled the OOXML process and the appeals arising out of it. That in itself is pretty remarkable: normally governments are content to let standard-setting processes look after themselves. But something has changed irrevocably, as the closing remarks of the declaration make clear:

The issues which emerged over the past year have placed all of us at a difficult crossroads. What is now clear is that we will have to, albeit reluctantly, re-evaluate our assessment of ISO/IEC, particularly in its relevance to our various national government interoperability frameworks. Whereas in the past it has been assumed that an ISO/IEC standard should automatically be considered for use within government, clearly this position no longer stands.

The comment about the ISO's "inability to follow its own rules" is an amazing put-down, but the follow-on is even more extraordinary: it effectively says that the ISO has lost its legitimacy in the eyes of the signatories, with the implication that they will be looking elsewhere for independent and objective codification of technology in the future.
I believe that this marks the beginning of the end of ISO's reign as the primary standards-setting organisation, at least as far as computing is concerned (for other industries, details of the standards-setting process, or even of the standards that result, may not be quite so crucial as they are for the current phase of IT). This is a view that I and others have articulated before, but one that was not really accompanied by any signs that things would actually change. The Consegi Declaration, by contrast, is a very real statement of intent by some of the most important players in the international computing community. Collectively, they have sufficient power to make a difference to how standards are set globally. Specifically, they could at a stroke help establish some alternative forum as a rival to the ISO by throwing their weight behind it.

Against that background – and the fact that many within the free software world are deeply unhappy with the way the ISO has conducted the OOXML process – I think a serious debate needs to be started about what kind of standards-setting process is needed to produce useful, independent and truly open standards for the 21st century. The open source community also needs to start discussing with representatives of the Consegi nations and their supporters how a new international standards body might be formed to replace the current ISO – a radically different one that has at its heart the kind of bottom-up processes that have made the Internet and open source so strong and adaptable, rather than the sclerotic, top-down system that has been found so wanting in the whole OOXML fiasco.
计算机
2015-48/1890/en_head.json.gz/7228
Posted Microsoft loophole mistakingly gives pirates free Windows 8 Pro license keys By Anna Washenko Looking for a free copy of Windows 8 Pro? An oversight in Microsoft’s Key Management System – made public by Reddit user noveleven – shows that with just a bit of work, anyone can access a Microsoft-approved product key and activate a free copy of Windows 8 Pro. The problem is in the Key Management System. Microsoft uses the KMS as part of its Volume Licensing system, which is meant to help corporate IT people remotely activate Windows 8 on a local network. The Achilles’ heel of the setup, according to ExtremeTech, is that you can make your own KMS server, which can be used to partially activate the OS. That approach requires reactivation every 180 days, though, so it’s not a practical system. However, the Windows 8 website has a section where you can request a Windows 8 Media Center Pack license. Media Center is currently being offered as a free upgrade until Jan. 31, 2013. Supply an email address and you’ll be sent a specific product key from Microsoft. If you have a KMS-activated copy of Windows 8, with or without a legitimate license key, then going to the System screen will display a link that reads “Get more features with a new edition of Windows.” If you enter your Media Center key there, the OS will become fully activated. It’s a little surprising that with Microsoft’s complex KMS, this type of thing could slip through the cracks, allowing people to take advantage of the system. It seems most likely that after the uproar in response to Microsoft’s plans to remove Media Center from Windows 8 Pro, the company may have rushed the free upgrade, resulting in a loss for Microsoft and a gain for anyone who takes the time to acquire a free Windows 8 Pro copy. It’s unclear whether or not there’s a patch for this – other than removing the free Media Center download all together. Though ending the free Media Center upgrade would be an easy fix, it wouldn’t be a popular choice among customers who just bought a Windows 8 computers and who want the feature. We’ll have to wait and see how the company responds to this latest hit. Get our Top Stories delivered to your inbox:
计算机
2015-48/1890/en_head.json.gz/7299
3537 projects tagged "LGPL"
GNU LGPL (1)

KDE Software Compilation
For users on Linux and Unix, KDE offers a full suite of user workspace applications which allow interaction with these operating systems in a modern, graphical user interface. This includes Plasma Desktop, KDE's innovative and powerful desktop interface. Other workspace applications are included to aid with system configuration, running programs, or interacting with hardware devices. While the fully integrated KDE Workspaces are only available on Linux and Unix, some of these features are available on other platforms. In addition to the workspace, KDE produces a number of key applications such as the Konqueror Web browser, Dolphin file manager, and Kontact, the comprehensive personal information management suite. The list of applications includes many others, including those for education, multimedia, office productivity, networking, games, and much more. Most applications are available on all platforms supported by the KDE Development. KDE also brings to the forefront many innovations for application developers. An entire infrastructure has been designed and implemented to help programmers create robust and comprehensive applications in the most efficient manner, eliminating the complexity and tediousness of creating highly functional applications. [GPL, Software Development, Internet, multimedia, Utilities]

OpenOffice.org is the Open Source project through which Sun Microsystems is releasing the technology for the popular StarOffice productivity suite. [GPL, Office/Business, Office Suites]

Qt is a comprehensive, object-oriented development framework that enables development of high-performance, cross-platform rich-client and server-side applications. When you implement a program with Qt, you can run it on the X Window System (Unix/X11), Apple Mac OS X, and Microsoft Windows NT/9x/2000/XP by simply compiling the source code for the platform you want. Qt is the basis for the KDE desktop environment, and is also used in numerous commercial applications such as Google Earth, Skype for Linux, and Adobe Photoshop Elements. [GPL, Software Development, Libraries, Desktop Environment, Application Frameworks]

GTK, which stands for the Gimp ToolKit, is a library for creating graphical user interfaces. It is designed to be small and efficient, but still flexible enough to allow the programmer freedom in the interfaces created. GTK provides some unique features over standard widget libraries. [LGPL, Software Development, Libraries, Desktop Environment, Application Frameworks]

[LGPL, Software Development, Libraries, Office/Business, php classes]

GLib is a library containing many useful C routines for things such as trees, hashes, and lists. GLib was previously distributed with the GTK toolkit, but has been split off as of the developers' version 1.1.0. [LGPL, Software Development, Libraries]

MathMod
Mathematical modeling software

A mail filtering manager, supporting Sieve, procmail, maildrop and IMAP filters.
计算机
2015-48/1890/en_head.json.gz/7375
Application Lifecycle Management becomes strategic Eclipse project The Eclipse Foundation has made the area of Application Lifecycle Management (ALM) a strategic topic and has created a new top level project for application lifecycle tools. The Mylyn project will now become the home for related areas such as the management of tasks, contexts, software configurations and builds, as well as for reviews and documentations. The Eclipse Mylyn top level project is a well-established collaborative software development project in the Eclipse world that handles administrative tasks and offers ALM tools and services. In his blog post announcement, Mik Kersten, the founder of the Mylyn project, points out that the Mylyn mission is to provide task and Application Lifecycle Management frameworks and APIs, as well as task-focused programming tools within the Eclipse IDE. The project also aims at creating reference implementations for open source ALM tools used by the Eclipse community and for open ALM standards such as OSLC (Open Services for Lifecycle Collaboration). The decision by the open source organisation has further promoted Mylyn after project executives had announced earlier this year that the Eclipse project was to be extended by various sub-projects. Other top-level projects created by the Eclipse Foundation focus on such areas as Enterprise Java, Runtime, Service-Oriented Architectures (SOA), mobile, language IDEs and modelling. Mike Milinkovich, Executive Director of the Eclipse Foundation, acknowledged Mylyn as an exemplary Eclipse project which has become the de-facto ALM integration framework at Eclipse after Kersten introduced it into the Eclipse community from his PhD thesis.
计算机
2015-48/1890/en_head.json.gz/8147
The History of Microsoft Welcome to The History of Microsoft. A video series that gives you a rare glimpse into the story behind the software giant. Using rare footage and never-before-seen photos we break down Microsoft's history by year. Every Thursday we will air a brand new episode beginning with 1975 where The History of Microsoft begins when the ALTAIR 8800 appeared on the cover of Popular Electronics inspiring two young men Bill Gates and Paul Allen to develop BASIC language software for it. Tune in. Excel 25th Anniversary - Part Two Average: 4.25 Part Two of The Excel 25th Anniversary Video brings us into the present and then takes us into the future of this popular product. We dive into Web Apps, The Cloud, and all things Excel 2010 as we explore the power of software. Be sure to check out Part One where you get a chance to meet some of… Excel 25th Anniversary - Part One Here at Microsoft, we are celebrating the 25th Anniversary of Microsoft Excel by taking a look through its compelling and dramatic history, which is filled with great tech tidbits. In this video, we talk to Scott Oki, Charles Simonyi, Jeff Raikes, and other visionaries behind Excel. We go back to… The History of Microsoft - The Jeff Raikes Story: Part Two Jeff Raikes is the Visionary behind Microsoft Office. In Part One, we got a great glimpse into Jeff's history and the history of the technology industry. In Part Two, Jeff Raikes talks about Odyssey, which was the codename for Microsoft Excel. Jeff explains making, along with Bill Gates, the… The History of Microsoft - The Jeff Raikes Story: Part One Apr 06, 2010 at 11:12PM Jeff Raikes left Apple in 1981 and became the visionary behind Microsoft Office. This is Part One of the Jeff Raikes story for The History of Microsoft series. Jeff's entire story is told with great visuals; we dug through thousands of old tapes and photographs to bring you this compelling… The History of Microsoft - 1999: The Series Finale! I want to thank everyone for their support of The History of Microsoft series here on Channel 9. I had a great time creating it and I hope you enjoyed watching it. 1999 is the Series finale because we may try and do something a bit different for the last decade which would include the years… The History of Microsoft - 1998 For Microsoft, 1998 means a changing of the guard as Bill Gates appoints Steve Ballmer president of Microsoft. Microsoft Corporation's Board of Directors approves a 2-for-1 split of its common shares and The U.S. Justice Department and 20 state attorneys general file an antitrust suit against… For Microsoft, 1997 is filled with big moves as the Company announces the immediate availability of Office 97, we sign an agreement to acquire WebTV Networks for approximately $425 million in stock and cash and Microsoft's Internet Explorer 4.0 is released to critical acclaim and enormous… For Microsoft, 1996 is all about partnerships. The Interactive Media Division is created consisting of MSN, the MSN online service games and kids' titles and the information businesses formerly residing in the now-dissolved Consumer Division. Microsoft Internet Explorer version 2.0 for… For Microsoft, 1995 was filled with Windows. On January 7, 1995, during his first keynote at the consumer electronics show in Vegas, Bill Gates announces Microsoft "Bob" for Windows. 
Microsoft and Dreamworks SKG announce that they have signed a joint-venture agreement to form a new… For Microsoft, 1994 was an ambitious year as we introduce the architecture for its new software solution, code-named "Tiger," for delivering continuous media such as audio and video. We sign a definitive agreement to acquire Softimage, Inc. of Montreal, Quebec, a leading developer of… Author of this series
计算机
2015-48/1890/en_head.json.gz/8504
Version 3.0 — It's Happening & With BY-SA Compatibility Language Too
Mia Garlick, February 9th, 2007

So it's been a while since we discussed Version 3.0, but it is still happening. We're putting the finishing touches on the new license drafts for the new US and new generic/unported licenses and working to make them public within the next 10 days.

As you know, Creative Commons has long been hopeful of enabling interoperability between licenses that guarantee the same freedoms. Back in November 2005, Larry described his vision of building an ecology of free licenses. Although it has not been possible to date to agree with other license stewards on the exact details necessary to make licenses that are equivalent to a specific CC license compatible, Creative Commons remains hopeful that it will be possible at a date in the future to secure the necessary agreement with license stewards for equivalent licenses.

Because we would have to change our licenses to effect this, and because we are reticent to version too often (not just because it requires a lot of work for all concerned but also because it adds complexity to a system designed to be simple), we propose to include the structure of compatibility as part of the Version 3.0 changes. Given it is the Creative Commons Attribution-ShareAlike license that is most likely to be capable of compatibility with other existing flexible licenses, we are proposing to add new language to the "ShareAlike clause" of the BY-SA to establish the structure of compatibility.

An amended version of the draft Attribution-ShareAlike 3.0 (US) license has been posted to the cc-licenses list. Please post any comments you have to this list. Because we are anticipating that this will not be controversial or provoke much comment, we are hoping to roll out the Version 3.0 licenses by the end of next week with the BY-SA compatibility language included. So if you have comments or suggestions for improvement, please make them to the cc-licenses (subscription required) list as soon as possible.
计算机
2015-48/1890/en_head.json.gz/9777
DVD-piracy paranoia proves counterproductive
Tuesday, 24 June 2003, 4:44 PM EST
A little program called DeCSS caused a lot of commotion when it surfaced on the Internet four years ago. DeCSS performs only one task: it removes the encryption on a DVD movie, allowing the video files on the disc to be used at will -- played back off the disc, copied to the computer's hard drive or burned to a second DVD. Its author, a Norwegian teenager named Jon Lech Johansen, said he wrote DeCSS because he wanted to be able to watch DVDs on his Linux computer and no authorized playback software was available. The movie industry preferred to describe DeCSS as a lock-picking tool, useful only for piracy. It successfully filed suit to prevent DeCSS from being posted to Web sites based in the United States. The entertainment industry's legal campaign against the DeCSS code (its name refers to the Content Scramble System used to regulate playback) has continued ever since. At the end of May, for example, the California Supreme Court opened hearings on a suit by the DVD Copy Control Association, the licensing body behind CSS, which argues that posting DeCSS online violates the state's trade-secret laws. Programmers have continued to rework DVD-unlocking software, eventually writing new, more effective code. That, in turn, has given birth to a surprising variety of applications.
计算机
2015-48/1890/en_head.json.gz/10116
Virtual – the new reality for developers
New assumptions needed by code cutters
Martin Banks
"The life of the developer has just become a lot harder," said Sharad Singhal, Distinguished Technologist at HP Labs, Palo Alto, "and the reason is that the assumptions they make about their environment are not necessarily true any more." He was talking about the way the rush towards virtualised systems infrastructures is changing the ground rules under which developers have historically worked. They assume, for example, that the operating system they are writing to is stable, that the amount of memory they have available is static, and that CPU utilisation is static. "They are hard-coded into the machine, or at least developers assume that at the start of a job a configuration file will tell them." But in a world where virtualisation is the norm, all of this is changing, and developers will have to learn how to optimise their applications to meet the needs of a constantly shifting environment. Code and performance optimisation, for example, will become far more difficult in a shifting, intangible environment. At the same time, if applications run out of capacity, developers will now be presented with alternatives to simply shedding workload: they will have the ability to reach out to the environment to request more capacity, and get it on demand. According to Singhal, developers face a transition into an environment built on a new set of capabilities, which he likened to working with a new operating system. In that context, virtualisation is easy to dismiss as just another technology hot topic that will fade away in time, leaving little trace. But a growing number of enterprises are seriously heading in that direction, and many developers are continuing to write code that is not efficient and does not match the environments the code will have to run in. This poses problems for enterprises that are already moving to virtualised environments. Being able to exploit the flexibility of virtualisation in terms of workload and capacity management is an obvious case in point. "For example," Singhal said, "such an environment can detect that an application requires more capacity, but the application itself has not been written in a way that can make use of it. On the other hand, capacity may be taken temporarily from an application because a higher priority task requires it, but then that deprived application promptly crashes rather than being able to continue functioning in a degraded manner. What this means is that the development tools we give them are going to have to change over time." HP Labs is paying considerable attention to developing the technologies and processes that will be required in the infrastructure management area, but in applications it is the company's partners that are - or should be - working on it. "I am assuming, for example, that when Microsoft starts offering virtual machines, a lot of the Visual Studio type environments will start recognising that virtual machines exist," he said. "Developers working in J2EE environments are starting to recognise that virtual environments exist. So these types of capabilities will become available to developers and they will be available inside C libraries and J2EE libraries."
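Singhal's point about degrading gracefully instead of crashing can be made concrete with a small sketch. The snippet below is only an illustration, not HP's tooling: it assumes a Linux host where Python's os.sysconf exposes the page size and the free physical page count, and it sizes an in-memory cache from whatever capacity the (possibly virtualised) environment grants at that moment, falling back to a small working set rather than failing when memory has been reclaimed.

```python
import os

def available_memory_bytes():
    # Linux-specific assumption: these sysconf names exist on the host.
    return os.sysconf("SC_AVPHYS_PAGES") * os.sysconf("SC_PAGE_SIZE")

def choose_cache_entries(entry_size=4096, budget_fraction=0.25, minimum=1024):
    """Size an in-memory cache from the capacity currently available."""
    try:
        budget = int(available_memory_bytes() * budget_fraction)
    except (ValueError, OSError):
        # Capacity information unavailable: degrade to the minimum
        # working set instead of refusing to start.
        return minimum
    return max(minimum, budget // entry_size)

if __name__ == "__main__":
    entries = choose_cache_entries()
    print(f"Sizing cache for {entries} entries based on current free memory")
```

The same check could be repeated periodically so the application shrinks its working set when the hypervisor reclaims memory, which is the "continue functioning in a degraded manner" behaviour Singhal describes.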
计算机
2015-48/1890/en_head.json.gz/10323
Manifold System
Developer: Manifold.net. Stable release: 8.0.28.0 (24 October 2013).
Manifold System is a geographic information system (GIS) software package developed by manifold.net that runs on Microsoft Windows. Manifold System handles both vector and raster data, includes spatial SQL, a built-in Internet Map Server (IMS), and other general GIS features. Manifold System has an active user community with a mailing list and online forums.
The development team for Manifold was created in 1993 to optimize mathematics libraries for a massively-parallel supercomputer created in a joint venture between Intel Corporation and the US Department of Defense. The team subsequently embarked on a plan to create and sell mathematics libraries, including the General Graph Facilities library (GGF) and the Computational Geometry Library (CGL), under the name of the Center for Digital Algorithms. A series of "workbench" products were created to help teach customers the operation of algorithms in the libraries using visual means. Road networks and geometric data in geographic contexts were used to provide visual familiarity and interest, in effect creating a GIS-like product. In 1997 and 1998 customers asked for a true GIS product based on the workbench products and development of Manifold System was launched. The company soon changed its name to Manifold Net to match the new product's name.
Manifold System releases
Manifold System was first sold in January 1998 as Release 3.00. Releases 3.00 and 4.00 were heavily weighted to analytics, with many tools for abstract graph theory analysis but a very limited GIS toolset. At the request of GIS users and resellers, Release 4.50 emphasized general GIS features of broader interest and emerged as Manifold's first commercial GIS, a typical vector GIS more or less equivalent to classic vector GIS packages such as
计算机
2015-48/1890/en_head.json.gz/10668
How Windows Vista Works
by Tracy V. Wilson
The first version of Microsoft Windows hit the market in 1983. But unlike today's versions of Windows, Windows 1.0 was not an operating system (OS). It was a graphical user interface that worked with an existing OS called MS-DOS. Version 1.0 didn't look much like newer versions, either -- not even Windows 3.0, which many people think of as the first real version of Windows. Its graphics were simpler and used fewer colors than today's user interfaces, and its windows could not overlap. Windows has changed considerably since then. In the last 20 years, Microsoft has released numerous full-fledged versions of the operating system. Sometimes, newer versions are significantly different from older ones, such as the change from Windows 3.1 to Windows 95. Other new releases have seemed more like enhancements or refinements of the older ones, such as the multiple consumer versions of the OS released from 1995 to 2001. Microsoft's newest version of its operating system is Windows Vista. For many users, upgrading to Vista won't seem as dramatic as the upgrade from 3.1 to Windows 95. But Windows Vista has a number of new features, both in the parts that you can see and the parts that you can't. At its core, Windows Vista is still an operating system. It has two primary behind-the-scenes jobs:
- Managing hardware and software resources, including the processor, memory, storage and additional devices
- Allowing programs to work with the computer's hardware
If all goes well, this work is usually invisible to the user, but it's essential to the computer's operation. You can learn about these tasks in more detail in How Operating Systems Work. But when many people think of operating systems, they think of the portion they can see -- the graphical user interface (GUI). The GUI is what people use to interact with the hardware and software on the computer. In Windows systems, features like the Start menu, the recycle bin and the visual representations of files and folders are all part of the GUI. Windows Vista's GUI is a 3-D interface called Windows Aero. Of the four editions of Windows Vista, three -- Home Premium, Business and Ultimate -- support Windows Aero. Home Basic, the most scaled-down edition of the OS, uses a less graphics-intensive GUI instead of Aero. The other editions can also use this basic GUI, so people with older computers that can't support lots of 3-D graphics can still upgrade to Vista. We'll take a closer look at the Aero GUI and other Vista features next. Microsoft's Web site has more information on which features each edition includes.
Thank You and Additional Editions
Thanks to Jason Caudill for his assistance with this article. In addition to the four primary editions of Windows Vista, there are two editions for special markets. Windows Vista Enterprise is designed for very large businesses. Windows Vista Starter is a basic Vista OS for use in emerging markets, such as developing countries.
计算机
2015-48/1890/en_head.json.gz/11046
Golden's Rules: Open source sendmail in the enterprise
by Bernard Golden
You've probably heard of open source sendmail, but perhaps you aren't sure of what it is or whether you should consider using it for your business. This tip provides an intro to open source sendmail and discusses the pros and cons of using it. I'll also offer a way to implement sendmail that might appeal to those who think that bringing in Sendmail would be too much work. First, what is Sendmail? Simply put, sendmail is a mail transfer agent (MTA), also known as a mail server. MTAs are a fundamental piece of the Internet, providing e-mail capability to users. E-mail is the killer application of the Internet, and every e-mail message that gets sent, whether profound or profane, requires an MTA. MTAs accept mail from any e-mail client that knows how to talk to them via the Simple Mail Transfer Protocol (SMTP), forward e-mails to the appropriate location for each e-mail's recipient, and allow e-mail clients to retrieve e-mail to be read. Simple, eh? Conceptually, yes. Practically, no. E-mail is the most heavily used application in the enterprise. There are a host of issues that accompany e-mail these days: spam, viruses, malware and the like. So an effective e-mail product needs to be functional, scalable and extensible. Sendmail is the granddaddy of e-mail software. It was developed in the early days of Unix and is the mail server of choice in thousands, if not hundreds of thousands, of locations. If your organization is looking for a time-tested mail server, you should consider Sendmail. However, you should be aware of some of its drawbacks. First, sendmail is focused solely on e-mail. Many organizations use Microsoft Exchange because it offers an integrated suite that provides e-mail along with contact management and shared calendaring. For them, having a single product that delivers all of that functionality is convenient. Open source Sendmail does not offer an integrated product that provides contacts and calendaring. That's not to say that Sendmail can't be tweaked to support these functions -- but it requires installing other programs and configuring them to work with sendmail. This goes for antivirus and antispam functionality as well. There are excellent programs available that integrate with Sendmail -- but you'll have to do the integration and configuration. Speaking of configuration, sendmail has a reputation for being difficult to configure. It's not clear if that reputation is deserved, but it is by no means an install-and-forget product. Fortunately, there are excellent reference books available to walk you through the process. What about support? E-mail is pretty important, after all. What resources are available to help you keep it up and running? Check out sendmail.org, and you'll find a long list of community resources: Web pages, mailing lists and forums. What if you don't want to rely upon those mechanisms and want commercial support?
There are many companies that provide commercial support. For one thing, open source sendmail is bundled with many Linux distributions, so support is available through commercial distribution companies. There is also a commercial provider of sendmail, Sendmail Inc. This company sells a commercial Sendmail that comes pre-integrated with much of the requisite functionality, and it also provides support for the product. What if the whole install and administration effort just seems like too much work? Well, there is an alternative. Several companies sell e-mail appliances based upon Sendmail; some examples are Symantec and InterShield. They come packaged in a 1U form factor, ready to be installed in a rack and plugged into your network. All of the hard work of configuration is already done, leaving nothing more than user account setup left to do. Golden's Rule: So the question, "Sendmail in the enterprise: Why or why not?" might be restated, "Sendmail in your enterprise: What or how?" It comes in so many versions that one will probably work for you. You just need to decide which version makes sense for your company.
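To illustrate the SMTP hand-off described above, here is a minimal sketch of a client submitting a message to an MTA such as sendmail, using Python's standard smtplib. The host name, port and addresses are hypothetical placeholders, and a real deployment would typically also require authentication and TLS.

```python
import smtplib
from email.message import EmailMessage

MTA_HOST = "mail.example.com"  # hypothetical machine running sendmail (or any MTA)
MTA_PORT = 25                  # assumes the MTA listens on the standard SMTP port

msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.com"
msg["Subject"] = "Test message through the MTA"
msg.set_content("If this arrives, the MTA accepted the message and routed it.")

# Speak SMTP to the MTA; the MTA then forwards the mail toward its recipient.
with smtplib.SMTP(MTA_HOST, MTA_PORT) as smtp:
    smtp.send_message(msg)
```

The same exchange works against any standards-compliant mail server, which is the point of the article: the client only needs SMTP, while routing, queuing and delivery are the MTA's job.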
计算机
2015-48/1890/en_head.json.gz/11053
Morrowind (location)
Map of Morrowind
Morrowind is a province located in the north-east of Tamriel. It contains both the continental mainland and a large island, Vvardenfell. Morrowind is the original homeland of the Dunmer, also known as the Dark Elves. In recent decades, Morrowind has become a volcanic wasteland. In an apocalyptic event, a massive series of eruptions effectively destroyed the landscape. The Dunmer would have perished if it were not for the intervention of the Daedric prince Azura, long held by the Dark Elves as their patron. Azura informed her most devoted followers of the impending catastrophe ahead of time, with sufficient opportunity to evacuate the population from the doomed land. Many Dunmer settled in Skyrim, particularly in Windhelm; however they have found themselves targeted for discrimination and relegated to a ghetto, known as the Gray Quarter.
计算机
2015-48/1890/en_head.json.gz/11424
Hope everyone will take the chance to click on the ad above and check out the latest Facebook application to come out: Modern Combat. It's a super intense way of jumping into the warfare without having to actually do it! I just started playing and its super addicting. Hope you enjoy it! Time of Eve Winner A while back we posted a writing contest surrounding Time of Eve, asking all our users what they thought about the meaning of the café in Time of Eve. After receiving a huge slew of entries, Directions and Director Yoshiura undertook the task of reading through each of the entries to find a winner. After weeks of deliberation, they finally found their winner: rabid_child! Here's what rabid_child wrote that impressed everyone who worked on the Time of Eve project: "I’m probably taking this topic more literally than intended, but to me the bar is a setting for a Turing test. A Turing Test is considered the standard setup by which it can be fairly judged whether a computer is intelligent (at least by human standards). The idea is creating a situation where a computer, and a human is hidden from the tester and the tester must have a conversation with both and be unable to tell which is which, in which case the computer is considered intelligent (or you need a different tester). This café takes advantage of the conceit that here robots are indistinguishable from people aside from their halo, to do away with artificial environment and replace it with something more casual, where people might not even know that they are subconsciously conducting this test. The advantages are that first the presence of a computer is not guaranteed, and that the sentience of robots becomes more obvious when you meet the people/robots outside and find that you came to the wrong conclusion. This is because it does away with the notion that there is something inherent and impossible to reproduce in humans (such as a soul or whatever) that separates them from robots." To our pleasant surprise, Director Yoshiura wrote back, with his response to our winner, all the contestants and viewers of Time of Eve: "People say that 'when an artistic work gets released to the public, it no longer belongs to the author.' I strongly agree with that statement. So who does it belong to? I'd say the work belongs to the viewer - the person who is watching the work. So, in this contest, I wanted to ask all of you - 'In your opinion, what's the significance of Time of Eve's existence within the story?' I was most profoundly struck by the entry from rabid_child, who wrote about the Turing Test. The Turing Test was proposed by the British mathematician Alan Mathison Turing. rabid_child's idea is that Time of Eve is a place for staging the Turing Test. Furthermore, it's a place where androids and humans are testing each other. And, the cafe creates a situation where the participants can't figure out whether they are dealing with an android or human. The test originally is for humans to evaluate artificial intelligence. But, if you follow the logic of the Turing Test, it would mean that in the Time of Eve cafe the following types of tests are taking place: 1: Humans judge whether they are dealing with a human or an android 2: Androids judge whether they are dealing with a human or an android At the same time, at the Time of Eve cafe, 3: Androids aren't allowed to be differentiated from humans To quote the Turing Test itself, 'an android that can't be judged to be human or android is truly excellent.' 
Hmmm...I think that touches on a very critical element to the story... Thank you for the fascinating entry! For those of you who are interested, do a search on the 'Turing Test.'" Thanks to all those who entered, everyone on the Directions team who helped put it together and hope you will be able to re-watch Time of Eve with new eyes! Vocaloid Revolution Written by edsamac They say that the success of a manga or anime lies not in the number of copies sold; rather, it lies in the number of product lines the franchise has to offer. If there is a video game, a trading card set, a life-size huggable pillow, a ready-to-wear cosplay kit with functional accessories, car decals, illustration books and fan works, toothbrushes, and even underwear of the said franchise - then you can take it to the bank that it's screaming with popularity and success. Although most of these franchises start with either a manga or a light novel, a certain popular craze has begun to redefine the idea of J-Pop idol figures a whole new level over, melding together aspects of anime and Japanese pop-culture in a refreshingly new perspective. This week, I'm tossing the anime aside in order to talk about a topic that resonates in me with a passion. It is a craze that began two years ago and still stands strong, boasting for itself an army of devoted fans that fuel the flames of this marketing monster that has garnered for itself a colorful arsenal of franchise goods - extending far beyond what it was originally intended for. That symphony of melodies; the muse of the future; enchantress and siren that casts a spell upon misty-eyed listeners in a spell-binding show of beauty and class. What is this overly dramatized thing of which I speak? Oh, yes... MIKU. Hatsune Miku! Okay, so I'm overdoing it a little with the passion bit... Truth be told, I know I'm supposed to talk about the VOCALOID voice series, in general. Going through my resources, however, it turns out no matter how hard I try, I simply CANNOT write this article in any way save in a manner that more or less glorifies Hatsune Miku. Call me the brazen maniac, but it's true: Hatsune Miku is perhaps one of the most prolific character series to date, spawning for itself a respectable amount of exposure and acclaim. But what exactly IS the VOCALOID line up? VOCALOID was originally a synthetic voice emulation software developed by Yamaha that basically allowed a user to input song melody and lyrics and have the program "sing" it as playback. Though Yamaha didn't release the product under their name, they allocated sales to third party distributors, which released a bunch of VOCALOID lines as early as 2004. Developing the software further, Yamaha released a second version, entitled VOCALOID2, which produced several more VOCALOID titles under different third party distributors. Of notable acclaim was the "Character Vocal Series" released by Crypton Future Media, Japan in 2007 - their first product of which was none other than Hatsune Miku. It began with a song But Miku wasn't the first VOCALOID I listened to. Surprisingly, I was introduced to the whole craze by a friend who pointed me towards a Nico Nico Douga video featuring Sakine Meiko (yet another Vocaloid) singing a song entitled "HONEY", backed up by the singing vocals of other VOCALOIDs. The video struck me as "cute", and undeniably addicting to watch over and over again. 
However, it never occurred to me that the voices singing weren't real people at all - in fact, my initial guess was some sort of synthesizer effect along the lines of Cher or PERFUME. Suffice it to say that I was amazed, if not blown away at the idea that these singers were actually completely synthetic. And this fact made the difference. If we were talking about cute anime girls singing and dancing at the same time, I'd probably bat an eyelash or two before dismissing it as just another cute idea. But that wasn't the case for VOCALOID. I might be exaggerating a little, but this was cool - betcha-by-golly-wow cool. Slow to the beat It didn't help much to know that I was about a year late on the whole craze. VOCALOID mania was apparently quite the rage among the net denizens of Nico Nico Douga - a Japanese "YouTube" of sorts where VOCALOID videos are openly shared, promoting the said franchise. The unique formula of combining music with animation and J-pop idol fanfare was a work of genius, giving fans a sense of "quasi-ownership" over the franchise, as well as opening a whole different horizon in terms of creative output. Users would make songs and lyrics, post them online, and get feedback - the result being a whole plethora of different works and styles. Other creative individuals would showcase their artistry in pen, creating all sorts of visual works to complement the music in a eye-candy package of moe goodness. Surprised at what I was missing, I joined PiaPro - an online community of VOCALOID doujin artists - on a whim to see what all the fuss was about. Suffice it to say that the number of creative individuals was nauseating, making me realize that, indeed, the heart of any franchise lies in the devotion of its fans. Undoubtedly, the VOCALOID series was one that had a ridiculously solid heart, at that: one made not of trademarks or copyrights, rather, one of unadulterated, user-generated creativity and content. Miku makes a mark But VOCALOID is VOCALOID - I want to talk about Miku. Perhaps the most popular of all of the VOCALOIDs, her vocal quality is undoubtedly attuned to J-pop tunes, making her an instant icon as far as anime-related music is concerned. Given the intimate relationship anime and music have in common, it's no wonder that Hatsune Miku had a considerable impact on anime viewers - most especially those who like moe. It's almost like a godsend, for a lack of a better term. If you want to give your ear a go at Miku, try listen to works by Kz(livetune) or ryo(SUPERCELL). Admittedly, Kz's works were the first I listened to with Miku as vocals in her album Re:packaged. Kz's style is notably bubbly and "cute", having a light and almost "electronic" feel to the ears, which doesn't fail to make you thump your feet to the beat. Ryo, on the other hand, has a very wide range of musical styles, including ballads, pop, and even a little rock. Characteristic of his style, however, is his focus on piano keyboard solos, which makes his music rather refreshing, if not unique. Skies the limit Success, again, isn't rated in copies sold. For what it's worth, Hatsune Miku probably sold more copies compared to its successors - but Miku's fame goes beyond what it was originally intended. PROJECT DIVA was a recently released rhythm game for the Playstation Portable developed by SEGA, which basically allows people to play their favorite songs while getting a good dose of eye-candy at that. 
The game has lots of unlockables, including different outfits and artworks made by different PiaPro users. The game even features a unique video edit mode, where users can produce their own PVs (promotional videos) with songs they can import from their Memory Stick. A manga was also made by KEI, the original character designer for Hatsune Miku and subsequent VOCALOID2 characters in the Crypton lineup. Miku has even made several cameo appearances in anime (i.e. Lucky Star OVA, Zoku Sayonara Zetsubou Sensei), and has even sung the final ending theme in the anime Akikan!. She even appeared as a decal design in the Japan super GT series, marking the first anime-related decal theme (itasha) to ever be featured in such an event. The heart of it all The list of acclaim goes on, but it's the heart of the franchise that captures the fan. Be it a love for the visuals of the characters, or the sheer joy of listening to the music of these synthetic idols - what's touching is that this success is attributed to the very heart of its fans. I'm honestly not one to partake in crazes such as these, but there is perhaps one marked difference in this whole VOCALOID fever. Unlike anime shows or manga where the plot and characters are more or less layed out before you, the VOCALOID series goes further and gives you the opportunity to try your hand at making something you can truly call your own. No matter how out-of-control it can get with all the fandom, the idea that there is always your own "personal" Miku, Meiko, or any other VOCALOID for that matter, changes the feeling completely. It doesn't get any more personal than that - and that is, perhaps, what makes the franchise so immensely enjoyable. Make no mistake about that. Chrono Crusade Written by MasakoX Won't somebody stop the clock?! It is the 1920s and once again, demons are aiming to destroy society and the entire human race. In their way however, are the duo that spearhead a religious order dedicated to the eradication of all demonic activity and maintain normality amongst the populace. This may sound like a retro Devil May Cry knock-off, but don't be so quick to condemn it. What I am referring to is the well-acclaimed series Chrono Crusade. Penned by Daisuke Moriyama, Chrono Crusade is set in 1920s America, the time of the speak-easy and the period of decadence before the Great Depression of 1929. Demons have infiltrated the city and are feasting on the souls of the innocent, but they're not going to get an easy ride. Far from it. Sister Rosette Christopher, and her trusty sidekick Chrono work together to prevent the evil forces from overwhelming the unsuspecting city dwellers. Thankfully, she is not limited to just using holy water and an undersized crucifix to beat them – she's packing heat! This is another example of how Japanese manga-ka like to help religious figureheads get their message across through the use of armaments (Please refer to my article on Saiyuki from a previous newsletter). Rosette is a nun based with the Order of Magdalene, who is tied to her partner Chrono in a far than ordinary way. In a fashion not too dissimilar from Fullmetal Alchemist, Rosette is fighting to save her brother Joshua from annihilation by compromising her own life in some fashion by 'signing' a contract with Chrono. Her life is slowly dwindling away and adds to the time-based theme of the show. The title, the contract and Rosette's own impulsive personality give off a sense that the plot is in no position to dawdle. 
The setting, the action and the ever-more surreal scenarios that the characters are placed in as the series progresses indicate the urge to find a resolution to the story, whether it's good or bad for Rosette. The contract or what I like to call the 'pinky-swear' effect in anime is often used to bond two characters together for no other reason than to provide more footage for fans. Having said that though, its use in Chrono Crusade is a lot more sincere than in most other shows. Moriyama, through the use of the Rosette-Chrono contract, explores the notion of mortality and that a person is born with x amount of life energy; and as time progresses, it depletes. Rosette is able to channel into that energy and supply it to Chrono directly through some sort of pseudo-scientific fashion; of course it sounds nonsensical to bonafide scientists but it doesn't feel wholly fanciful to the average reader. Another facet of Chrono Crusade which intrigues me is the reference to Catholicism. Catholicism may be one of the most recognised religions in America (where the series is set) and indeed the entire world, but it has barely scratched the surface in Japan with barely over half a million followers today. That being said, religion (particularly Christianity) is regularly considered a visual garnish to lavish backdrops and narratives supplying them with a sense of history and drama. Shows like Vampire Hunter D and Hellsing are intertwined with religious overtones within their plots (albeit focusing mainly on vampires rather than wholly ecumenical affairs). Time is a precious thing and Chrono Crusade does a lot to impress that to the audience. Through Rosette's drive to protect Chrono and save her brother, she is using time like a currency in exchange for the knowledge that both her nearest and dearest are safe and healthy. It's a very selfless and noble act which endears the show to me immensely. This could've been a very generic show as the anime makes the characters look fairly uniform to most series of the day; but the complex personalities as well as the sincere and thoughtful natures of the individuals towards a common and prosperous conclusion helps keeps its head above the sea of 'pinky-swear' anime out there. A definite consideration for any anime fan's collection! Did you know that Nakiami from "Xam'd: Lost Memories" may have been inspired from Nausicaa from "Nausicaä of the Valley of the Wind" and San from "Princess Mononoke"? If you look at them, Nakiami rides on a flying device that is fairly similar to Nausicaa's and they both sport reddish hair. While Nakiami also has tribal markings on her face that are very similar to San's and both have cold/calculating personalities. Then if you compare all 3, they all have a deep connection with nature and despise those who harm the balance. Coincidence? We think not. Copyright Information Chrono Crusade © 2003 Gospel, Time of Eve © Yoshiura Yasuhiro/Directions, Naruto Shippuden © 2002 MASASHI KISHIMOTO / 2007 SHIPPUDEN All Rights Reserved., Omamori Himari ©2009 Miran Matora/Fujimishobou/Omahima Partners, Tegami Bachi: Letter Bee © Hiroyuki Asada SHUEISHA, LBP, TV Tokyo, Yumeiro Patissiere © YPPC All Rights Reserved Popular Shows Flame from the Amaterasu still remains on the battlefield as Naruto and his teammates continue to search for Sasuke. But despite the search team’s best efforts, they cannot find any trace of him. Omamori Himari Yuto Amakawa is a fairly ordinary young man. 
On the morning of his birthday seven years after his parents' death, a mysterious girl appears before the orphan, and demons begin attacking Yuto! Yuto is the descendant of the Amakawa family, one of twelve demon slayer clans, and Himari is an Ayakashi with a vow to protect him. Tegami Bachi: Letter Bee Lag's friends Zazie and Connor recommend he go for a checkup with the medical team, after he arrives at work one day exhausted from his busy days as a Letter Bee. The head of the medical team is Dr. Thunderland Jr., a man with an odd taste for dissection, leading him to be called "The Corpse Doctor" by many out of fear. He takes an interest in Steak and captures him, because Steak is a species known as a Kapellmeister which was thought to be extinct. Lag and Niche hurry after the doctor to rescue their friend... Word of the Day トゥース (pronounced: touh-sue): This word originates from a catchphrase/greeting of a popular comedian duo Audrey's Kasuga. Michana's Pick Yumeiro Patissiere Episode 14 The biggest event of the year for the academy - Cake Grand Prix - is upon us! The winning group gets to study abroad in France! I hope Ichigo will be able to go... Pulse Pop When you check these out, you'll understand why these are some of most frequented stories on the CR Pulse: Harbin Ice Festival Cage Homes in Hong Kong Laforet in Harajuku Nasal Irrigation The opinions, beliefs and viewpoints expressed by these authors do not necessarily
计算机
2015-48/1890/en_head.json.gz/11541
1DevDay Detroit Developer Conference 2012 DetroitDevDays Saturday, November 17, 2012 from 8:00 AM to 6:00 PM (EST) Detroit, United States Regular Tickets - Post Reg As of 11/8/2012 we can not guarantee an event bag or meals with these tickets. Ended Share 1DevDay Detroit Developer Conference 2012 1DevDay Detroit 2012 WE CANNOT GAURANTEE AN ATTENDEE BAG OR MEAL FOR ANY TICKETS PURCHASED AFTER NOVEMBER 8th THANKS TO EVERYONE FOR SHOWING SUCH GREAT SUPPORT A Celebration of the Michigan Software Developer http://1devdaydetroit.com This year DetroitDevDays will be producing the fourth annual 1DevDay Detroit Software Developer Conference. To accommodate our target attendance we are planning to use space at the Cobo Conference Center in Downtown Detroit on Saturday, November 17th. The DetroitDevDays mission is to build a software developer community in the Detroit area that is regarded as the best in the world. DevDays educate and unite our Software Developer community with inclusive, accessible and affordable events and conferences. The events are typically held on Saturdays so we do not conflict with attendee work schedules. The cost to attend is kept as low as possible, so developers of all pay scales can afford a ticket. DevDay attendees are Software Developers and Software Architects from Michigan, Ohio, Ontario and from as far as Illinois who are passionate enough about what they do to spend their Saturday, absorbing new technologies and networking with like-minded professionals. Many in the Detroit area are not in a position to attend conferences in California, Las Vegas or New York so; DevDays are way to bring great speakers and workshops to them. Past DevDay attendees have represented organizations like Compuware, Quicken Loans, ePrize, Pillar, Chrysler, GM, Ford, Comerica, Flagstar, Wayne State, UofM, MSU and many more. 1DevDay Detroit 2012 and The Michigan Software Developer Summit 1DevDay is for Software Developers and Architects. 1DevDay will remain the much-loved polyglot Developer conference it is known for. We want to make this event a celebration of the profession of Programming, App and Software Development in Michigan. Our goal is to make 1DevDay the “must attend” event for Software Developers in Michigan. This year, several sessions will be dedicated to panel discussions and workshops that will focus on the theme of growing our profession i, growing App and Software companies and to introducing developers to the many new opportunities in Michigan. We are calling this the Michigan Developer Summit. We plan to attract sponsors and representation from Detroit area Software companies and IT shops that are serious about their programmers. We also plan to reach out to start-ups seeking passionate Developers interested in becoming co-founders and entrepreneurs. Cobo Center To accommodate the number of expected attendees and to provide room for growth we are holding 1DevDay at Cobo. In the past we were forced to turn attendees away due to space requirements. This year, we can add as many rooms as we like and we will not have sessions that are standing room only. We are proud to bring 1DevDay to this historic location. Cobo Center sits on the Detroit River and is situated along Jefferson and Washington avenues. Cobo was named for Albert E. Cobo, mayor of Detroit from 1950 to 1957. Designed by Gino Rossetti, the center opened in 1960. There are about 5,000 hotel rooms in downtown Detroit with 4,000 hotel rooms within walking distance of Cobo Center. 
This year, a portion of the profits from 1DevDay will be donated to FORCE. http://www.facingourrisk.org/ FORCE is the only national nonprofit organization devoted to hereditary breast and ovarian cancer. Their mission includes support, education, advocacy, awareness and research specific to hereditary breast and ovarian cancer. Their programs serve anyone with a BRCA mutation or a family history of cancer. Schedule to Come Secure Parking Provided in Cobo Continental Breakfast Provided Lunch Provided At least four tracks of Talks Two Keynotes Contact us if you wish to transfer your ticket. Sorry, No refunds after October 1st. HERE ARE JUST A FEW OF THIS YEARS PRESENTERS VISIT THE WEBSITE FOR THE UP TO DATE LIST Chad Fowler – Self Engineering -Keynote Chad Fowler is an internationally known software developer, trainer, manager, speaker, and musician. Over the past decade he has worked with some of the world’s largest companies and most admired software developers. Chad is SVP of Technology at LivingSocial. He is co-organizer of RubyConf and RailsConf and author or co-author of a number of popular software books, including Rails Recipes and The Passionate Programmer: Creating a Remarkable Career in Software Development. Ted Neward – Iconoclasm – Keynote Ted Neward is an independent consultant specializing in high-scale enterprise systems, working with clients ranging in size from Fortune 500 corporations to small 10-person shops. He is an authority in Java and .NET technologies, particularly in the areas of Java/.NET integration (both in-process and via integration tools like Web services), back-end enterprise software systems, and virtual machine/execution engine plumbing. He is the author or co-author of several books, including Effective Enterprise Java, C# In a Nutshell, SSCLI Essentials, Server-Based Java Programming, and a contributor to several technology journals. Ted is also a Microsoft MVP Architect, BEA Technical Director, INETA speaker, former DevelopMentor instructor, frequent worldwide conference speaker, and a member of various Java JSRs. He lives in the Pacific Northwest with his wife, two sons, and eight PCs. Baraa Basata - Where is your domain model? I am a consultant with Pillar, making things happen on client engagements for agile project delivery. On every project, my focus is on how to best contribute to the success of the teams I serve, and I’m constantly looking for every opportunity to make a positive impact and to delight my clients. I studied Mathematics and Computer Science at the University of Michigan and Lawrence Technological University, and I reside in Flint, Michigan. Follow me on Twitter @baraabasata. Bill Wagner – Your Asynchronous Future Bill Wagner has spent most of his professional career between curly braces, starting with C and moving through C++, Java, and now C#. He’s the author of Effective C# (2nd edition released in 2010), More Effective C# (2009), and is one of the annotators for The C# Language Specification, 3rd and 4th editions. He is a regular contributor to the C# Dev Center, and tries to write production code whenever he can. With more than 20 years experience, Bill Wagner, SRT Solutions co-founder and CEO, is a recognized expert in software design and engineering, specializing in C#, .NET and the Azure platform. He serves as Michigan’s Regional Director for Microsoft and is a multi-year winner of Microsoft’s MVP award. 
An internationally recognized author, Bill has published three books on C# and currently writes a column on the Microsoft C# Developer Center. Bill was awarded the Emerging Technology Leader Award by Automation Alley, Michigan’s largest technology consortium. Bill earned a Bachelor of Science degree in computer science from the University of Illinois at Champaign-Urbana. Bill blogs at http://www.srtsolutions.com/billwagner and tweets at https://twitter.com/billwagner. Bob Kuehne – OpenGL Before starting A2′s Blue Newt Software, Bob Kuehne was the Technical Lead for the OpenGL Shading Language at Silicon Graphics. Bob has worked for more than a decade in the computer graphics industry, working his way up and down the OpenGL food chain, from writing OpenGL code to writing shader compilers. He has presented on OpenGL at numerous conferences, including SIGGRAPH. Godfrey Nolan -Continuous Integration in the Mobile World Godfrey is Founder and President of Southfield based RIIS and author of Decompiling Java and the just published, Decompiling Android. Godfrey specializes in requirements capture using visualization tools such as iRise and Balsamiq and requirement management tools such as ReqPro and CaliberRM primarily in the Detroit Metro area and is currently using executable requirements at a couple of clients in the automotive and telecommunications space. Jessica Kerr - Functional Principles for OO Development Jessica Kerr is a long-time Java developer turned polyglot, engaged in writing Scala for biotech. She loves speaking at St. Louis user groups and at conferences like CodeMash, DevLINK, and DevTeach — but her #1 goal is keeping two young daughters alive without squelching their inner craziness. Find her thoughts at blog.jessitron.com and @jessitron. Joshua Kalis - Functional Javascript I am a UI Engineer at Quicken Loans and Javascript fanatic. My interest in being a polyglot programmer has influenced the way I approach writing code; hopefully for the better.I have pulled OO ideas from Java and C# as well as Functional concepts from F#, Scheme, Erlang, and Haskell. Mostly my learning of and from languages, other than Javascript, has helped me write better Javascript since I have very few options in the browser. I think that learning new languages is me just looking for a better way to do what I love to do. Mac Liaw – The vert.x Stack Mr. Mac Liaw served as the Chief Technology Officer of BringShare, Inc. Mr. Liaw has more than 20 years of professional experience as a Master Programmer and Technical Strategist to internet-based companies. Prior to joining BringShare, Mr. Liaw served as the Chief Technology Officer at GoAntiques Inc., and WorthPoint Corporation. Mr. Liaw is active in the Linux Kernel and the development of the Groovy and Haskell programming languages. He was a Member of the CERN development team that established HTTP and HTML. He is known throughout the technology community. He completed The Ohio State University’s Masters Program in Computer Science. Brian Munzenberger - Intro to iOS Development Brian Munzenberger has 8 years of experience in all aspects of the software development lifecycle. Past projects include B2B, B2C, and large-scale e-commerce applications. Throughout his career Brian has worked with a wide range of companies from small startups to large corporations. Brian is a co-organizer of the Ann Arbor Computer Society and has spoken at various Detroit area user groups including the Detroit Java User Group. Brian is an expert Java and Objective-C programmer. 
Brian brings his passion for development and knowledge of large-scale applications to the world of iOS. Kevin Dangoor – Rise of the Web App Kevin Dangoor is product manager for Mozilla’s developer tools. Though he’s worked with many languages in many environments, he is best known for his Python work as the founder of theTurboGears web framework and Paver project scripting tool. He has spoken at numerous conferences and is a co-author of Rapid Web Applications with TurboGears. More recently, his work at Mozilla has involved the Bespin browser-based code editor, starting the CommonJS project, and a new generation of developer tools for Firefox. He lives in Ann Arbor, Michigan. Walter Falby - Java on the mainframe – More than what you think! Walter Falby has more than 30 years of experience in application development, operating system and subsystem enhancements and product development. He has worked on MVS, Windows, OS/2, UNIX and Linux. The programming languages he has used include assembler, C/C++, C# and Java. He has published books on programming and alternate energy research as well as articles on software development and bacteriological studies of recreational lakes. Jeff McWherter - IS A MOBILE-FRIENDLY WEBSITE ENOUGH? Jeff McWherter is a Partner and the Director of Development at Gravity Works Design and Development. Jeff is a graduate of Michigan State University and has over 16 years of professional software development experience. In 2012 Jeff published his third book Professional Mobile Development (Wrox Presss) which complements his other works Testing ASP.NET Web Applications (Wrox Press) and Professional Test Driven Development with C# (Wrox Press). Jeff is very active in developing programming communities across the country, speaking at conferences and organizing events such as the Lansing Give Camp, pairing developers with non-profit organizations for volunteer projects. Mark Stanislav - “It’s Just a Web Site: How Poor Web Programming is Ruining Information Security”. Mark Stanislav is a Senior Consultant at NetWorks Group, focused on operational automation and information security. With a career spanning a decade,Mark has worked within small business, academia, start-up, and corporate environments primarily focused on Linux architecture, information security, and web application development. Through the recent years of his career, Mark has had an opportunity to architect and deploy cloud infrastructure within many different industries and for various business needs. Mark holds a Bachelor’s degree in Networking & IT Administration and a Master’s in Technology Studies focused on Information Assurance, both from Eastern Michigan University. Mark also holds his CISSP, Security+, Linux+, and CCSK certifications. Chris Risner – Backending your Mobile Apps with Azure Chris Risner is a Windows Azure Technical Evangelist at Microsoft. Chris is focused on using Windows Azure as a backend for iOS and Android clients. Chris has been working with iOS and Android development for the past several years. Before working in mobile development, Chris worked on many large scale enterprise applications in Java and .NET. Chis is a prodigious learner who loves technology of all flavors and has a vast amount of experience in Smart Clients, Asp.Net MVC, C#,, Java, Objective C, Android and iOS. Chris speaks from his many successes in different areas of technology. Calvin Bushor – Oh NODE you didn’t. Calvin Bushor is Web User Interface Engineer. 
He specializes in developing rich user experiences using JavaScript to enhance the presentational layer. Calvin is Senior Software Engineer at Quicken Loans. He currently works and lives in Detroit and loves “every second of it”. Murali Mogalayapalli - Effective Code Quality through Behavior-Driven Development (BDD) Murali Mogalayapalli is presently Senior Software Architect at New World Systems in Troy for the Police and Fire Dispatch/Mobile solution. His 20 years of industry experience spans a variety of segments, such as Public Safety, Application Performance Monitoring and warehouse management systems. Murali has applied a breadth of technologies and stacks (Java, .NET, and various open source) over his career, and is currently focusing on performance and scale for high-availability production systems. Job Vranish – Using types to write your code for you By the time I began college I was already fairly proficient at software development and wanted to try something new so I decided to study electrical engineering. My undergraduate degree is a BS in Electrical Engineering from Calvin College. Having skills in both the hardware and software realms makes me particularly suited to embedded software development which often requires forays into the hardware side of things. After college I worked at GE Aviation developing safety critical embedded software for aircraft flight management systems. I became interested in test driven development and agile methods while at GE and when I could not make these things happen there I moved to Atomic Embedded (a pioneer in applying Agile methods to embedded software development) in June 2011. Most of my free time is now taken up with entertaining and bouncing a small baby named Jasper, but otherwise, I enjoy Haskell (and dream of one day being able to write embedded software in a functional language), the theory and implementation of programming languages, vegetarian food and racquetball. Chris Marinos-The State of F#- Why You Should (or Shouldn’t) Care Chris is a F# MVP and software consultant in Ann Arbor, MI. A proponent of F# since its pre-release days, he has given numerous F# talks and trainings throughout the US and Europe. He has also written articles on F# for MSDN Magazine and his F#-centric blog. His other technical interests and experiences include coffeescript, backbone.js, Rails, Django, C#, and of course, functional programming. When not coding, he enjoys video games, BBQ food, and obnoxiously large TVs. Nilanjan Raychaudhuri - Asynchronous web programming on the JVM Nilanjan is a consultant and trainer for Typesafe. He has more than 12 years of experience managing and developing software solutions in Java/JEE, Ruby, Groovy and also in Scala. He is zealous about programming in Scala ever since he got introduced to this beautiful language. He enjoys sharing his experience via talks in various conferences and he is also the author of the “Scala in Action” book. Have questions about 1DevDay Detroit Developer Conference 2012? Contact DetroitDevDays 1 Washington Blvd Detroit, The goal of DetroitDevDays is to build a software developer community in the Detroit area that is regarded as the best in the world. DevDays are targeted at software developers and architects. DevDays educate and unite the development community in the Metro Detroit Area with inclusive, accessible and affordable events and conferences. Vist: http://detroitdevdays.com Detroit, United States Events
计算机
2015-48/1890/en_head.json.gz/11542
BIBA Evening Meeting - Social Media and Open Source Software Boston Irish Business Association Wednesday, May 19, 2010 from 6:00 PM to 8:30 PM (EDT) Non Members - $30 Members - FREE Share BIBA Evening Meeting - Social Media and Open Source Software Open Source Software has heavily influenced how we experience the web and social media platforms. It's also resulted in savings of about $60 billion per year to consumers across all industry sectors according to experts. Please join us in welcoming Tom Erickson, CEO of Acquia and Michael Skok, General Partner at North Bridge Venture Partners, for an update on the open source industry. Tom will be fresh off DrupalCon San Francisco and ready to speak about how open source is reshaping the internet and how it will affect our professional and personal internet use. Michael, an investor and board member of Acquia along with several other software companies, will give us a sense of where open source is going form an investors perspective. Acquia (pronounced Long-ah, accent on first syllable. AH-kwee-uh) is a commercial open source software company that provides a valuable set of software and network services for the popular Drupal open source social publishing. Drupal is a free software package that allows an individual, a community of users, or an enterprise to easily publish, manage and organize a wide variety of content on a website. Hundreds of thousands of people and organizations are using Drupal to power an endless variety of web sites. Acquia's goal is to amplify Drupal making available and more valuable for more it's users. MEMBERS - FREE DOORS OPEN AT 6 PM FOR REGISTRATION AND NETWORKING, PROGRAM BEGINS AT 6:30 PM The Back Bay Hotel 350 Stuart Street We look forward to seeing you on the 19th! Tom Erickson, CEO, Acquia: Tom started his career in open source in 1980 (really). Except that it was called public domain software, derivatives were completely legal and hosting in the cloud was provided in Seattle by Boeing. Joining Acquia in 2008 was a return to those values and concepts after almost 30 years of supporting the airline industry, helping enterprises around the world adopt new software technologies. Prior to Acquia, Tom was Chief Products Officer at map maker Tele Atlas. Before that, he was CEO of Systinet, a leading provider of SOA tools that was acquired by Mercury Interactive and subsequently Hewlett Packard. Tom has also held executive positions with webMethods, the Baan Company and Watermark Software in addition to his early days at MRO Software, known as Project Software and Development, Inc. in 1980. Michael Skok, General Partner, North Bridge Venture Partners:Michael Skok joined North Bridge Venture Partners in 2002 to seek out great entrepreneurs and lead innovative software investments. Prior to this, Michael had himself been an entrepreneur and CEO in the software business for 21 years. He founded, led and attracted over $100M in private equity to his investments in several successful software companies ranging from CAD/CAM, Document Management, Workflow, Imaging and Collaboration, Security and Analytics and spanning the Mini Computer, Workstation, PC, Client Server and Internet eras. As a Venture Capitalist, Michael has backed many great entrepreneurs supporting them to focus on large market changing technologies and disruptive business models such as SaaS, Virtualization, Cloud Computing, Open Source and new application areas, such as Social Marketing Automation. 
As a result, he is currently active on the boards of all his recent investments including Acquia (Professional Drupal Open Source), Active Endpoints (Business Process Management System - BPMS), Akiba (Stealth - Virtual Database for Scale Out/Cloud), Awareness (Social Marketing Automation), Demandware (eCommerce, Marketing and Merchandising on Demand), Lumigent (Applications Governance Risk and Compliance), MyPerfectGig (Automated Contingency Search as a Service), rPath (Datacenter Automation through Automated Release Management) and Unidesk (Desktop Virtualization Management), as well as Actifio (Stealth - Cloud Storage) and REvolution Computing (Commercial Open Source "R"). Previously, Michael has served on many private and public company boards, as well as supported various software industry groups such as the Software Publishers Association, where he was Chairman for a number of years in Europe. He can be contacted at [email protected]. Have questions about the BIBA Evening Meeting on Social Media and Open Source Software? Contact the Boston Irish Business Association, 350 Stuart Street, Boston. The Boston Irish Business Association's (BIBA) mission is to help its members create and foster meaningful business relationships, while offering a dynamic platform for companies looking to gain brand exposure for their products and services (both locally and in Ireland). The organization provides value to its members and member organizations through enabling business and professional growth among a diverse network of people who are looking to retain and strengthen their connection to Ireland. Check out our new website! http://www.bibaboston.com
计算机
2015-48/1890/en_head.json.gz/11764
Microsoft Corporate Network is Hacked. What About Your Network? 27 Oct 2000, Virus News. Kaspersky Lab Int. comments on the recent virus incident. Cambridge, UK, October 28, 2000 - As disclosed on Friday, the corporate network of Microsoft, the world's largest software developer, was attacked by unknown hackers. The hackers used the QAZ network worm to penetrate into the network. As a result, the hackers gained access to the resources in which Microsoft stores the source code of its products, and may have copied some of them illegally. Kaspersky Lab Int. presumes that at the moment there is little evidence to support the claim that Russian hackers from St. Petersburg performed the hacking. This scenario was introduced because the data from Microsoft's internal network was transferred to an e-mail address in Russia's northern capital. However, it is a well-known fact that the location of an e-mail box is not necessarily the same as the location of its owner. The e-mail address in St. Petersburg could be owned by anyone, from any country around the world. This email address could have been used in order to mislead the official investigation, and the crime's actual origin has yet to be discovered. More important is the fact that the hacking was performed using the QAZ network worm. This worm was originally discovered earlier this year, in July, and Kaspersky Lab has received several reports of this worm in the wild. Protection against the QAZ worm was immediately added to AntiViral Toolkit Pro (AVP) and other major anti-virus products' databases. This raises the question: how did Microsoft's security systems miss the worm and make penetration possible? An enterprise's security policy should ensure that anti-virus protection is under the full control of highly qualified network administrators. It is therefore hard to believe that a workstation had no anti-virus software installed or that it had not been updated for a long time. It is more likely that a user had intentionally or accidentally disabled the anti-virus protection and allowed the worm to infect the computer. More amazing still, even if the worm had penetrated the Microsoft network, it should not have been possible to gain access to the worm's backdoor component from the outside. Attempts to do so should have been squashed immediately by a firewall that blocks data transfer on certain communication ports, including the port used by the QAZ worm. In other words, hackers should not be able to control the malicious code from outside the network. Hence it appears that it is impossible to steal anything (including source code) from Microsoft's internal network using the QAZ worm, even if the hackers know passwords and login information. Kaspersky Lab has no reason to question the competence of Microsoft's network administrators; it is easy to accidentally overlook a port that is commonly used by malicious programs. Despite the recent incident, Kaspersky Lab does not agree with the sharp criticism aimed at Microsoft's security systems. It should not be forgotten that Microsoft has one of the largest internal networks in the world. The fact that this is its first serious hacking incident in recent years only proves that Microsoft is actually doing very well.
In fact, many other big corporations have been hacked successfully more often than Microsoft. Besides, there is still no evidence that the hacking was done from outside; it may, rather, have been done from within the company. In other words, it may not be a problem with Microsoft's security systems, but with Microsoft's security in general. "Once again, we would like to draw users' attention to the fact that the installation of anti-virus software cannot be considered the only requirement for comprehensive anti-virus protection. The problem is complex and far-reaching; it comes in direct contact with other security aspects and is an essential part of enterprise security in general," said Denis Zenkin, Head of Corporate Communications for Kaspersky Lab. The technical description of the QAZ worm is available in Kaspersky's Virus Encyclopedia.
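To make the firewall point concrete: a backdoor component such as the QAZ worm's is only controllable from outside if its listening port is reachable, which is why perimeter filtering matters as much as desktop anti-virus. The fragment below is a rough, hypothetical administrator's check written in Python; it simply scans the local machine for TCP ports that are listening but are not on an expected list. The allowlist is illustrative, not taken from this article.

```python
import socket

EXPECTED_PORTS = {25, 80, 443}   # ports this machine is supposed to expose (illustrative only)

def unexpected_listeners(host="127.0.0.1", ports=range(1, 10001)):
    """Return TCP ports that accept a connection but are not in EXPECTED_PORTS."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(0.05)
            # connect_ex() returns 0 when something is listening on the port.
            if sock.connect_ex((host, port)) == 0 and port not in EXPECTED_PORTS:
                found.append(port)
    return found

if __name__ == "__main__":
    for port in unexpected_listeners():
        print(f"Unexpected listener on TCP port {port} - worth investigating")
```

Blocking such ports at the network boundary - the step the article says should have stopped any outside control of the worm - is then a firewall configuration task rather than a coding one.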
计算机
2015-48/1890/en_head.json.gz/11818
Tizen Could Be a Giant Step Back for Mobile Linux. By Jay Lyman, Oct 11, 2011 5:00 AM PT. Amid continued traction for Android, there have been a number of other developments for mobile operating systems based on Linux. Given my support for and belief in Linux and open source software, you might expect me to be bullish on the prospects for all of this mobile and device Linux. However, based on what I've seen in the past in terms of mergers, reshuffles and strategic restarts, I believe the introduction of the Tizen Linux-based OS is reminiscent of a time when mobile Linux wasn't really moving ahead. Almost three years ago, I wrote in 451 Group's report, "Mobility Matters," that in spite of previous false starts and maneuvers -- similar to the ones we're seeing right now -- mobile Linux and open source software were finally poised to break out of niche use. I saw potential in the LiMo Foundation, Palm's webOS, and particularly Android. More recently came the introduction of Tizen. Though the Tizen project is backed by the Linux Foundation, the LiMo Foundation, and industry leaders including Intel and Samsung, it is a jolt to mobile Linux and open source developers since it effectively ends the MeeGo OS and project. The Ghost of Mobile Linux Past: There is plenty of developer resentment of the move, as well as consideration of a fork to continue MeeGo independently. There is also some concern regarding the fate of the Qt open source programming framework that has been a significant part of MeeGo development. The fact that MeeGo was itself a consolidation and continuation of other Linux-based mobile and device OS projects -- Intel's Moblin and Nokia's Maemo -- is yet another historical theme being repeated with Tizen. These repetitions from the mobile Linux of the past include the reshuffling of ownership and leadership, merging of efforts, changing target devices and markets, and lack of the right backers -- such as wireless carriers that have embraced Android. Samsung has joined Intel in support of Tizen, though, and based on Samsung's experience and learning with Android, as well as Intel's takeaways from Moblin and MeeGo, Tizen may enjoy a different, more successful outcome. Still, recalling the old days when mobile and device Linux efforts typically started and then faded, there was HP's somewhat inexplicable departure from webOS, the Linux-based mobile OS HP bought with Palm. The fate of webOS remains unclear, but despite my previous contention that webOS might represent the next Android in terms of growth and traction, it now appears the operating system is, at best, in a state of limbo -- and at worst, similar to previous mobile Linux efforts that have languished in obscurity. Still Room for Innovation: I wouldn't want to suggest that there is no opportunity for one of these existing mobile Linux efforts or a new one. Looking at what's happened with Apple's iOS and Google's Android as a more open alternative, there is no question that there is still ample opportunity for an even more open option in mobile and converged devices. There is also incredible potential for mobile Linux and open source software in automobiles and embedded in a variety of electronics, given efforts and events such as the recently announced Automotive Linux Summit. I'll be watching closely for signs of future horizons for mobile Linux and open source software, as well as these signals of a more challenging past repeating itself.
LinuxInsider columnist Jay Lyman is a senior analyst for The 451 Group, covering open source software and focusing primarily on Linux operating systems, application development, systems management and cloud computing. Lyman has been a speaker at numerous industry events, including the Open Source Business Conference, OSCON, Linux Plumber's Conference and Open Source World/Linux World, on topics such as Linux and open source in cloud computing, mobile software, and the impact of economic conditions and customer perspectives on open source. Follow his blog here. More by Jay Lyman
计算机
2015-48/1890/en_head.json.gz/12220
Malware writers go cross platform. Security | tags: bug, flash, HP, malware, security. July 11, 2012, by Nick Farrell. Security researchers working for F-Secure have found a web exploit that detects the operating system of the computer and drops a different trojan to match. The attack was first seen on a Colombian transport website which had been hacked by a third party. The unidentified site then displayed a signed Java applet that checks if the user's computer is running Windows, Mac OS X, or Linux. The clever bit of the code appears to have been lifted from an open source toolkit written by Dave Kennedy, a security researcher and president of TrustedSec. He did not write it to do anything nasty. F-Secure said in its blog that all three files for the three different platforms connect to 186.87.69.249 to get additional code to execute. The ports are 8080, 8081, and 8082 for OS X, Linux, and Windows. While Apple has been getting turned over for a while now, reports of real-world attacks on the Linux operating system are less common. Single attacks that have the ability to infect any one of the three OSes are rarer still. Fortunately for Apple users, the exploit only infects modern Macs that were modified to run software known as Rosetta. Rosetta was designed so that Macs using Intel processors can run software written for PowerPC processors. Rosetta is not supported on Lion, the most recent version of OS X. This means that the hackers' knowledge of Macs is somewhat limited, but they did have a stab at it.
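Since the article names the command-and-control address (186.87.69.249) and the per-platform ports (8080, 8081 and 8082), a quick cross-platform check for an active infection is to look for live connections to that endpoint. Below is a minimal sketch, assuming the third-party psutil package is installed; it is illustrative only and not part of F-Secure's tooling.

```python
import psutil

C2_HOST = "186.87.69.249"       # address reported in the article
C2_PORTS = {8080, 8081, 8082}   # OS X, Linux and Windows ports respectively

def c2_connections():
    """Yield (pid, process name, remote port) for TCP connections to the reported C2 host."""
    # Note: enumerating other processes' connections may require administrator/root rights.
    for conn in psutil.net_connections(kind="tcp"):
        if conn.raddr and conn.raddr.ip == C2_HOST and conn.raddr.port in C2_PORTS:
            name = psutil.Process(conn.pid).name() if conn.pid else "unknown"
            yield conn.pid, name, conn.raddr.port

if __name__ == "__main__":
    hits = list(c2_connections())
    if not hits:
        print("No connections to the reported C2 endpoint.")
    for pid, name, port in hits:
        print(f"PID {pid} ({name}) is connected to {C2_HOST}:{port}")
```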
计算机
2015-48/1890/en_head.json.gz/12377
Category: Internet What Is an Internet Access Program? DSL filter. Many computer users are familiar with the Ethernet cable, because that's what they plug into their computer or high-speed modem to connect to the Internet. View slideshow of images above Eugene P. Edited By: Angela B. An Internet access program is a piece of software on a computer or device that handles communications and other protocols required to retrieve or send information to servers connected to the Internet. These programs not only handle the normal protocols that are used by servers for communication, but sometimes also act as an interface between the user’s computer and the hardware required to transmit and receive signals, such as a router or modem. For the most part, an Internet access program simply acts as a bridge between the Internet and the computer, with little functionality of its own outside of routing network traffic. Other programs, such as web browsers, email readers and peer-to-peer clients generally handle specific tasks that can be performed online, such as viewing a web site or reading email. The most basic type of Internet access program is one that uses a modem to connect to the Internet. A modem is a device that sends and receives signals that modulate and demodulate across standard telephone lines. An Internet access program for a modem can dial the modem, sometimes through a separate set of drivers, and then send and interpret signals to establish a connection through an available protocol such as the point-to-point protocol (PPP). Without some type of Internet access program, a connection could not be established and it would be impossible to use the Internet, even if the modem could be dialed. Ad A large number of computers connect to the Internet through a digital subscriber line (DSL) or cable modem. Both of these units are pieces of hardware that have internal software and embedded hardware that allow them to use advanced hardware protocols for transmission. The Internet access software used for these devices focuses almost exclusively on just passing information back and forth from the device to the computer, without the need to access the hardware directly. Another type of Internet access program is actually independent of most hardware, and instead is installed on a computer to allow it to connect and use a specific server online. These are usually programs that are branded to a specific company or Internet service provider (ISP), and they allow a customer to securely use the available servers. The programs are designed as an ISP security feature to prevent users who do not have the correct access program from connecting to the network and using the Internet through it. An Internet access program also can be software used on a computer or device so the Internet can be used over a wireless network, without the use of an Ethernet cable. These programs are able to detect signals, interpret special hardware protocols such as handshaking, and then interact with the service network. Wireless Internet access programs are much more complex than those designed for wired use, because signals that are transmitted must be captured and isolated from all other signals being wirelessly transmitted in the area. Ad Which Countries Have the Lowest Internet Access Rates? What Is an Access Point Bridge? What is a Virtual Access Point? What is Internet Bandwidth? What is the Difference Between a LAN and the Internet? What is an Internet Booster? Markerrag @Soulfox -- That isn't always the case. 
A lot of desktop computers still don't have built-in wireless connectivity and require the user to go out and buy an adapter that typically plugs into a USB port. Since that is new hardware, the drivers for it probably won't exist on your computer. Still, most of those are "plug and play." All that means is that you plug in the adapter, the computer reads it and then installs whatever software and drivers it needs. Soulfox: The good news about these Internet access programs is that you usually won't have to deal with them at all. Any modern computer should have the correct drivers to handle connecting a computer through a "wired" connection, wireless connection or even a modem (although the old "modem and phone line" connection is becoming increasingly rare). That is good news because configuring things manually for Internet connectivity can be a real headache.
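To make the dial-up case from the article a little more concrete: the piece of an access program that "dials the modem" traditionally does so with Hayes AT commands over a serial port, and only then hands the line to a PPP implementation (on Linux, typically pppd). Below is a hypothetical sketch of just the dialing step using the pyserial package; the device name and phone number are placeholders.

```python
import serial  # pyserial

def dial(device="/dev/ttyS0", number="5551234", baud=57600):
    """Send Hayes AT commands to a modem and return True once a carrier is established."""
    with serial.Serial(device, baud, timeout=5) as modem:
        modem.write(b"ATZ\r")                              # reset the modem
        modem.readline()                                   # discard echo/OK
        modem.write(b"ATDT" + number.encode() + b"\r")     # tone-dial the number
        for _ in range(10):                                # read a few result lines
            response = modem.readline().strip()
            if response.startswith(b"CONNECT"):
                return True                                # carrier up; PPP negotiation starts here
            if response in (b"BUSY", b"NO CARRIER", b"NO DIALTONE"):
                return False
    return False

if __name__ == "__main__":
    print("carrier up" if dial() else "no carrier")
```

Everything after the CONNECT result - authentication, address assignment, routing - is the protocol work (PPP and friends) that the article describes the access program handling on the user's behalf.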
计算机
2015-48/1890/en_head.json.gz/12435
Continuous integration (CI) is the practice, in software engineering, of merging all developer working copies to a shared mainline several times a day. It was first named and proposed by Grady Booch in his 1991 method,[1] although Booch did not advocate integrating several times a day. It was adopted as part of extreme programming (XP), which did advocate integrating more than once per day, perhaps as many as tens of times per day. The main aim of CI is to prevent integration problems, referred to as "integration hell" in early descriptions of XP. CI isn't universally accepted as an improvement over frequent integration, so it is important to distinguish between the two, as there is disagreement about the virtues of each. In XP, CI was intended to be used in combination with automated unit tests written through the practices of test-driven development. Initially this was conceived of as running all unit tests in the developer's local environment and verifying they all passed before committing to the mainline. This helps avoid one developer's work-in-progress breaking another developer's copy. If necessary, partially complete features can be disabled before committing using feature toggles. Later elaborations of the concept introduced build servers, which automatically ran the unit tests periodically or even after every commit and reported the results to the developers. The use of build servers (not necessarily running unit tests) had already been practised by some teams outside the XP community. Nowadays, many organisations have adopted CI without adopting all of XP. In addition to automated unit tests, organisations using CI typically use a build server to implement continuous processes of applying quality control in general: small pieces of effort, applied frequently. In addition to running the unit and integration tests, such processes run additional static and dynamic tests, measure and profile performance, extract and format documentation from the source code and facilitate manual QA processes. This continuous application of quality control aims to improve the quality of software, and to reduce the time taken to deliver it, by replacing the traditional practice of applying quality control after completing all development. This is very similar to the original idea of integrating more frequently to make integration easier, only applied to QA processes. In the same vein, the practice of continuous delivery further extends CI by making sure the software checked in on the mainline is always in a state that can be deployed to users, and it makes the actual deployment process very rapid. Workflow: When embarking on a change, a developer takes a copy of the current code base on which to work. As other developers submit changed code to the source code repository, this copy gradually ceases to reflect the repository code. Not only can the existing code base change, but new code can be added as well as new libraries, and other resources that create dependencies and potential conflicts.
The longer a branch of code remains checked out, the greater the risk of multiple integration conflicts and failures when the developer branch is reintegrated into the main line. When developers submit code to the repository they must first update their code to reflect the changes in the repository since they took their copy. The more changes the repository contains, the more work developers must do before submitting their own changes. Eventually, the repository may become so different from the developers' baselines that they enter what is sometimes referred to as "merge hell", or "integration hell",[2] where the time it takes to integrate exceeds the time it took to make their original changes. In a worst-case scenario, developers may have to discard their changes and completely redo the work. Continuous integration involves integrating early and often, so as to avoid the pitfalls of "integration hell". The practice aims to reduce rework and thus reduce cost and time. A complementary practice to CI is that before submitting work, each programmer must do a complete build and run (and pass) all unit tests. Integration tests are usually run automatically on a CI server when it detects a new commit. Best practices: This section lists best practices suggested by various authors on how to achieve continuous integration, and how to automate this practice. Build automation is a best practice itself.[3][4] Continuous integration – the practice of frequently integrating one's new or changed code with the existing code repository – should occur frequently enough that no intervening window remains between commit and build, and such that no errors can arise without developers noticing them and correcting them immediately.[5] Normal practice is to trigger these builds by every commit to a repository, rather than a periodically scheduled build. The practicalities of doing this in a multi-developer environment of rapid commits are such that it is usual to trigger a short time after each commit, then to start a build when either this timer expires, or after a rather longer interval since the last build. Many automated tools offer this scheduling automatically. Another factor is the need for a version control system that supports atomic commits, i.e. all of a developer's changes may be seen as a single commit operation. There is no point in trying to build from only half of the changed files. To achieve these objectives, continuous integration relies on the following principles, the first of which is to maintain a code repository.
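The build-on-every-commit behaviour described under best practices is easy to picture as code. The sketch below is not any particular CI product; it is a hypothetical polling loop in Python that watches a Git checkout of the mainline and runs the build and the self-tests whenever a new commit appears. The repository path and the make/pytest commands are stand-ins for whatever a real project uses.

```python
import subprocess
import time

REPO = "/path/to/mainline-checkout"   # hypothetical working copy of the shared mainline
BUILD_CMD = ["make"]                  # stand-in for the automated build
TEST_CMD = ["pytest", "-q"]           # stand-in for the self-testing step

def head_commit():
    result = subprocess.run(["git", "-C", REPO, "rev-parse", "HEAD"],
                            capture_output=True, text=True, check=True)
    return result.stdout.strip()

def run_step(cmd):
    return subprocess.run(cmd, cwd=REPO).returncode == 0

def ci_loop(poll_seconds=60):
    last_built = None
    while True:
        subprocess.run(["git", "-C", REPO, "pull", "--ff-only"], check=False)
        commit = head_commit()
        if commit != last_built:
            ok = run_step(BUILD_CMD) and run_step(TEST_CMD)
            # A real build server would publish this result so everyone can see it.
            print(f"{commit[:8]}: {'PASS' if ok else 'FAIL - fix before integrating more work'}")
            last_built = commit
        time.sleep(poll_seconds)

if __name__ == "__main__":
    ci_loop()
```

Real build servers layer the remaining practices on top - fast builds, a production-like test environment, visible results - but the core loop is just: notice a commit, build it, test it, report.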
计算机
2015-48/1912/en_head.json.gz/6071
Category: Internet What are Network Security Protocols? Without cryptographic network security protocols, Internet functions such as e-commerce would not be possible. Secure networks have extra security, evidenced by "https" in the address. Michael Linn Edited By: J.T. Gale Network security protocols are used to protect computer data and communication in transit. The primary tool used to protect information as it travels across a network is cryptography. Cryptography uses algorithms to encrypt data so that it is not readable by unauthorized users. Generally, cryptography works with a set of procedures or protocols that manage the exchange of data between devices and networks. Together, these cryptographic protocols enhance secure data transfer. Without cryptographic network security protocols, Internet functions such as e-commerce would not be possible. Secure communication is necessary because attackers try to eavesdrop on communications, modify messages in transit, and hijack exchanges between systems. Some of the tasks networks security protocols are commonly used to protect are file transfers, Web communication, and Virtual Private Networks (VPN). The most common method of transferring files is using File Transfer Protocol (FTP). A problem with FTP is that the files are sent in cleartext, meaning that they are sent unencrypted and therefore able to be compromised. For example, many webmasters update their sites using FTP; an attacker using a packet sniffer and the website’s IP address can intercept all communications between the webmaster and the site’s server. Ad As an alternative, Secure File Transfer Protocol (SFTP) offers a more secure way to transfer files. SFTP is usually built upon Secure Shell (SSH) and is able to encrypt commands and data transfers over a network, thereby reducing the likelihood of interception attacks. The SSH cryptographic protocol is also resilient to impersonation attacks because the client and server are authenticated using digital certificates. In addition to SSH, Secure Sockets Layer/Transport Layer Security (SSL/TLS) can be used as the underlying protocol for SFTP. Like SSH, SSL/TLS authenticates the identity of both the server and the client, as well as encrypts communications between the two. In addition to securing SFTP file transfers, SSL/TLS is used for securing e-mail communication. SSL is also used in combination with Hypertext Transfer Protocol (HTTP) to encrypt communications between a browser and a web server in the form of HTTP over Secure Sockets Layer (HTTPS). HTTPS encrypts communications and verifies the identity of a web server. When performing private transactions over the Internet, such as online banking, it generally is good practice for a person to check the browser’s address bar to make sure that the website’s address begins with https:// and not just http://. Another area where cryptographic network security protocols play an important role, especially for modern businesses, is in exchanging documents between private networks over a public Internet connection. These so-called Virtual Private Networks (VPNs) are critical for business because they securely connect remote workers and offices across the world. Some commonly used network security protocols that are used to facilitate VPNs are Point-to-Point Tunneling Protocol (PPTP), Layer 2 Tunneling Protocol (L2TP), IP Security (IPsec), and SSH. 
Not only do these network security protocols create a safe connection, but they also greatly reduce the costs associated with creating an alternate solution, such as building or leasing lines to create a private network. Logicfest: Wow. How many webmasters out there are using FTP to update and manage their external server accounts? The risk of a security problem arising is slim for those who don't have sites with a lot of critical information, but the risk of that information being intercepted will increase when it comes to information that can be valuable in the hands of a competitor or a hacker wanting to use the data in nefarious ways.
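As a small illustration of the HTTPS/TLS point above - the client encrypts the channel and verifies the server's identity before any application data flows - here is a minimal sketch using Python's standard ssl module. The host name is only an example.

```python
import socket
import ssl

HOST = "www.example.com"   # any HTTPS-capable host, used purely as an example

def certificate_subject(host=HOST, port=443):
    """Open a verified TLS connection and return the server certificate's subject fields."""
    context = ssl.create_default_context()          # enables certificate and hostname checks
    with socket.create_connection((host, port), timeout=10) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname=host) as tls_sock:
            # From this point on, anything sent over tls_sock is encrypted.
            cert = tls_sock.getpeercert()
            return {key: value for field in cert["subject"] for key, value in field}

if __name__ == "__main__":
    print(certificate_subject())
```

If the certificate cannot be validated - the impersonation case the article warns about - wrap_socket() raises an ssl.SSLError instead of silently continuing, which is exactly the behaviour you want from a security protocol.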
计算机
2015-48/1912/en_head.json.gz/6337
MOREnet and Missouri State Statutes. The Missouri Research and Education Network (MOREnet) provides Internet connectivity, access to Internet2, technical support, videoconferencing services and training to Missouri's K-12 schools, colleges and universities, public libraries, health care, state government and other affiliated organizations. Use of Central Missouri's computer and network facilities must comply with the MOREnet Acceptable Use Policy. MOREnet's AUP, in addition to expressly forbidding commercial and illegal use, forbids use of the network in a manner that is harmful or harassing to others and in a manner that disrupts normal network use and service. MOREnet's acceptable use policies are available at http://www.more.net. University of Central Missouri derives its authority from state statutes and is ultimately responsible to the people of Missouri through the governor and the General Assembly. Information regarding statutory regulation of the use of state-owned computer facilities is available at http://www.moga.mo.gov.
计算机
2015-48/1912/en_head.json.gz/7038
Aseem Agarwala. I am a research scientist at Google, and an affiliate assistant professor at the University of Washington's Computer Science & Engineering department, where I completed my Ph.D. in 2006; my advisor was David Salesin. My areas of research are computer graphics, computer vision, and computational imaging. Specifically, I research computational techniques that can help us author more expressive imagery using digital cameras. I spent nine years after my Ph.D. at Adobe Research. I also spent three summers during my Ph.D. interning at Microsoft Research, and my time at UW was supported by a Microsoft fellowship. Before UW, I worked for two years as a research scientist at the legendary but now-bankrupt Starlab, a small research company in Belgium. I completed my Masters and Bachelors at MIT majoring in computer science; while there I was a research assistant in the Computer Graphics Group, and an intern at the Mitsubishi Electric Research Laboratory (MERL). As an undergraduate I did research at the MIT Media Lab. I also spent much of 2010 building a modern house in Seattle, and documented the process in my blog, Phinney Modern.
计算机
2015-48/1912/en_head.json.gz/7371
President's Justification of the High Performance Computer Control Threshold Does Not Fully Address National Defense Authorization Act of 1998 Requirements Joseph A. Christoff(202) [email protected] Office of Public Affairs The United States controls the export of high performance computers for national security and foreign policy reasons. High performance computers have both civilian and military applications and operate at or above a defined performance threshold (which was formerly measured in millions of theoretical operations per second [MTOPS], but is now measured in Weighted TeraFlops [WT]). The U.S. export control policy currently organizes countries into "tiers," with tier 3 representing a higher level of concern related to U.S. national security interests than tiers 1 and 2. A license is required to export computers above a specific performance level to countries such as China, India, Israel, Pakistan, and Russia. Policy objectives of U.S. computer export controls are to (1) limit the acquisition of highest-end, high performance computer systems by potential adversaries and countries of proliferation concern and (2) ensure that U.S. domestic industries supporting important national security computer capabilities can compete in markets where there are limited security or proliferation risks. Over the last few years, the effectiveness of U.S. export controls in meeting these policy objectives has been challenged by market and technological changes in the computer and microprocessor industries. The National Defense Authorization Act of 1998 requires that the President provide a justification to Congress for changing the control threshold for exports of high performance computers to certain sensitive countries. The President's report must, at a minimum, (1) address the extent to which high performance computers with capabilities between the established level and the newly proposed level of performance are available from foreign countries, (2) address all potential uses of military significance to which high performance computers at the newly proposed level could be applied, and (3) assess the impact of such uses on U.S. national security interests. In February 2006, the President set a new control threshold for high performance computers and a new formula for calculating computer performance. GAO is required by law to assess the executive branch's proposed changes to the current control thresholds related to foreign availability and the national security risks of exporting high performance computers between the previous and proposed thresholds.The President's February 2006 report did not fully address the three requirements of the National Defense Authorization Act of 1998. Therefore, the report did not present the full implications of the threshold change to Congress. Find Recent Work on National Defense »
计算机
2015-48/1912/en_head.json.gz/7845
LeadFerret Releases a Directory of Graphic Design Professionals Today, LeadFerret, the free B2B data site, announced the release of their latest directory of Graphic Design Professionals. This directory comes fully equipped with detailed contact information for each record, including email addresses, telephone numbers, and, where applicable, social media links. Calabasas, CA (PRWEB) LeadFerret, the world's first 100% free business to business database with complete information including e-mail addresses, announced today the release of a specialized directory of Graphic Design Professionals. This directory allows users to search through and see full contact information for specific records of people who consider themselves experts in the field of graphic design, in addition to the millions of records outside that directory. Access the Full Directory: http://leadferret.com/directory/graphic-design-professionals From the design of a logo to the layout of a print ad, the fruit of graphic design labors is practically omnipresent. It is essentially impossible to acquire a consumer product that has not been shaped in some way at the hand of at least one graphic designer. Even the placement of a warning label on a shampoo bottle was chosen for a number of visual reasons. Graphic designers create visual concepts, usually in the form of images, layouts, or text beyond a specifically pre-existing font. They do this either by hand or using computer software, creating layout and production design for things ranging from advertisements, brochures, and magazines to corporate reports. Many graphic designers are employed by specialized design services, publishing, or advertising, public relations, and related services industries, but as of 2012, about 24 percent of graphic designers were self-employed. Whether a company is looking to design or redesign part of its aesthetic, or has a valuable product or service to offer people who work in the field of graphic design, this directory is a great place to start. LeadFerret users will be able to make the most of this directory, by having access to the most valuable prospects, with complete information, including email addresses, social media links and more, making it easier to develop marketing campaigns. About LeadFerret LeadFerret, Inc offers an online B2B database with complete data for over 17+ million business contacts. Users can search and view all 17+ million records for free, with no limitations, users only pay when they want to download records. Every record comes with complete information, including email address, phone number, company information, and much more. Many records now even come with social media links. For more information, go to: http://www.LeadFerret.com. Forest Cassidy LeadFerret.com+1 (818) 527-6024 @LeadFerret1
计算机
2015-48/1912/en_head.json.gz/7854
Robert Gentleman joins REvolution's board of directors. January 25, 2010, by David Smith. (This article was first published on Revolutions, and kindly contributed to R-bloggers.) We're so excited here at REvolution Computing to announce that Robert Gentleman has joined our board of directors. Robert is one of the two originators of the R Project: a research project between Robert and Ross Ihaka in 1996 was the genesis of the R language. (Both Robert and Ross were profiled in an article in the New York Times about R last year.) Today, the R Project has grown tremendously, with estimates of more than 2 million users worldwide, thousands of volunteer contributors to R packages, and more than 20 world-leading statisticians and computer scientists leading the core development. Robert was also the leader of the BioConductor project for many years, developing cutting-edge tools in R for the analysis of genetic data. Today, Robert continues his research in genomics as a senior director in bioinformatics at Genentech. He had this to say about joining our board: "REvolution has made important contributions to the R community and to the commercial use of R on an enterprise level," Gentleman said. "I am eager to help Norman and the team expand the role R has in the commercial world and to help bring high-quality analytic and graphical software to many new areas of application." Robert's expertise in R, the R project, and open-source projects in general will be invaluable to us as we develop our plans for REvolution Computing. We're also pleased to announce a second new appointment to our board of directors: financial expert Donald Nickelson. You can read more about both new board members in our press release linked below. REvolution Computing: REvolution Computing Names Robert Gentleman and Donald Nickelson to Board. Tags: Announcements, REvolution
计算机
2015-48/1912/en_head.json.gz/7925
at SIGGRAPH eTech & Art & More Report on Panel: Beyond Copyright: The Brave New World of Digital Rights Management by Ben Wyrick Dan Burk summed it up: "The Internet is the biggest copy machine in the world." Digital technology catalyzed by the Internet is allowing a greater dissemination and propagation of knowledge than ever before. And much of that information is intellectual property, some of which is protected by U.S. copyright law. Copyright law is currently in a state of flux, due to recent legislation such as the Digital Millenium Copyright Act (DMCA), passed in 1998. What are the rights of creators, distributors, and end-users of material under the DMCA and how have those rights changed since the U.S. Constitution was penned? Do we have a reasonable system for protecting everyone's rights under current law? These were the questions discussed in a panel titled "Beyond Copyright: The Brave New World of Digital Rights Management," chaired by Robert Ellis, SIGGRAPH Public Policy Program Chair. Also on the panel were Dan Burk, a University of Minnesota law professor, Deborah Neville, an attorney who has represented authors and Hollywood studios, Barbara Simons, ACM Past President and ACM U.S. Public Policy Committee Co-Chair, and Sarah Stein, a media professor at North Carolina State University with a background in documentary film. The Constitution calls for copyright protection to "promote the progress of science and useful arts." It states that copyrights are to be of a limited term, after which time they revert to the public domain. According to Burk, the idea is for the public to benefit from ideas, but under DMCA, distribution middlemen, record companies, and publishers are reaping the benefits. For example, DVDs are protected against duplication by the Content Scrambling System (CSS), a weak method of encryption. A consumer purchasing a DVD remains unable to copy that DVD even after the copyright has run out, in essence keeping the DVD out of the public domain forever, a violation of original copyright law. Enter DeCSS. DeCSS is a computer program which circumvents the encryption on DVDs and allows them to be copied or viewed on alternate operating systems such as Linux. It could be argued that DeCSS restores the spirit of early copyright law, returning the legal concept of "fair use" to DVDs. The purpose of fair use, according to Burk, is to allow "enough play in the joints" between the needs of the creator and the needs of the user. Fair use allows the duplication of copyrighted material for academic or research purposes, reviews of a product by critics, and other rights. Fair use walks the thin line between protecting the rights of the artist and allowing legitimate uses of a purchased product by the consumer. "We wouldn't have academic institutions the way we know them without fair use," Stein says, referring to the heavy reliance universities and libraries place on fair use. The panelists argued that DMCA seriously erodes the doctrine of fair use and encouraged audience members to become politically active in issues of intellectual property. Another change DMCA has brought in copyright law is the introduction of criminal penalties for reverse engineering and other forms of infringement. Formerly the penalties were civil only, involving fines. Now you can go to jail. And supplying someone with the ability to circumvent encryption is illegal, even if the protected material is not copyrighted. 
According to Stein, such provisions benefit distributors such as record companies, as opposed to the musicians themselves. Burk believes the erosion of fair use under DMCA may be unconstitutional due to conflicts with freedom of speech. "The system is out of control," warns Burk, who believes the spirit of the DMCA is out of line with what the public thinks is fair. Neville points to profit as a motive for restricting fair use, and attributes the rise of illegal copying and hacking to unfair prices for media. Simons spoke of the positive side of peer-to-peer file sharing networks. She views them as empowering the artist, who would then rely less on record companies for distribution. Hence the record companies' aversion to such networks. "DMCA is the best legislation money can buy," said Burk, who called attendees to become the Rosa Parks of the copyright movement and take back control of intellectual property from Bill Gates and Jack Valenti. Simons echoed the call for civil disobedience, but warned that violating the DMCA could have serious repercussions. She added that professional societies like the ACM can help lead the way to workable legislation. The panelists agreed that a positive change in current law needs to take place: "People should not be thrown in jail for writing code," said Simons. SIGGRAPH Panels Public policy was a theme this year... page is maintained by YON - Jan C. Hardenbergh [email protected] photos you see in the 2001 reports are due to a generous loan of Cybershot digital cameras from SONY
计算机
2015-48/1912/en_head.json.gz/8602
Oracle® Spatial User's Guide and Reference The Oracle Spatial User's Guide and Reference provides usage and reference information for indexing and storing spatial data and for developing spatial applications using Oracle Spatial and Oracle Locator. Oracle Spatial requires the Enterprise Edition of Oracle Database 10g. It is a foundation for the deployment of enterprise-wide spatial information systems, and Web-based and wireless location-based applications requiring complex spatial data management. Oracle Locator is a feature of the Standard and Enterprise Editions of Oracle Database 10g. It offers a subset of Oracle Spatial capabilities (see Appendix B for a list of Locator features) typically required to support Internet and wireless service applications and partner-based geographic information system (GIS) solutions. The Standard and Enterprise Editions of Oracle Database 10g have the same basic features. However, several advanced features, such as extended data types, are available only with the Enterprise Edition, and some of these features are optional. For example, to use Oracle Database 10g table partitioning, you must have the Enterprise Edition and the Partitioning Option. For information about the differences between Oracle Database 10g Standard Edition and Oracle Database 10g Enterprise Edition and the features and options that are available to you, see Oracle Database New Features. The relational geometry model of Oracle Spatial is no longer supported, effective with Oracle release 9.2. Only the object-relational model is supported.
计算机
2015-48/1912/en_head.json.gz/8922
ZombiU was supposed to be a flagship title for the console, displaying its graphical abilities and the new features of the gamepad in a manner that would make the console the "in" thing for gamers everywhere. Instead, what it became was an extremely polarizing game. There are very few people who are of the mindset that this game is "just alright". Most people you run into will find it either a brilliant video game, full of depth and difficulty, or a janky, poorly executed mess. ZombiU takes place in London in November 2012. An old legend called the Black Prophecy is coming to pass, with a zombie outbreak. There has been an underground group researching and preparing for this day. As one of the survivors of the apocalypse, you are tasked with working with this underground group to find the cure. ZombiU doesn't set out to be your typical run-and-gun shoot-'em-up first person shooter. It, instead, wants to be a survival horror game. You can shoot all the zombies you want, great. What's more important is the goal of survival. Survive so that you can get samples. Survive so that you can help find the cure. Survive so that you can just keep living. It takes an angle on the zombie fad that a lot of games just look past. One of the more polarizing aspects of the game is its permadeath. In ZombiU, when your character dies, you don't play as that character anymore. Instead, you respawn as another one of the survivors. Your old character, in keeping with the elements of the game, doesn't just disappear - it becomes a zombie. You have to kill your old self to get your items back, which is a surreal experience. You have just spent three hours or so as character A, and now you are character B, and your first mission? Smash in Zombie Character A's brains. The weakness of this system is that there is only one dead copy at a time, so if you die again before you can retrieve your loot, it's all gone. Another polarizing aspect is the combat. It tries to do so well. You are always armed with a melee weapon, a cricket bat. Along the way, you can pick up other weapons, including, of course, guns. The problem with guns is that they make noise. The noise attracts other zombies to come see what all the fuss is about, which turns your group of three zombies that you got the drop on into five or six guys trying to eat your brains. Add in that kickback causes problems for you (which, if you're thinking about yourself as a survivor in England who might not have the most experience shooting a gun, adds a level to this game that isn't always thought about) and that ammo is very, very scarce, and you have all the elements for a great survival horror game. However, the problem is that the melee with the cricket bat is unrewarding. It can take five or six hits at times to down a zombie. Finding a group of three or four means fifteen to twenty hits, and that's a chore. The use of the Wii U gamepad is a fun part of this game. When you go to loot things, rather than a menu coming up and the game pausing, you are directed to look at the gamepad's screen. There, you can see what is in the filing cabinet and decide what you want to keep. While that is happening, though, the game isn't paused. Everything is still going on around you. It adds an element of tension to your adventures that is not found in many other games. This game tries to be one of the best zombie games out there. It tries to take a fresh approach to things. It has all of the right ideas, too.
Rather than an amazing story or just being a game about killing a million zombies, it really nails the feeling that you are trying to survive so, so well. Unfortunately, it misses in execution in places. I really hope we see a sequel to this with more polished combat, or at least another game trying to do the same things. This game is the epitome of having great ideas but not quite executing them in the right way. It's an enjoyable and unique experience for sure if you're willing to forgive it its faults, but that is a bridge too far for some people.
计算机
2015-48/1912/en_head.json.gz/9012
Consoles that won’t die: The Atari Jaguar Dan Crawley April 25, 2013 9:00 AM Tags: Another World, Atari, Atari Connexion, Atari Jaguar, Atomic, blackout, Bomberman, Commodore, Consoles that won't die, Dazed, editor's pick, Eric Chahi, featured, Frog Feast, game features, Mad Bodies, Nintendo, Out of This World, Removers, retro gaming, RGC, Sebastian Briais, Sega, Sega Saturn, Sony, Sony PlayStation, Super Nintendo Image Credit: Ian Muttoo / Flickr Read more: that won’t die The Intellivision The Commodore 64 The SNES The NES Out of This World (known as Another World outside of North America) is a true gaming classic, hitting systems as diverse as the Super Nintendo and iOS since its 1991 arrival on the Commodore Amiga. Creator Eric Chahi has now given his blessing to one more adaptation of the game on an unlikely console. French computer scientist Sébastien Briais is the man at the heart of the project, and the platform of choice is the Atari Jaguar, a powerful machine that’s notorious for being one of the worst-selling video game consoles of all time. Jaguar: Atari’s last console Atari led the video game industry in the late ’70s and early ’80s. But this dominance was a distant memory by the time of the Jaguar’s 1993 release — Atari never recovered from the Crash of 1983 that almost killed home gaming in America. Despite claims that its new console was technically superior to its rivals and some impressive software from Atari, the Jaguar simply didn’t sell. A lack of support from third-party publishers such as Activision, Electronic Arts, or Capcom led to an understocked games catalog, and Atari had more or less accepted defeat by the time the new kids on the block, the Sony PlayStation and Sega Saturn, released in 1995. Atari’s report to stockholders that year was bleak: “From the introduction of Jaguar in late 1993 through the end of 1995, Atari sold approximately 125,000 units of Jaguar. As of December 31, 1995, Atari had approximately 100,000 units of Jaguar in inventory … . There can be no assurance that Atari’s substantial unsold inventory of Jaguar and related software can be sold at current or reduced prices if at all.” Briais is an enthusiastic Jaguar programmer, and despite the console’s retail failure 20 years ago, he is confident that it will prove a worthy home for Chahi’s classic cinematic adventure game. “The story began in 2007 when I attended the Atari Connexion in Congis not far from Paris,” says Briais. “This event was organised by the Retro Gaming Connexion. Eric Chahi was invited to the event, and he was very enthusiastic to see some crazy people still having fun coding on old hardware. Some friends of mine and I asked whether he would let us adapt Another World for the Jaguar.” Above: Out of this World creator Eric Chahi speaking at the European Game Developers Conference in 2010Image Credit: Official GDC/flickr Eric Chahi recalls his first meeting with Briais with equal clarity. “The event organizers presented me to [Briais’ programming group] The Removers,” he says. “They asked me if it would be possible to port Another World on Jaguar. I was impressed by their ability to code on this machine. These guys sounded like crazy people, so I immediately said, ‘Yes.'” But the Out of This World Jaguar project remained just a concept until 2010, when Briais finally had the time to seriously work on it. Chahi provided Briais with the original Atari source code, along with the latest data and enhanced graphics from the 15th anniversary edition. 
“I gave Seb technical info on the game engine,” he says, “and later I resized the graphics to the native size of the Jaguar so that there is no dithering [scattering of pixels to make up for a limited color palette].” With Chahi’s support, Briais managed to not only get the game running but take it to a stage where it was outperforming the original. “About one year ago, [Eric] came to my home and tried a beta version,” says Briais. “I think he was quite impressed by the console, as the game runs very smoothly on it.” “It was like jumping into an alternate reality in the past where someone coded Another World on this computer,” recalls Chahi. “I was amazed by the quality of this version. Seb coded it in assembly language using the advantage of the Jaguar hardware. It is one of the best versions, clearly. The code is so well optimized that if the frame rate is not limited, it can run maybe at least five times faster than the original with all the enhanced graphics.” Gallery: GalleryAbove: The box art for Out of This World's Jaguar release.Image Credit: Removers / RGC 1 2 View All Trending Research
计算机
2015-48/1912/en_head.json.gz/9376
Traveler's Insurance: IRM Protects Your Documents Wherever They Go Command and ControlAt the core of any IRM system is the policy server where you define a set of rights as broadly or narrowly as you require. Webster says the way the technology companies approached this was to use a process in which the document would "phone home" to the policy server. "A user who wants to use information in a document has to call the [policy] server to get rights," she says. According to David Mendel, senior product marketing manager for content management and archiving at EMC (which purchased IRM vendor Authentica in 2006), the policy server provides the ability to set policies dynamically. "There is a separate policy server on which encryption keys and the policies are stored. That’s important," Mendel says, "because this is what allows for dynamic policy control, which is what you need in a business setting." This provides the ability to change policies on-the-fly over time, even completely revoking the ability to open the document if needed.Webster provides an example: If a company is taking bids to outsource its manufacturing overseas, it has to share designs and drawings with potential manufacturers. The company, she explains, has to give enough information for these manufacturing companies to make a meaningful bid. "If you send this information to half a dozen companies, you want to be able to revoke the access after you make a decision for those you didn’t pick." She says by using a policy server, it forces recipients to access the server periodically to open the document. And if you revoke the rights, the next time they try to open the document, it will no longer open.Documents can also be configured to work even when there is no internet access to enable contact with the policy server. Webster says it’s not necessarily constant control if you don’t want it to be. "There is this notion of conditional access, if you will. It’s not up to the minute, but you could generate a file that had to access on every [use] or you could generate a file that has to tag back up on certain time intervals to continue access."Creating PolicyCompanies can establish policies in whatever organizing principle makes the most sense. Landwehr suggests creating policies in the same manner as old paper document designations, such as stamping a file Confidential. "You can tie a policy to a document where the policy would be defined in human-friendly terms like ‘Company Confidential’ or ‘Board of Directors Restricted’ and within that policy define the authorized users and groups and what permissions they have," he says.Gaudet prefers to look at it from a role-based perspective—which roles have access to this document. "We built a series of best practices and we have a methodology we developed. Right now you have no protection. Anyone can access the content." He has customers draw a circle; anyone inside the circle can access content and anyone outside can’t. From there, he says, customers can refine the process and create inner circles within the larger circle to define more granular usage rights. "The more specific the business process, the more you know about the people involved and what rights they should have," he says. How Do I Open This?After an organization establishes IRM policies, what happens to a given document is driven by the person’s role and what he or she can do with it as defined on the policy server. 
But each solution requires that the recipient have a client capable of checking in with the policy server to access credentials. Mendel describes the EMC client solution: "It includes a client piece, which is a plug-in for technical business applications such as Microsoft Office, PDF, Outlook email, and BlackBerry. The plug-in allows you to use the native business application and access the controlled document." Mendel explains that if you don’t have the required plug-in, when you try to open it, a text box opens indicating the document is protected with the EMC information rights management. You can follow a link to download the client, and you will be able to open this type of protected document in the future.However, there could be instances where a business user has a legitimate need to access a document but does not fall within the sphere of acceptable users. In order to keep information flowing smoothly, the Oracle SealedMedia IRM solution allows business users to provide permission for valid business reasons on-the-fly. Andy MacMillan, VP of product management at Oracle (which bought Stellent and its IRM product SealedMedia in 2006), says, "If I need access to this document, it doesn’t make sense for me to contact IT and ask to be added to a role when they [probably] don’t know the business reason why I should be added." What Oracle does here, MacMillan explains, is to display a webpage with a link to a contact person who can fill out a web-based form and grant an exception to view the document. MacMillan points out there is an audit trail of this activity so IT can check to see which people have been given access to a document.
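None of the vendors' client plug-ins or server APIs are shown here, but the "phone home" flow Webster and Mendel describe is easy to sketch in miniature. The following toy example (Python, purely illustrative - not EMC's, Oracle's or any other vendor's actual interface) models a policy server as an in-memory table and shows how revoking a grant makes the next open attempt fail, as in the outsourced-bid scenario above.

```python
from datetime import datetime, timedelta

# Toy policy store: document id -> {recipient: (permissions, expiry)}.
POLICIES = {
    "bid-drawings.pdf": {
        "vendor-a": ({"view"}, datetime.now() + timedelta(days=30)),
        "vendor-b": ({"view", "print"}, datetime.now() + timedelta(days=30)),
    }
}

def rights_for(document_id, recipient):
    """What the policy server answers when a protected document phones home."""
    grant = POLICIES.get(document_id, {}).get(recipient)
    if grant is None:
        return set()                      # unknown recipient, or access revoked
    permissions, expiry = grant
    return permissions if datetime.now() < expiry else set()

def open_document(document_id, recipient, action="view"):
    allowed = action in rights_for(document_id, recipient)
    print(f"{recipient} -> {action} on {document_id}: {'allowed' if allowed else 'denied'}")
    return allowed

# The bid is awarded, so the losing vendor's grant is revoked; the next
# phone-home check fails and the document no longer opens for them.
open_document("bid-drawings.pdf", "vendor-a")
del POLICIES["bid-drawings.pdf"]["vendor-a"]
open_document("bid-drawings.pdf", "vendor-a")
```

A production system adds authentication, key management, the offline "conditional access" lease intervals Webster mentions, and the audit trail MacMillan describes, but the control point is the same: the decision lives on the server, not in the file.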
计算机
2015-48/1912/en_head.json.gz/9466
Terms and Conditions / Privacy Policy

Effective Date: November 1, 2010

Your privacy is very important to us. This Privacy Policy ("Privacy Policy") is designed to explain: (i) how Milkshake, LLC ("Milkshake", "we", "our" or "us") collects, uses, secures and shares information; (ii) what type(s) of information we collect; (iii) how you can edit and/or delete certain information; and (iv) what this Privacy Policy covers: your access and/or use of our Web site (www.getmilkshake.com), together with any other service or product that we provide to you, including your receipt of our daily email(s) (hereafter, starting with the phrase "our Web site", collectively referred to as "Services"). This Privacy Policy does not apply to the practices of companies that Milkshake does not own or control, or to people that Milkshake does not employ or manage. By accessing or using our Services, you agree to the terms of this Privacy Policy as they may be amended frequently. As we update and expand our Services, this Privacy Policy may change, so we encourage you to check back to this page from time to time. This Privacy Policy is incorporated into, and part of, the Milkshake terms of use, which is the agreement between you and Milkshake that governs your access to and use of our Services in general. This Privacy Policy shall be interpreted under the laws of the United States, regardless of the location of individual users.

Information Collection and Use

We collect personally identifiable information and non-personally identifiable information (collectively, "Information") for the following purposes:

Providing Services to you
To fulfill your requests for our Services
To tailor your experience
Showing you and/or providing to you by email content that we think might be of interest to you, and displaying content according to your preferences and our terms
Contacting or notifying you
Marketing and communications with you in relation to our Services
Performing market research via surveys to better serve your needs, improve the effectiveness of our Web site, your experience, our various types of communications and/or promotional activities
User traffic patterns
To pre-fill fields
To maintain, protect and improve our Services (including advertising services)
Ensuring the technical functioning and optimization of our network
Developing new services and products

Types of Personally Identifiable Information We Collect and From Whom

We collect personally identifiable information when you voluntarily provide it to us or when you authorize us to collect it on your behalf. The type(s) of personally identifiable information we may collect from you are set forth below.
Your geographical coordinates
A persistent identifier, such as a subscriber number held in a cookie or processor serial number, that is combined with other available data that identifies an individual
Any other personally identifiable information needed to provide you with a Service

Examples of scenarios where you may provide us with your personally identifiable information include:

Communicating with us or our third party providers
Requesting information, data, content or material
Participating in an online survey
Requesting inclusion in an email or other mailing list
Submitting an entry for a contest or other promotions
Filling out a questionnaire
Any other business-related reason to provide you with our Services

We do not share or transfer your personally identifiable information with third parties without your consent, except under the limited conditions described below. We may combine the Information you provide (including demographic and profile data) with information we receive from other sources in order to provide you with our Services, a better or more tailored experience and to improve the quality of our Services. For certain Services, we may give you the opportunity to opt out of combining such information.

When you access our Services, we or our service providers may send one or more cookies - a small file containing a string of characters - to your computer that uniquely identifies your browser. We use cookies to improve the quality of our Services by storing user preferences and tracking user trends.

Log In Information

When you access our Services, our service provider's servers automatically record information that your browser sends whenever you visit a Web site. These server logs may include information such as your Web request, Internet Protocol address, browser type, browser language, the date and time of your request and one or more cookies that may uniquely identify your browser.

Web Beacons

We also may use "clear GIFs" (aka "Web beacons" or "pixel tags") or similar technologies in the Services, including in our communications with you, to enable us to know whether you have visited a Web page or received a message. A clear GIF is typically a one-pixel, transparent image (although it can be a visible image as well), located on a Web page or in an e-mail or other type of message, which is retrieved from a remote site on the Internet enabling the verification of an individual's viewing or receipt of a Web page or message. Information collected from invisible pixels is used and reported in the aggregate and does not contain personally identifiable information. We and our advertising partners may use this information to improve our Services, including marketing programs and content.

Milkshake Services on other Web sites

We offer some of our Services on or through other Web sites. Information, including personally identifiable information, that you provide to those Web sites may be sent to us in order to deliver our Services. We process such information under this Privacy Policy. The affiliated Web sites through which our Services are offered may have different privacy practices and we encourage you to read their privacy policies. We have no responsibility or liability for the practices, policies and security measures implemented by third parties on their Web sites.

Third Party Applications

Our Services may include Widgets, which are interactive mini-programs that run on our Web site to provide specific services from another company (e.g.
allowing you to share a profile with your Facebook or Twitter account). Personal information, such as your email address, may be collected through the Widget. Cookies may also be set by the Widget to enable it to function properly. Information collected by this Widget is governed by the privacy policy of the company that created it.

Information Disclosure to Third Parties

We share your Information, including your personally identifiable information, with third parties only in limited circumstances where we believe such sharing is 1) reasonably necessary to provide Services, 2) legally required, or 3) permitted by you. We may disclose your Information to government authorities, and to other third parties when compelled to do so by government authorities, at our discretion. We also may disclose your Information, including your personally identifiable information, when we have reason to believe that someone is causing injury to or interference with our rights or property, other users of the Services, or anyone else that could be harmed by such activities.

Information Transfer

If Milkshake becomes involved in a merger, acquisition, liquidation, or any form of sale or transfer of some or all of its assets, you affirmatively consent to our transfer or sale of all of your Information, including your personally identifiable information. We will provide notice to you before your personally identifiable information is transferred and becomes subject to a different privacy policy.

The importance of security for your personally identifiable information is of utmost concern to us. Milkshake and its third party providers use commercially-reasonable security measures to protect against the loss, misuse, and alteration of personally identifiable information collected by and/or provided to us and under our or such third party provider's control. No security system is completely secure. Accordingly, we and our third party providers cannot guarantee the complete security of your Information, including your personally identifiable information.

Milkshake processes personally identifiable information only for the purposes for which it was collected and in accordance with this Privacy Policy. We review our data collection, storage and processing practices to ensure that we only collect, store and process the personally identifiable information needed to provide or improve our Services or as otherwise permitted under this Privacy Policy. We take commercially reasonable steps to ensure that the personally identifiable information we process is accurate, complete, and current, but we depend on you to update or correct your personally identifiable information whenever necessary.

We also provide links to other Web site(s) that may have their own information collection practices which are operated and hosted by third parties. These other Web sites are governed by their own privacy policies or information collection practices, which may be substantially different from ours. We have no responsibility or liability for the practices, policies and security measures implemented by third parties on their Web sites. We encourage you to review the privacy policies and information collection practices of those Web sites. Information provided to those Web sites will be subject to the privacy policy posted at that Web site.

Third Party Advertisers

We may use third-party advertising companies to serve ads when you visit our Web site or to provide you with Services.
These companies may use information (generally not including your name, address, email address or telephone number) about your visits to this and other Web sites in order to provide advertisements about goods and services of interest to you. These companies may employ cookies and clear gifs to measure advertising effectiveness. Any information that these third parties collect via cookies and clear gifs is generally not personally identifiable (unless, for example, you provide personally identifiable information to them through an ad or e-mail message). We encourage you to read these businesses' privacy policies if you should have any concerns about how they will care for your personal information. If you would like more information about this practice and to know your choices about not having this information used by these companies, see the Network Advertising Initiative's consumer Web site.

Send to a Friend/Colleague

When you ask us to invite or send information to a friend or colleague, we will send them a message on your behalf using your name. The invitation or message may also contain information about related products, services, promotions or events. We may also send up to two reminders to them in your name. If they do not want us to keep their information, we will also remove it at their request.

Location Data

Milkshake or its third party providers may offer location-enabled services. If you use those services, Milkshake may receive information about your actual location (such as GPS signals sent by a mobile device) or information that can be used to approximate a location (such as a cell ID).

In the unlikely event that we believe that the security of your personally identifiable information in our possession or control may have been compromised, we may seek to notify you of that development. If a notification is appropriate, we would endeavor to do so as promptly as commercially reasonably possible under the circumstances, and, to the extent we have your e-mail address, we may notify you by e-mail. You consent to our use of e-mail as a means of such notification.

When you contribute to a public area or feature of our Services, such as a chat room, bulletin board, list serve, wall, blog, wiki or other open forum that we may make available on or through our Web site or Services, the information that you submit may be made available to the general public. For this reason, we recommend that you do not submit any sensitive information, including your full name, home address, phone number, other information that would enable other users to locate you, or financial information on these areas.

Data Transfers Across International Borders

If you are unsure whether this Privacy Policy is in conflict with the applicable local rules where you are located, you should not submit your or any other person's personally identifiable information to us. Anyone who accesses, uses or interacts with us, our third party providers or the Services and who submits personally identifiable information or provides any other information does thereby consent to: (i) the international transfer and/or storage of all such information, including personally identifiable information, to a country which may be deemed to have inadequate data protection and (ii) the collection, use and sharing of such information, including personally identifiable information, as provided in this Privacy Policy.
Wireless Addresses

If the email address you provide to us is a wireless email address, you agree to receive messages at such address from or on behalf of Milkshake (unless and until you opt out). You understand that your wireless carrier's standard or premium rates apply to these messages. You represent that you are the owner or authorized user of the wireless device on which messages will be received, and that you are authorized to approve the applicable charges.

With identity theft a continuing problem, it has become increasingly common for unauthorized individuals to send e-mail messages to consumers, purporting to represent a legitimate company such as a bank or on-line merchant, requesting that the consumer provide personal, often sensitive information. Sometimes, the domain name of the e-mail address from which the e-mail appears to have been sent, and the domain name of the Web site requesting such information, appears to be the domain name of a legitimate, trusted company. In reality, such sensitive information is received by an unauthorized individual to be used for purposes of identity theft. This illegal activity has come to be known as "phishing." If you receive an e-mail or other correspondence requesting that you provide any sensitive information (including your password(s) or credit card information) via e-mail or to a Web site that does not seem to be affiliated with our Web site, or that otherwise seems suspicious to you, please do not provide such information, and report such request to us at [email protected].

We reserve the right to add special protections for minors (such as to provide them with an age-appropriate experience), recognizing this may provide minors a more limited experience of our Services. If a minor (as defined by applicable law) provides us with his/her data without parental or guardian consent, we encourage the parent or guardian to contact us to have this information removed and to unsubscribe the minor from future Milkshake marketing.

Please note that this Privacy Policy may change from time to time in our sole and absolute discretion. We will not reduce your rights under this Privacy Policy without your consent, and we expect most such changes will be minor. Regardless, we will post any Privacy Policy changes on this page and, if the changes are significant, we will provide a more prominent notice, which may include email notification of Privacy Policy changes. Your access, use or interaction with us, our third party providers or the Services following the posting of such changes or the updated Privacy Policy will constitute your acceptance of any such changes. We encourage you to review our Privacy Policy frequently to make sure that you understand how we collect, track, compile, use and share information.

Questions, Comments or Changes to Your Personally Identifiable Information

If you have any questions or comments, or wish to change the personally identifiable information we have about you, please contact us at: [email protected].
计算机
2015-48/1912/en_head.json.gz/9667
LINFO

Backslash Definition

The backslash is an upward-to-the-left sloping straight line character that is used mostly in a computer context.

Its main use in Unix-like operating systems and in some programming languages (e.g., C and Perl) is as an escape character, that is, to indicate that the following character has a special meaning. For example, a backslash followed by a lower case letter n (i.e., \n) represents a new line and a backslash followed by a lower case t (i.e., \t) represents a tab. Such sequences are referred to as escape sequences.

Another application is in the TeX typesetting system, used on Unix-like operating systems, in which the backslash is used to represent the start of a markup tag. Also, in some text editors a backslash appears at the end of each line that wraps around to the next line.

For most people, however, the most familiar use of the backslash is as a separator for directory names and file names in MS-DOS and the various Microsoft Windows operating systems. This role is similar to the use of the forward slash (i.e., a straight line sloping upward to the right) in Unix-like operating systems and the Internet. However, the forward slash has an additional role in Unix-like operating systems of representing the root directory (i.e., the directory that contains all other directories). Microsoft operating systems generally do not have a single root directory and thus cannot use the backslash for this purpose.

The backslash was selected as the path delimiter in some early operating systems because the forward slash had already been in use to designate command-line options. However, options were designated by hyphens in the original UNIX, and thus in all of its descendants; consequently, forward slashes were the logical choice for directory and file separators.

Created September 5, 2005. Copyright © 2005 The Linux Information Project. All Rights Reserved.
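As an added illustration (not part of the original LINFO article), the short Python snippet below shows both backslash roles described above: the C-style escape sequences \n and \t, and the Windows path-separator role, which usually forces either doubled backslashes or raw strings.

```python
# Illustration of the two backslash roles described above (Python follows
# the C-style escape conventions mentioned in the article).

# 1. As an escape character: \n is a new line, \t is a tab.
print("first line\n\tsecond line, indented by a tab")

# 2. As a path separator on Windows. In a normal string literal each
#    backslash must itself be escaped; a raw string (r"...") avoids that.
windows_path = "C:\\Users\\alice\\notes.txt"
same_path    = r"C:\Users\alice\notes.txt"
unix_path    = "/home/alice/notes.txt"      # forward slashes on Unix-like systems

print(windows_path == same_path)            # True
print(unix_path.split("/"))                 # ['', 'home', 'alice', 'notes.txt']
```

Running it prints a two-line message, confirms that the escaped and raw spellings of the Windows path are identical, and splits the Unix path on forward slashes.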
计算机
2015-48/1912/en_head.json.gz/11114
Microsoft Vs. Medium: A Tale Of Two Office Cultures

By Elise Hu
Aug 28, 2013

Microsoft CEO Steve Ballmer oversaw a system called "stack ranking," which employees have called toxic.

Originally published on August 28, 2013 3:36 pm

In the flood of stories about Steve Ballmer's time at the helm of Microsoft, a troubling symbol of the company's office culture keeps emerging. It's called "stack ranking," a system that had corrosive effects on Microsoft employees by encouraging workers to play office politics at the expense of focusing on creative, substantive work. Kurt Eichenwald explained the system in a Vanity Fair feature last year:

"The system — also referred to as 'the performance model,' 'the bell curve,' or just 'the employee review' — has, with certain variations over the years, worked like this: every unit was forced to declare a certain percentage of employees as top performers, then good performers, then average, then below average, then poor. ...

"For that reason, executives said, a lot of Microsoft superstars did everything they could to avoid working alongside other top-notch developers, out of fear that they would be hurt in the rankings. ...

" 'The behavior this engenders, people do everything they can to stay out of the bottom bucket,' one Microsoft engineer said. 'People responsible for features will openly sabotage other people's efforts. One of the most valuable things I learned was to give the appearance of being courteous while withholding just enough information from colleagues to ensure they didn't get ahead of me on the rankings.'

"Worse, because the reviews came every six months, employees and their supervisors — who were also ranked — focused on their short-term performance, rather than on longer efforts to innovate."

Most technology companies are moving in the complete opposite cultural direction, especially because they need to innovate. Google encourages its employees to spend 20 percent of their time working on individually exciting projects that aren't mandated by the company. Gaming software company Valve and software design firm Menlo Innovations, which I profiled Monday, have eschewed management layers entirely so that the entire team can feel empowered by making decisions.

"Top-heavy and fairly tall [companies] found out that they stifled creativity to a greater extent because the top level of managers don't necessarily have all the creative ideas," says Stephen Courtright, a business professor at Texas A&M University.

Courtright and other researchers conducted a "study of studies" on workplace satisfaction and found that for individual employees, the most motivating factor for them is having decision-making authority. "When individuals feel like they've been empowered, they are more likely to perceive the organization as their own, so they identify more with the organization. They tend to have a greater sense of self worth at work. They tend to have more motivation to help the organization succeed, and be more creative and innovative as well," Courtright says.

That brings us to the company culture at Medium, a kind of anti-Microsoft that's behind a new content platform. Medium was launched by Ev Williams, the co-founder of Twitter. As documented in a feature on First Round Capital's site, Medium is experimenting with a culture that aims to be more "human." How?
Medium's Jason Stirman threw out the "classic management advice" about not getting too chummy with his reports and shielding his team from wider organizational drama. He found that taking his employees out for drinks, knowing about their personal lives and not insulating the team from external concerns actually made them feel more connected to the company, happier and more productive.

So Medium has adopted "holacracy" as its management framework. Some takeaways, as explained to First Round Capital:

"Holacracy encourages people to work out their tensions and issues one-on-one or outside of meetings if possible. Given the rampant explosion of meetings in corporate environments (so much so that there are meetings about having too many meetings), this is an increasingly important tip. Tension meetings are defined as opportunities to air issues that couldn't be resolved elsewhere. People should only address the group with topics that actually need others to weigh in or help find a path forward.

"Establishing mutual accountability can make a highly tiered workplace feel flatter and more engaging. In addition to informing his reports about what was going on throughout the company, Stirman wishes he would have shared his own list of tasks and concerns with the people on his team. That way he would have been accountable to them too and made them feel less managed. 'At Twitter, there was this common power dynamic where my reports felt accountable to me to get their work done and I felt accountable to the guy above me. It would have been good to be more forthcoming.' "

So far, holacracy is working at Medium, and the wider trend toward flatter offices continues in the tech sector and beyond. Meanwhile, Microsoft has Ballmer in charge for another year. In an interview with the Seattle Times last month, he said that he's sticking with the stack-ranking system that's already driven so many Microsoft employees away.

Copyright 2014 NPR. To see more, visit http://www.npr.org/.
计算机
2015-48/1912/en_head.json.gz/11743
Recent posts by Caractacus on Kongregate

Mar 21, 2009 3:14am
Caractacus

Best RTS game?
X Com. It's nearly 20 years old, it still beats down everything since.

Topic: Technical Support / Drift Runners
Don't know if anyone else is having this problem but everytime I attempt to open Drift Runners, it crashes my browser. In fact it crashes everything, I've attempted it in Opera, Firefox and IE. All of them crashed. Several times. Any possible solutions?

Aug 6, 2008 9:46am

Pandemic 2 -- Disease Names

If you could make your own MMO...
I have tons of ideas for Games. But the one that would really really work for an MMORPG would be a Warhammer version – specifically the Space Marine Universe where you choose your species, then you can either sit around on your planet and do quests there, or join a faction (Space Marines, other armies etc) You start off as a grunt in a massive Universe War and you have to survive and kill in order to progress up through the ranks. Eventually if you get to the high levels you get command of your own ship and the troops within, you can even get higher in the Imperium so that you control the strategy and tactics, maybe even President of The Imperium. The point would be that all points of the game are Player Controlled, so you're not just completing quests in a group or alone, you're actually conducting a war from a players perspective, whichever rank you are at. So if you are a General, you can choose to send a ship to attack an enemy planet, and if you're playing the Enemy someone will be playing the Governor of that Planet who has either invested in defenses or not. You get the idea. I reckon we'll see a game like that in the future, but it's still a ways off. I'd also like to see an MMORPG set in Discworld.

What Is Beauty?
Great Question. One that you can never have a wrong answer for. I did my dissertation on beauty. Here are the quotes I culled for the foreward.
Remember, if "all these extremely clever and famous people cannot agree even on the basic tenets of beauty then it’s pretty impossible for anyone else to (Crap at html so can’t separate these out – the quotes come in pairs): I never remember that anything beautiful…was ever shown, though it were to a hundred people, that they did not all immediately agree that it was beautiful." The Sublime and Beautiful, Edmund Burke “Everything has its Beauty but not everyone sees it.” Analects, Confucius “Everything beautiful has its moment and then passes away.” Las Ruinas, Luis Cernuda “A thing of Beauty is a joy forever; Its loveliness increases; it will never Pass into nothingness” Endymion, John Keats “Beauty stands In the admiration only of weak minds Led captive” Paradise Regained, John Milton “If you get simple Beauty, and nought else. You get about the best thing God invents.” Fra Lippo Lippi, Robert Browning “The good is the beautiful.” Lysis, Plato “It is amazing how complete is the delusion that Beauty is goodness.” The Kreutzer Sonata, Leo Tolstoy “Remember that the most beautiful things in the world are the most useless.” The Stones of Venice, John Ruskin “Think of all the Beauty around you and be happy.” The Diary of a Young Girl, Anne Frank Myspace Suicide story “Also, you have to admit the girl must have been pretty self obssessed, pathetically depressed, shy, addicted to the feel of people liking her, somewhat to greatly mentally ill, overly sensitive to rejection, or a combination of all of those for somebody on the internet who was somewhat friends with them to kill herself over that.” Sounds like your average teenager to me. American strategy, then and now. The allied headcount is extremely high – however the deaths are coming from security firms like blackwater which are not reported. Take the military deaths which are reported. Times that by two or three and add the two numbers together to get a more realistic picture of the allied death toll. Plus Iraq is pretty much now in a full blown civil war between two or three or four guerilla armies – the Americans are the least of anyones problems. It’s the civilians in the middle who are dying. Evanator Songs Why would I need to edit? Now that’s more like it. Flagpole Sitter by Harvey Danger. Oh yea. Van Morrison – Meet me in the indian summer. Bit pointless as most of the bars tune to Van’s voice. And he’s a bit staccato in his singing, especially on this song. Area 51 is proof that we are gullible. Again. Scatman doesn’t do much. Guess synthesisers and drum machines don’t register highly. Hmm. 90’s dance – Corona, Rhythm of the Night. Not so good. Pity. I’d have thought they’d have gone well. Gonna try Scatman John. As I expected – Sparks are pretty good. Tried As I sit to play the organ at the Notre Dame Cathedral. Some pretty easy bits mixed with high intensity mental play. This town ain’t big enough for the both of us might be better. Also found a glitch. If you start a song and get everything pretty even and then upload something, the song stops but your score keeps going. Anyone tried Bohemian Rhapsody yet? A thread for the songs you upload on Evanator. Tell us whether they rock or not. My first one Cold War Kids: Hang me up to dry. Great fun. Slow to start and then goes mental. Ryan (and everyone else) I suggest you read The War of the Flea by Robert Taber. It’s an excellent book showing how guerilla warfare trumps conventional warfare everytime – but only if you are defending. For offence it’s utterly useless. 
Some of the main reasons. Guerillas are not conventional uniformed troops. They are defending and so always have the support of the civilian population. Any time they become threatened by the enemy they simply hide their weapons and blend in with the civilian population in ten minutes. Offensive troops cannot do this. They are uniformed, they do not usually have the support of the civillians. In todays world they cannot shoot civilians indiscriminately. Therefore they cannot find the enemy. This is what happened in Vietnam and is happening in Iraq. Guerillas move on foot, they know the territory like the back of their hand, they carry small arms. They can strike and disappear. Offensive troops have heavy machinery to bring up. Tanks etc. They are slow. If they abandon the heavy machinery, they are vulnerable. Guerillas do not need to hold any ground at all because the civilians do that for them. If they lose control of a town or village they simply wait until the enemy passes through and then pop up behind them. Offensive countries have to hold all the ground they take – otherwise their supply lines are cut, their troops in front of the line are lost. This becomes extremely intensive in terms of soldiers and equipment as Guerillas could attack at any point at any time. Basically – as Taber explains in the book, Guerilla Warfare is unbeatable for defending countries. While the Iraqi army failed miserably against the American Army, the Americans don’t even know who they are fighting right now. The last person to succesfully fight guerillas was Hitler. And even then he only had limited success. As any war film will tell you – he had no compunction about having his officers massacre the civilians in any place that resisted thereby partly removing the support base of guerilla fighters. His Blitzkrieg form of attack where communications were cut immediately also hampered resistance by stopping any weapons reaching beleagured towns before they had been taken. Afghanistan is different because the resistance is Taliban – not guerillas from the civilian population. And everyone has grievances against the Taliban. Taber’s book is I believe par for the course in army training, but obviously not in Pentagon Training. Basically, it is impossible for any country to offensively occupy another country for any major period of time. Guerilla armies are impossible to find let alone rout. While Iraq was being freed from Saddam, the Americans had the support of the civilians, when it turned into an occupation then the population turned and will hound the Americans until they leave. No matter the troop surges, no matter what. The troop surge is having results, but all that is, is the guerilla fighters going to ground while there are more offensive troops around. The surge is seen to be working, so at some point it will end, and oh look, all those fighters will pop up again. The Guerillas have all the time in the world on their side. The Americans cannot stay in Iraq indefinitely. Iraq is slightly easier than Vietnam because the Iraqis don’t have the dense terrain the Vietcong did but in the long run it won’t make a slight bit of difference. America was defeated the day it stayed longer than the fall of Saddam and Robert Taber predicted it in the 1970’s. Pity George Bush and Donald Rumsfeld can’t read. I think the change obviously occured in Vietnam where thousands upon thousands of raw recruits were sent in and thousands upon thousands came out in bodybags. 
Mobile warfare (Tanks) were obviously pretty much useless in that environment and it was probably noticed that the longer you survived by chance, the more chance you had of surviving by skill. However I think it probably has gone way too far the other way. Vietnam was a unique environment (also the pacific campaign during WW2, Korea etc) when we come back to Iraq – it’s not much more than a wide open flat space, mostly desert. small squads of well trained men simply can’t cover that expanse, in the Iraq campaign you need numbers. Different battlefields require different tactics and the Pentagon still haven’t realised this. Put five squaddies on the corner of every street in Baghdad, and the enemy cannot move, but there are not enough troops to do this so the enemy can stay in control of everywhere outside the Green Zone. It’s no good having 500 commandos if they can’t get outside their own bunker. I’m not for the war in Iraq. But if the Pentagon had had any kind of military nous at all they would have ended this at the initial invasion, which would have been the best possible outcome. Because they are morally, militarily and politically redundant they have messed it up beyond belief. Kongregate debate team! (Historical Document) And another one: number 13: New studies from both sides of the Atlantic reveal that Roundup, the most widely used weedkiller in the world, poses serious human health threats. More than 75 percent of genetically modified (GM) crops are engineered to tolerate the absorption of Roundup—it eliminates all plants that are not GM. Monsanto Inc., the major engineer of GM crops, is also the producer of Roundup. Thus, while Roundup was formulated as a weapon against weeds, it has become a prevalent ingredient in most of our food crops. Three recent studies show that Roundup, which is used by farmers and home gardeners, is not the safe product we have been led to trust. A group of scientists led by biochemist Professor Gilles-Eric Seralini from the University of Caen in France found that human placental cells are very sensitive to Roundup at concentrations lower than those currently used in agricultural application. An epidemiological study of Ontario farming populations showed that exposure to glyphosate, the key ingredient in Roundup, nearly doubled the risk of late miscarriages. Seralini and his team decided to research the effects of the herbicide on human placenta cells. Their study confirmed the toxicity of glyphosate, as after eighteen hours of exposure at low concentrations, large proportions of human placenta began to die. Seralini suggests that this may explain the high levels of premature births and miscarriages observed among female farmers using glyphosate. Seralini’s team further compared the toxic effects of the Roundup formula (the most common commercial formulation of glyphosate and chemical additives) to the isolated active ingredient, glyphosate. They found that the toxic effect increases in the presence of Roundup ‘adjuvants’ or additives. These additives thus have a facilitating role, rendering Roundup twice as toxic as its isolated active ingredient, glyphosate. Another study, released in April 2005 by the University of Pittsburgh, suggests that Roundup is a danger to other life-forms and non-target organisms. Biologist Rick Relyea found that Roundup is extremely lethal to amphibians. 
In what is considered one of the most extensive studies on the effects of pesticides on nontarget organisms in a natural setting, Relyea found that Roundup caused a 70 percent decline in amphibian biodiversity and an 86 percent decline in the total mass of tadpoles. Leopard frog tadpoles and gray tree frog tadpoles were nearly eliminated. In 2002, a scientific team led by Robert Belle of the National Center for Scientific Research (CNRS) biological station in Roscoff, France showed that Roundup activates one of the key stages of cellular division that can potentially lead to cancer. Belle and his team have been studying the impact of glyphosate formulations on sea urchin cells for several years. The team has recently demonstrated in Toxicological Science (December 2004) that a “control point” for DNA damage was affected by Roundup, while glyphosate alone had no effect. “We have shown that it’s a definite risk factor, but we have not evaluated the number of cancers potentially induced, nor the time frame within which they would declare themselves,” Belle acknowledges. There is, indeed, direct evidence that glyphosate inhibits an important process called RNA transcription in animals, at a concentration well below the level that is recommended for commercial spray application. There is also new research that shows that brief exposure to commercial glyphosate causes liver damage in rats, as indicated by the leakage of intracellular liver enzymes. The research indicates that glyphosate and its surfactant in Roundup were found to act in synergy to increase damage to the liver. UPDATE BY CHEE YOKE HEONG Roundup Ready weedkiller is one of the most widely used weedkillers in the world for crops and backyard gardens. Roundup, with its active ingredient glyphosate, has long been promoted as safe for humans and the environment while effective in killing weeds. It is therefore significant when recent studies show that Roundup is not as safe as its promoters claim. This has major consequences as the bulk of commercially planted genetically modified crops are designed to tolerate glyphosate (and especially Roundup), and independent field data already shows a trend of increasing use of the herbicide. This goes against industry claims that herbicide use will drop and that these plants will thus be more “environment-friendly.” Now it has been found that there are serious health effects, too. My story therefore aimed to highlight these new findings and their implications to health and the environment. Not surprisingly, Monsanto came out refuting some of the findings of the studies mentioned in the article. What ensued was an open exchange between Dr. Rick Relyea and Monsanto, whereby the former stood his grounds. Otherwise, to my knowledge, no studies have since emerged on Roundup. For more information look to the following sources: Professor Gilles-Eric, [email protected] Biosafety Information Center, http://www.biosafety-info.net Institute of Science in Society, http://www.i-sis.org.uk Just to reinvigorate this thread. Just found this site using stumble upon: http://www.projectcensored.org/censored_2007/index.htm Check down to number 11: Dangers of genetically modified food confirmed. Here’s the report: Several recent studies confirm fears that genetically modified (GM) foods damage human health. These studies were released as the World Trade Organization (WTO) moved toward upholding the ruling that the European Union has violated international trade rules by stopping importation of GM foods. 
Research by the Russian Academy of Sciences released in December 2005 found that more than half of the offspring of rats fed GM soy died within the first three weeks of life, six times as many as those born to mothers fed on non-modified soy. Six times as many offspring fed GM soy were also severely underweight. In November 2005, a private research institute in Australia, CSIRO Plant Industry, put a halt to further development of a GM pea cultivator when it was found to cause an immune response in laboratory mice.1 In the summer of 2005, an Italian research team led by a cellular biologist at the University of Urbino published confirmation that absorption of GM soy by mice causes development of misshapen liver cells, as well as other cellular anomalies. In May of 2005 the review of a highly confidential and controversial Monsanto report on test results of corn modified with Monsanto MON863 was published in The Independent/UK. Dr. Arpad Pusztai (see Censored 2001, Story #7), one of the few genuinely independent scientists specializing in plant genetics and animal feeding studies, was asked by the German authorities in the autumn of 2004 to examine Monsanto’s 1,139-page report on the feeding of MON863 to laboratory rats over a ninety-day period. The study found “statistically significant” differences in kidney weights and certain blood parameters in the rats fed the GM corn as compared with the control groups. A number of scientists across Europe who saw the study (and heavily-censored summaries of it) expressed concerns about the health and safety implications if MON863 should ever enter the food chain. There was particular concern in France, where Professor Gilles-Eric Seralini of the University of Caen has been trying (without success) for almost eighteen months to obtain full disclosure of all documents relating to the MON863 study. Dr. Pusztai was forced by the German authorities to sign a “declaration of secrecy” before he was allowed to see the Monsanto rat feeding study, on the grounds that the document is classified as “CBI” or “confidential business interest.” While Pusztai is still bound by the declaration of secrecy, Monsanto recently declared that it does not object to the widespread dissemination of the “Pusztai Report.”2 Monsanto GM soy and corn are widely consumed by Americans at a time when the United Nations’ Food and Agriculture Organization has concluded, “In several cases, GMOs have been put on the market when safety issues are not clear.” As GMO research is not encouraged by U.S. or European governments, the vast majority of toxicological studies are conducted by those companies producing and promoting consumption of GMOs. With motive and authenticity of results suspect in corporate testing, independent scientific research into the effects of GM foods is attracting increasing attention. Comment: In May 2006 the WTO upheld a ruling that European countries broke international trade rules by stopping importation of GM foods. The WTO verdict found that the EU has had an effective ban on biotech foods since 1998 and sided with the U.S., Canada, and Argentina in a decision that the moratorium was illegal under WTO rules.3 It’s an excuse to watch endless repeats of Saved by the Bell. Making drugs illegal is silly -- but the war on drugs is simply reckless. That wasn’t me sorry. I just copied and pasted the article from the website and it came out like that. 
An interesting article on guns, drugs and crime in Britain http://news.bbc.co.uk/1/hi/uk/6937537.stm Simply carrying a gun now carries a mandatory minimum sentence of five years - so who does it and why? Guns can provide an intoxicating and almost pornographic attraction to young men who often feel powerless, according to academics in the field. Last year Gavin Hales, a criminologist from Portsmouth University, researched gun crime in a project funded by the Home Office. He interviewed 80 men in prison who had become involved in gun crime. Asked about what attracted him to guns, Tommy, a London-born crack addict and armed robber, said: "The control, the power you have got when you have got that in your hand." That power was crudely illustrated at a recent trial at the Old Bailey. GUN CRIMINALS The Babamuboni brothers Timy (left) and Diamond Babamuboni were convicted of manslaughter at the Old Bailey in December 2006 The trial was shown footage of Timy, 15, tormenting a friend with a gun They were sentenced as juveniles despite the police's suspicions that they lied about their ages Is it wrong to blame hip hop? The jury was shown footage from a mobile phone of a boy pointing a sawn-off shotgun at a terrified former friend who was forced to strip to his underpants as his tormentors laughed. Timy Babamuboni, who was aged 15, swore on the Bible he was not the boy in the footage, but he was shown to be a liar and was later convicted of the manslaughter of a woman shot dead as she held a baby at a christening party in south London. Posturing Mr Hales said police frequently came across mobile phone footage of young men posturing with guns, which may be real or imitation. In May this year police swooped on homes in Ellesmere Port, Cheshire, after discovering footage of youths posturing with a weapon. It turned out to be an imitation weapon. Things have changed a great deal since the 1960s and 1970s, when gun crime was generally restricted to armed robberies, usually by career criminals and often using shotguns. In the 1980s and 1990s the number of armed robberies fell away as more and more criminals moved into the drugs trade. Despite the 1997 ban on handguns - introduced after the Dunblane massacre - the crooks increasingly favoured pistols and revolvers, which were easier to hide and more "fashionable". Some politicians have pointed the finger at Hollywood films, violent computer games and the posturing - often with guns - on hip-hop videos. Last year Tory leader David Cameron criticised BBC Radio 1 for playing songs which he said "encouraged the carrying of guns and knives". In the past 15 years there has been a noticeable rise in black-on-black gun crime, which was recognised when the Metropolitan Police launched Operation Trident in response to appeals by the black community. But despite Mr Cameron's recent "anarchy in the UK" rhetoric, the problem pre-dates the Labour government. In March and April 1997 - under a Conservative government - there were 10 murders by gun in England alone. We have noticed for a couple of years now that the ages of people involved in gun crime is reducing and it's something that we have been deeply concerned about Det Ch Supt Helen Ball What does seem to have changed in the past decade is the average age of both offenders and victims, which has come down considerably. The average age of the victims in those 10 murders in the spring of 1997 was 29 and the youngest was aged 19. 
Ten years on, if you look at the gun deaths that took place in June and July 2007 the average age of the five victims had fallen to 25 and that falls to 20 if 47-year-old boxer James Oyebola is excluded. Detective Chief Superintendent Helen Ball, who heads up Operation Trident, recently told BBC Radio Five Live: "We have noticed for a couple of years now that the ages of people involved in gun crime is reducing and it's something that we have been deeply concerned about and until we are able to tackle that trend I am not sure that we will be able to be confident in solving this problem." She said the proportion of victims who were teenagers had risen from 19% to 31% in the last four years. She said there were many reasons for young people getting involved, but two significant factors were exclusion from school and copying the offending of older siblings. The Reverend Nims Obunge, the chief executive of the Peace Alliance, said many young people suffered from low self-esteem and this absence of "self-love" was key. He said: "When young people don't feel a sense of love for themselves, the absence of value for their lives... that is dangerous." Mr Obunge added: "Another big thing is the sense of territoriality - some call it gang culture - which has kicked off in a big way in recent years." Mr Hales said the emergence of so-called "postcode territoriality" did raise difficult questions. He said: "Is it a fad? It may be part of youth culture which may disappear very quickly, but it is a worry." Mr Hales said there was increasing evidence of an "arms race" in some communities, with youths turning from knives to guns and then to even more powerful weapons. Some youths claim this arms race forces them to carry guns for protection. But Detective Chief Inspector John Lyons, of Greater Manchester Police's Armed Crime Unit, was dismissive of that argument. The other day I had a journalist ask me what it was like living in the 'triangle of death' Erinma Bell He said: "If you are not swimming in the pool with the sharks, you don't need to behave like a shark. "You might have a gun for self-defence if you are a drug dealer, but you are just as likely to have it to make sure you get paid." But Manchester community worker Erinma Bell said there needed to be more emphasis on positive aspects of life in inner city communities such as Moss Side and Longsight and she blamed the media for perpetuating negative images. "The other day I had a journalist ask me what it was like living in the 'triangle of death'. The media should stop perpetuating these labels," she said. She said many of the youths in areas plagued by gun crime simply needed to be given real achievable alternatives as well as positive role models. Ms Bell has set up a work experience programme at construction company Laing O'Rourke and she said this sort of thing could transform the aspirations of young people in areas like Moss Side. Craze 24, a hip-hop MC from Brixton, south London, agrees there is a lack of positive role models and said: "The local role models are drug dealers, with their big gold chains, their flash cars and their money. "The young kids too often want to be like them rather than someone who is studying every day for a proper job which might take years. There is too much of a 'get-rich-quick' mentality." He said the mandatory five-year sentence for carrying a gun was just not enough. "They need to make it 10 years to really scare these kids," he said. 
Richard Garside, director of the Centre for Crime and Justice Studies, said the fact young black men were statistically over-represented among gun crime victims should not lead to misleading analysis. He said: "I don't think anybody is seriously suggesting there is a gun-carrying gene that black people inherit that white people don't. So whatever we are saying about young black men, it's not related to their blackness." Mr Garside said the high rate of gun crime in black communities was more to do with the fact the victims tended to live in inner city areas with a lack of social and economic opportunity.
计算机
2015-48/1912/en_head.json.gz/12079
The Elder Scrolls Online announced

Bethesda has announced The Elder Scrolls Online, an MMO set in Tamriel coming to PC and Mac in 2013.

Making good on a rumor from March, The Elder Scrolls Online has officially been confirmed. As suspected, the game takes place a millennium before The Elder Scrolls: Skyrim, and focuses on the daedric prince Molag Bal's attempts to pull Tamriel into the demonic realm. The game is appearing as June's Game Informer cover story, so we should see more details, screenshots, and a teaser trailer soon. For now, the setting is about as far as our knowledge goes. The game is being developed by Zenimax Online Studios, and is due in 2013 for PC and Mac.

"We have been working hard to create an online world in which players will be able to experience the epic Elder Scrolls universe with their friends, something fans have long said they wanted," said game director Matt Firor in the announcement. "It will be extremely rewarding finally to unveil what we have been developing the last several years. The entire team is committed to creating the best MMO ever made - and one that is worthy of The Elder Scrolls franchise."

Steve Watts
计算机
2015-48/1912/en_head.json.gz/12171
The place where ZX Spectrums never die

We visit the Vintage Computing Festival

Amiga. Spectrum. Atari. BBC Micro. Does a little shiver of excitement run down your spine when you read those hallowed names? Yes? You're not alone.

Vintage computers and consoles are making a comeback, of sorts. No longer gathering dust in attics across the land, they're being re-introduced into the wild to show a new generation of computer fans what they were capable of and, more importantly, what people can do with them today. There are discussion groups, fan clubs, websites, museums and even festivals dedicated to classic computers. More and more people are embracing old-school computing and they're becoming increasingly vocal about it. There are people out there whose hobbies and even jobs revolve around preserving these technologies for later generations, keeping them alive and functioning so others can discover their delights. And there are more of them than you may think.

The recent Vintage Computing Festival, which was held at The National Museum of Computing (TNMOC) at Bletchley Park, made this very apparent. The biggest celebration of vintage computing held in Britain to date, the VCF attracted over 30 private exhibitors, along with thousands of fans of technology through the ages. It wasn't just a static display of old computers in glass cases – visitors could touch the old machines, buy them, play classic games on them, program them and even log on to Twitter from them. Old technology in action.

It wasn't just a curiosity – the festival was a celebration of these old computers, especially the rarer ones. The machines weren't just from the many stored at TNMOC either, with many from private collectors keen to show off the rarities they owned, and to share them with an appreciative crowd. Whispered gasps of "It's a ZX80 – I always wanted one of those!" and "Is that really 3D Monster Maze?" abounded. For a vast number, the festival was a homecoming; a return to their digital roots.

Vintage classics

Versions of the Vintage Computer Festival (VCF) have been running for over a decade in the USA – it started, naturally, in Silicon Valley – so it was high time it made its way over to our heritage-laden shores. We spoke to lead organiser Simon Hewitt to ask how the UK version came about.

"One of TNMOC's trustees, Kevin Murrell, had heard about the Vintage Computing Festivals that were a regular event over in the US," Hewitt explains. "He mooted the idea of running a similar event here among the volunteers and I picked up on it. We combined the basics of the US and German events, but put a British slant on it by including more of our homegrown machines. We also wanted to give it a bit more of a broad appeal to families, as well as making it a showpiece event for the museum. I phoned around various friends and contacts who were interested in 'retro' computing and it all started from there."

So why was this the right time to try it out in the UK?

"Interest in retro computing and vintage computers has been steadily increasing over the last two or three years," Hewitt explains. "Various smaller events had been popping up all over the country on a regular basis, and there was always a healthy turnout in terms of visitors. Television programmes such as Micro Men and Electric Dreams, which the BBC originally screened in mid-2009, attracted healthy viewing figures that have warranted regular repeats.
This told us that the interest was out there. "I spoke to a few friends who either ran or attended the events which were already taking place, basically asking them, 'Do you think it is worth us trying to do something on a much bigger scale?' The answer was a resounding 'yes', so we did."
Park life
There's no doubt that the VCF was a huge success. "No one had predicted what the atmosphere would be like," says Hewitt. "It actually felt like a summer festival – lots of smiling faces and people genuinely enjoying themselves." The venue for the festival couldn't have been more apt: the TNMOC is located slapbang in the middle of Bletchley Park, the birthplace of digital computing. Even when not playing host to events such as the Vintage Computing Festival, it's TNMOC's mission to collect and restore computer systems, and to allow people to explore that collection for inspiration, learning and enjoyment. The museum is a charity, relying entirely on donations to continue showing off the development of computing. Its range works back from today's digital commodity masterpieces to the pioneering wartime efforts that resulted in machines such as Colossus, the first programmable electronic computing device, which was used by British codebreakers to read encrypted German messages during World War II. It's staffed mostly by volunteers who give up their time to help share these computing relics with the general public. Kevin Murrell is one of a group of trustees that set up TNMOC. We talked to him about the VCF event and asked him if he thinks the appreciation of vintage computing is on the rise. "Over the past few years, appreciation of our computing heritage has really taken off," he answers. "People are suddenly realising how far we have come in just a few decades – in their own lifetimes. One of the most common comments we all heard at the VCF was 'I've used one of those'. People realise that they are living through a time of momentous change and that they have been part of it." Murrell clearly gleans a lot of pride from his position in the TNMOC. And the most exciting part of his role? "Bringing a machine that was thought to be lost to history back to life," he says, "and then seeing the reactions of the original designers and users when the computer is running again." In many ways it's like a modern-day Dr Frankenstein position, only with fewer torches and pitchforks…
计算机
2015-48/1912/en_head.json.gz/12173
Botnets for hire likely used in attacks against US banks, security firm says The attacks are very sophisticated, security researchers say Lucian Constantin (IDG News Service) on 09 January, 2013 20:51 Evidence collected from a website that was recently used to flood U.S. banks with junk traffic suggests that the people behind the ongoing DDoS attack campaign against U.S. financial institutions -- thought by some to be the work of Iran -- are using botnets for hire. The compromised website contained a PHP-based backdoor script that was regularly instructed to send numerous HTTP and UDP (User Datagram Protocol) requests to the websites of several U.S. banks, including PNC Bank, HSBC and Fifth Third Bank, Ronen Atias, a security analyst at Web security services provider Incapsula, said Tuesday in a blog post. Atias described the compromised site as a "small and seemingly harmless general interest UK website" that recently signed up for Incapsula's services. An analysis of the site and the server logs revealed that attackers were instructing the rogue script to send junk traffic to U.S. banking sites for limited periods of time varying between seven minutes and one hour. The commands were being renewed as soon as the banking sites showed signs of recovery, Atias said. During breaks from attacking financial websites the backdoor script was being instructed to attack unrelated commercial and e-commerce sites. "This all led us to believe that we were monitoring the activities of a Botnet for hire," Atias said. "The use of a Web Site as a Botnet zombie for hire did not surprise us," the security analyst wrote. "After all, this is just a part of a growing trend we're seeing in our DDoS prevention work." "In an attempt to increase the volume of the attacks, hackers prefer web servers over personal computers," Atias said. "It makes perfect sense. These are generally stronger machines, with access to the high quality hoster's networks and many of them can be easily accessed through a security loophole in one of the sites." Another interesting aspect of the PHP-based backdoor analyzed by Incapsula is that it had the ability to multiply on the server in order to take full advantage of its resources, Atias said. "Since this is a server on the hoster's backbone, it was potentially capable of producing much more traffic volume than a regular 'old school' botnet zombie." In addition, the backdoor script provided an API (application programming interface) through which attackers could inject dynamic attack code in order to quickly adapt to changes in the website's security, Atias said. The attack script on the compromised U.K. website was being controlled through another website in Turkey that belongs to a Web design company. Incapsula's researchers believe that the Turkish site had been compromised as well and was serving as a bridge between the real attackers and their website-based botnet. A group calling itself the "Izz ad-Din al-Qassam Cyber Fighters" has taken responsibility for the recent wave of attacks against the U.S. financial websites that started in December. The same group claimed responsibility for similar attacks launched against the same financial institutions in September. The group claims that its DDoS campaign is in response to a film trailer mocking the prophet Muhammad not being removed from YouTube. However, some U.S. government officials and security experts are convinced that the attacks are actually the work of the Iranian government, The New York Times reported Tuesday. 
The possibility of Iran being behind the attacks has been advanced before. In September, former U.S. Senator Joe Lieberman, an Independent from Connecticut, who was chairman of the Senate Committee on Homeland Security and Governmental Affairs at the time, blamed the Iranian government for the attacks against U.S. banks and said that they were probably launched in retaliation for the economic sanctions imposed on Iran. The Iranian government officially denied its involvement and the U.S. government has not yet released any evidence that supports this claim. That said, the sophistication of the tools used in the attacks, as well as their unprecedented scope and effectiveness, have been advanced as arguments that this DDoS attack campaign might be state sponsored. The attacks against the U.S. financial industry from the past few months are unique in scale, organization, innovation and scope, Carl Herberger, vice president of security solutions at Israel-based network security vendor Radware, said Wednesday via email. The company cannot comment on the origin of the attacks, because it only focuses its resources on attack detection and mitigation, Herberger said. However, in Radware's view, the DDoS attack campaign against U.S. banks has represented the longest persistent cyberattack on a single industrial sector in history, he said. If someone in the U.S. government is indicating that the Iranians are doing it, like Lieberman did a few months ago, they're probably spot on, Scott Hammack, the CEO of DDoS mitigation vendor Prolexic, said Wednesday. These attackers are not using the traditional "pull" command and control technology where the botnet clients periodically connect to a server to check if new instructions are available. Instead, they are using a "push" technology to send instructions in a matter of seconds to hundreds of compromised servers, Hammack said. This allows for more dynamic attacks, but also leaves the attackers open to being identified a lot easier, Hammack said. The U.S. government is monitoring some of the compromised servers used in the attacks and can see exactly where those instructions are coming from, he said. Herberger described the DDoS attacks as well-organized and innovative in the sense that they use newly uncovered vulnerabilities and attack origins. One example is that they leverage the infrastructure of cloud providers instead of the resources of consumer-oriented computers. The attacks are definitely very sophisticated, Hammack said. The attackers know exactly what weak spots to hit and target them in rotation. They've obviously done a lot of research into the infrastructure of the banks and how it's configured, he said. "These attacks have, almost simultaneously, been launched on nearly every major commercial bank in the U.S.," Herberger said. However, not all of the targeted banks have suffered outages, which suggests that some effective defenses do exist, he said.
计算机
2015-48/1912/en_head.json.gz/12412
Welcome to Wikipedia, the free encyclopedia that anyone can edit. 5,018,662 articles in English
From today's featured article
Almirante Latorre in 1921
The Almirante Latorre class consisted of two super-dreadnought battleships designed by the British company Armstrong Whitworth for the Chilean Navy, named for Admirals Juan José Latorre and Thomas Cochrane. Construction began on 27 November 1911, but both were purchased and renamed by the Royal Navy prior to completion for use in the First World War. Almirante Latorre (pictured) was commissioned into British service as HMS Canada in October 1915 and spent its wartime service with the Grand Fleet, seeing action in the Battle of Jutland. The ship was sold back to Chile in 1920, assuming its former name. Almirante Latorre's crew instigated a naval mutiny in 1931. After a major refit in 1937, she patrolled Chile's coast during the Second World War. Almirante Cochrane was converted to an aircraft carrier and commissioned into the Royal Navy as HMS Eagle in 1924. It served in the Mediterranean Fleet and on the China Station in the inter-war period and operated in the Atlantic and Mediterranean during the Second World War before being sunk in August 1942 during Operation Pedestal. (Full article...) Part of the South American dreadnought race series, one of Wikipedia's featured topics.
Recently featured: Children of Mana · Ron Hamence with the Australian cricket team in England in 1948 · Rhythm Killers
Did you know...
William of Nottingham lecturing to a group of students
... that the 14th-century William of Nottingham (pictured)—and not the 13th-century one—was the author of the Commentary on the Gospels based on Clement of Llanthony's One from Four?
... that the purple heron often adopts a posture with its neck extending obliquely?
... that Eric Church's album Mr. Misunderstood was released with no prior warning and sent to his fan club members the day before it went on sale?
... that in one of the matches that the England cricket team played between 1920 and 1939, they won by the largest margin of any team in Test cricket?
... that seductive details may have a negative effect on learning?
... that after advocating for the bill funding construction of the Arizona Territorial Capitol, Prosper P. Parker was Speaker of the House during the first legislative session to meet there?
... that Money Pit's producer described the show as a "legal minefield"?
... that the site of Fordham Plaza was rezoned in an effort to make it the "Times Square of the Bronx"?
计算机
2015-48/1912/en_head.json.gz/12441
Posts Tagged ‘open source’ Leadership, fawgawdsake! Larry 7 comments In a Google+ post last week, Aaron Seigo rightfully ripped into “community managers” — quotes intentional, because it doesn’t really apply to all who are in charge of keeping a community functioning (more on this later) — generally who lead from above or by “star power” rather than leading by the consensus of the community. I wrote about it briefly in my weekly wrap-up on FOSS Force on Friday, but it started me to think about what makes good project leadership. As I said in my FOSS Force item, I think overall Aaron is right in his tome on G+, yet part of the problem is the term “community manager” itself, which might lend itself to the boss/worker dynamic, and whether this makes it a self-fulfilling prophecy in many communities. It very well might, and that aspect needs changing. I would rather see the interpretation of those who are given the responsibility of communities — hopefully an earned responsibility granted by the consent of the wider community — to be titled something differently: community gardener, community facilitator, community cat herder, whatever. Those in leadership positions are neither bosses giving orders nor “rock stars” to be adored. Those in charge, regardless of what they’re called, are the ones who facilitate the project through inspiring a committed and focused community. Reading Aaron’s latest salvo and the myriad of interesting comments that followed, it made me think about what makes a good leader and who might serve a project community well as a facilitator. One name kept coming up. My Dad. Larry Cafiero, Sr., more “happy warrior” than “grammar hammer,” would have made a good FOSS project facilitator. Larry Cafiero, Sr. — known as Larry the Elder to my Larry the Younger, or Senior to my Junior (prepare for some pain if you call me that to my face) like the Griffeys — was really more “Happy Warrior” than “Grammar Hammer” as a newsman, but one of the traits that made him exceptional in the field was that no job was too small for him — nothing too insignificant, nothing beneath him — either as a city desk editor at The Miami Herald or as the Herald’s longtime Special Publications Editor, the position at which he worked for the last decade of his journalism career. It was really no accident that I followed my father into the field, and I always looked to him for guidance. It always impressed me that his staff, never more than one or two, always seemed to go the extra mile, and always went above-and-beyond, for the department. One time, I asked one of his assistants why, and I was told — and I’m paraphrasing — that my father “was one of them.” I didn’t know what he meant by that until Dad and I talked about leadership when I had been given the keys to a weekly newspaper in Dade County and I had to lead a group of reporters and photographers. “Did you ever read ‘Henry V’?” He asked me. I hadn’t. He said I should read it, paying special attention to the preparation for, and the fighting of, the Battle of Agincourt. So I did. And I got it. It also made something else he said several months before a little less obtuse. We were at Johnny Raffa’s Lobo Lounge — one of Miami’s press bars in the late ’70s — and we talked over identical bourbons about what makes a great newsman. Dad’s answer was simple: You had to be like Captain Kirk. Actually, I found it odd that my father was referring to a show I knew he didn’t really watch. “You mean, I have to kiss all the green alien women on the planet?” I asked. 
I got the look, then the eyeroll, followed by the admonishment, “Oh, fawgawdsake,” in the New York accent borne of his rearing in the Maspeth section of Queens, New York. I can still hear him explaining it this way: Kirk had the ability to do everything on the Enterprise by himself, if necessary. The entire crew could drop dead and he’d still be able to fly the ship, at least in theory if not in practice. So a great newsman knows everything about producing the news — he can report, edit, lay out pages, crop photos, set type (what we did back then), make plates, put the plates on the press, and run the press. So what it comes down to is this: Creating software, or even hardware, as a community in the open-source realm means encountering many rhetorical Battles of Agincourt, and it takes special kind of leader to marshal a team of developers to perform this task, day in and day out, like clockwork. Also, it takes a special leader to be able to “fly the Enterprise” by himself or herself if necessary, having both the knowledge and the desire to pick up where parts of the team may be lagging to bring the project up to speed. You don’t get that with so-called leaders following traditional management tenets in a traditional manager/worker role. You certainly don’t get that with “rock stars,” as if that needs saying. But you get that with leadership modeled after Henry V. And Captain Kirk. And Larry Sr., fawgawdsake. This blog, and all other blogs by Larry the Free Software Guy, Larry the CrunchBang Guy, Fosstafarian, Larry the Korora Guy, and Larry Cafiero, are licensed under the Creative Commons Attribution-NonCommercial-NoDerivs CC BY-NC-ND license. In short, this license allows others to download this work and share it with others as long as they credit me as the author, but others can’t change it in any way or use it commercially. (Among the many things he does, Larry Cafiero writes news and commentary once a week — and occasionally more frequently — for FOSS Force.) Categories: Aaron Seigo, FOSS Force, GNU, GNU/Linux, linux, open source Tags: GNU/Linux, linux, open source Larry 1 comment When I found out on Friday afternoon that the lead developer for Bodhi Linux was stepping down, I went into breaking-news mode and contacted Christine Hall at FOSS Force to let her know that I was sending her a story. As an aside, FOSS Force normally “publishes” Monday-Friday and takes the weekend off, but in this case, this is a story that is too important to wait until Monday. I called up LibreOffice Writer and went to work. “Lead developer Jeff Hoogland had an out-of-Bodhi experience on Friday, when he decided to step down . . . .” No. I didn’t go there. Instead I wrote this. Also, Jeff spoke volumes in his own blog item on why he’s leaving, and I understand completely. To see what Jeff achieved with Bodhi Linux over the last four years — all while in school, grad school, family life and now with a child — is simply remarkable and I salute him for it. He leaves for someone, or to the Bodhi community stepping up, a very viable Linux distro. My hope is that someone, or several folks who are already involved with Bodhi Linux, picks up the reins and continue what is one of the better Linux distributions, especially for older machines. I have said in these pages in the past that Bodhi is a viable distro — my only qualm, and it’s a minor one, is that it doesn’t come with enough programs by default and that you have to go get them after installing it. 
That’s by design — I get that — but it’s not my proverbial cup of tea. Also — and this is a very important point — it’s not necessarily a flaw because it’s a design of which I’m not fond. It’s called “different strokes for different folks,” and the distro has gained many users with the formula Hoogland developed. In other words, it may not be for me, but that doesn’t make it bad or lacking in some way. Which leads us to a broader issue, that of whether a distro like Bodhi should continue to have the opportunity to prosper and thrive. The answer clearly to this is a resounding “yes.” Some might think that there should be only $ONE_TRUE_DISTRO, and usually that distro is the one they are using. Nothing could be further from the truth, to say nothing of the fact that nothing could be more hilariously arrogant and world-class myopic than holding such a position. Clearly and unequivocally, one of the many strengths of FOSS — perhaps its biggest strength — lies in the variety of 200-something Linux and BSD distros out there. Choice is clearly good, and the competition between having more than one choice raises everyone up — the rising tide making all the vessels rise with the waters. Bodhi Linux captured a niche and, with continued perserverence, the community taking the reins will have the opportunity to continue to excel. That’s only fair, and that’s what FOSS is all about. This blog, and all other blogs by Larry the Free Software Guy, Larry the CrunchBang Guy, Fosstafarian, Larry the Korora Guy, and Larry Cafiero, are licensed under the Creative Commons Attribution-NonCommercial-NoDerivs CC BY-NC-ND license. In short, this license allows others to download this work and share it with others as long as they credit me as the author, but others can’t change it in any way or use it commercially. Categories: Bodhi Linux, FOSS Force, Jeff Hoogland, linux, Linux, open source Tags: Bodhi, Bodhi Linux, Jeff Hoogland, linux, open source OK, let’s not make a big deal about this. A few months ago, Christine Hall at FOSS Force asked me if I’d like to write for her site. I mulled it over for awhile. After seeing my good friend Ken Starks get a significant amount of exposure from his regular columns there, I thought I’d follow suit. So now you’re going to see a midweek item every week — possibly more — from yours truly at FOSS Force. While I will still continue to write this blog from time to time, I’d like for you to follow me over to FOSS Force for some of the best coverage of what’s happeneing around the Free/Open Source Software and hardware paradigm (including the excellent commentary you’ve always gotten here). The first installment, in case you missed it, is here. Categories: FOSS Force, linux, Linux, open source Tags: FOSS Force, linux, open source LibreOffice plans to come out with an Android version in their efforts to bring their great office suite to the mobile realm, hopefully aimed at Android-based tablets and nothing smaller than that. No one else has asked yet, so I guess I’ll have to. Don’t get me wrong. I love LibreOffice and use it extensively. The progress that LibreOffice has made in bringing a viable replacement for what passes as office software out of Redmond is nothing short of remarkable. But I think that moving LibreOffice toward mobile is a burdensome load placed on improving development on more useable form factors — form factors like laptops or desktops, which were designed specifically for programs like LibreOffice. 
Allow me to tip my hand and point out that you really can’t get much work done on an Android tablet or a Android smartphone, or any other tablet or smartphone for that matter. The form factor wasn’t really designed for it. For all intents and purposes — and marketing types will back me up on this — a tremendous majority of tablets and smartphones are used primariy for very basic digital functions like Web surfing, e-mail, texting, and watching your favorite movies thanks to Netflix. In other words, tablets and smartphones are toys, and LibreOffice wants folks to use them as a tool. Aye, there’s the rub, as Shakespeare would say, using the LibreOffice word processor on a laptop. Sure, it can be done: You can use a tablet for word processing or presentation-making, if necessary. But that begs this comparison — you wouldn’t try to cut down a redwood with a pocket knife. With enough effort you can do it, of course, but why would you when you should probably use a tool more appropriate for the job? It is akin to using Vim or Emacs on Android — it exists and when I had an Android phone, I tried downloading both and using them. Bear in mind that although the phone had a keyboard — a HTC G2 that I passed down to my daughter after getting a ZTE Open with Firefox OS — both Vim and Emacs were hilariously unworkable on such a small form factor. Again, they may work on a tablet, hopefully, but the point remains that if you are doing something important, use the right tools. This blog, and all other blogs by Larry the Free Software Guy, Larry the CrunchBang Guy, Fosstafarian, Larry the Korora Guy, and Larry Cafiero, are licensed under the Creative Commons Attribution-NonCommercial-NoDerivs CC BY-NC-ND license. In short, this license allows others to download this work and share it with others as long as they credit me as the author, but others can’t change it in any way or use it commercially. Categories: Android, LibreOffice, linux, Linux, open source Tags: Android, LibreOffice, linux, open source My good buddy Ken Starks is never at a loss for a good Linux tale. A master at putting Linux boxes in front of underprivileged kids in Texas through Reglue, Ken is also a master of weaving a folksy story in the tradition of other Texas wordsmiths like Jim Hightower (oooh, he’s going to hate me for that), and his latest installment on FOSS Force is one shining example. Go ahead and read it. I’ll wait. As is usually Ken’s standard fare, it’s a good story. Ken’s FOSS Force item puts the exclamation point on the fact that Linux users are everywhere, whether any of us have had direct involvement or not in introducing someone to it. Not only that, it accents the fact that the general reach of Linux is much further than the arm’s length we expect it to be when we hand someone a live disk or live USB stick and give them some instructions on how to use it. Many of us who advocate for the adoption of Free/Open Source Software (FOSS) have been waiting for the day when we can say, “Yeah, we’re ready for prime time.” So, yeah, we’re ready for prime time. When the Felton Linux Users Group hosted the table promoting FOSS as “organic software” (no artificial additives or preservatives, all natural 1’s and 0’s) at the Felton Farmers Market in the past, we would encounter many Linux users who were introduced by friends or neighbors. These were people we know from our town — it’s not very big — and for whatever reason they had for not coming to meetings, they used Linux and were happy with it. It’s not perfect. 
You still have to pay attention to your hardware and software when using Linux, much in the same way you pay attention to your house as a do-it-yourselfer who frequently haunts Home Depot or Lowe’s. As mentioned with mantra-like frequency in this blog, Linux and FOSS work best for those who consider hardware as more than just a toy or a diversion, and paying even a marginal amount of attention to it, not to mention learning some of the most basic maintenance practices, pays huge dividends. So we’re everywhere. ONE MORE THING: Speaking of friends, Don Marti posted an interesting blog item where he asks if you’re seeing buttons on his page. Are you? If you are, you need to get Disconnect or Privacy Badger (Shameless plug: I use Privacy Badger and I think it’s fantastic — thanks, Electronic Frontier Foundation). As a Privacy Badger user, I get a small button saying “Privacy Badger has replaced this button.” Good exercise, Don. Thanks for posting it. Categories: Disconnect, EFF, FOSS Force, free software, Ken Starks, linux, Linux, Linux Journal, open source, Privacy Badger, REGLUE Tags: Disconnect, EFF, Electronic Frontier Foundation, FOSS Force, Ken Starks, linux, Linux Journal, open source, Privacy Badger, REGLUE The last couple of weeks have been filled with resume-sending, waiting by the phone for the resumes to do their trick, and a trip to Arizona for a plethora of family reasons (wife went to do some New Age thing in Sedona while daughter visited friends in Phoenix — heck, I even got a phone interview with a tech company there). But while I was driving around the Southwest, a few things crossed the proverbial radar that deserve special mention, like . . . Congratulate me, I’m an “extremist”: And give yourself a good pat on the back, too, because if you’re a Linux Journal reader, the NSA thinks you are an “extremist,” too. Kyle Rankin reports on the site on the eve of Independence Day — irony much? — that the publication’s readers are flagged for increased surveillance. That includes — oh, I don’t know — just about everyone involved to some degree with Free/Open Source Software and Linux (and yes, Richard Stallman, that would also include GNU/Linux, too), from the noob who looked up “network security” to the most seasoned greybeard. Rankin writes, “One of the biggest questions these new revelations raise is why. Up until this point, I would imagine most Linux Journal readers had considered the NSA revelations as troubling but figured the NSA would never be interested in them personally. Now we know that just visiting this site makes you a target. While we may never know for sure what it is about Linux Journal in particular, the Boing Boing article speculates that it might be to separate out people on the Internet who know how to be private from those who don’t so it can capture communications from everyone with privacy know-how.” So, a quick note to our friends in the main office of the NSA in Maryland, where someone has drawn the unfortunate assignment of reading this (my apologies for not being a more exciting “extremist”) because . . . well, you know . . . I’m an “extremist” using Linux. 
Please pass this run-on sentence up your chain of command: “The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.” That’s the Fourth Amendment to the United States Constitution, in case you hadn’t noticed. One more thing: Linux Journal webmaster Katherine Druckman (sorry, the term “webmistress,” as noted on the LJ site, needs to be thrown into the dustbin of history) says that, yeah, maybe readers are a little extreme and asks readers to join them in supporting “extremist” causes like Free/Open Source Software and hardware, online freedom, and the dissemination of helpful technical knowledge by adding the graphic featured above (it comes in red, black, or white) to your site, your social media, or wherever you deem fit. On a more positive note . . . Introducing Xiki: Command-line snobs, welcome to the future. In a Linux.com article, Carla Schroder introduces Xiki, an interactive and flexible command shell 10 years in the making. It’s a giant leap forward in dealing with what some consider the “black magic” of the command line, but Carla points out another, more significant, use for the software. Carla writes, “When I started playing with Xiki it quickly became clear that it has huge potential as an interface for assistive devices such as Braille keyboards, wearable devices like high-tech glasses and gloves, prosthetics, and speech-to-text/text-to-speech engines, because Xiki seamlessly bridges the gap between machine-readable plain text and GUI functions.” It could be the next big thing in FOSS and deserves a look. Another day, another distro: Phoronix reported last week a peculiar development which either can be considered yet another Linux distro on the horizon or a bad joke. According to the article, Operating System U is the new distro and the team there wants to create “the ultimate operating system.” To do that, the article continues, the distro will be based on Arch with a modified version of the MATE desktop and will use — wait for it — Wayland (putting aside for a moment that MATE doesn’t have Wayland support, but never mind that). But wait, there’s more: Operating System U also plans to modify the MATE Desktop to make it better while also developing a new component they call Startlight, which pairs the Windows Start Button with Apple’s Spotlight. The team plans a Kickstarter campaign later this month in an attempt to raise $150,000. A noble effort or reinventing the wheel? I’d go with the latter. Our friends at Canonical have dumped a ton of Mark Shuttleworth’s money into trying to crack the desktop barrier and, at this point, they have given up to follow other form factors. Add to this an already crowded field of completely adequate and useable desktop Linux distros that would easily do what Operating System U sets out to do, and you have to wonder about the point of this exercise. Additionally, for a team portraying itself to be so committed to open source, there seems to be a disconnect of sorts around what community engagement entails. 
A telling comment in the article is posted by flexiondotorg — and if it’s the person who owns that site, it’s Martin Wimpress of Hamshire, England, an Arch Linux Trusted User, a member of the MATE Desktop team, a GSoC 2014 mentor for openSUSE and one of the Ubuntu MATE Remix developers. Martin/flexiondotorg says this: “I have a unique point of view on this. I am an Arch Linux TU and MATE developer. I am also the maintainer for MATE on Arch Linux and the maintainer for Ubuntu MATE Remix. “None of the indivuals involved with Operating System U have approached Arch or MATE, nor contributed to either project, as far as I can tell. I’d also like to highlight that we (the MATE team) have not completed adding support for GTK3 to MATE, although that is a roadmap item due for completion in MATE 1.10 and a precursor to adding Wayland support. “I can only imagine that the Operating System U team are about to submit some massive pull-requests to the MATE project what with the ‘CEO’ proclaiming to be such an Open Source enthusiast. If Operating System U are to be taken seriously I’d like to see some proper community engagement first.” Proper community engagement — what a concept! This blog, and all other blogs by Larry the Free Software Guy, Larry the CrunchBang Guy, Fosstafarian, Larry the Korora Guy, and Larry Cafiero, are licensed under the Creative Commons Attribution-NonCommercial-NoDerivs CC BY-NC-ND license. In short, this license allows others to download this work and share it with others as long as they credit me as the author, but others can’t change it in any way or use it commercially. Categories: free software, linux, Linux, Linux Journal, Mark Shuttleworth, open source, Operating System U, Phoronix, Ubuntu, Xiki Tags: linux, Linux Journal, Mark Shuttleworth, NSA, open source, Phoronix, Ubuntu I want to be a part of this, New York, New York. New York City Council Member Ben Kallos recently introduced the Free and Open Source Software Act (FOSSA) that, if passed by the City Council, would require the City to look first to open source software before purchasing proprietary software. Kallos, who represents the Upper East Side and chairs the Council’s government operations committee, also introduced the Civic Commons Act, embracing the notion that government should be sharing technology resources by setting up a portal for agencies and other government entities to collaboratively purchase software. “Free and open-source software is something that has been used in private sector and in fact by most people in their homes for more than a decade now, if not a generation,” Kallos said in an article on the political Web site Gotham Gazette. “It is time for government to modernize and start appreciating the same cost savings as everyone else.” If FOSS can make it there, it’ll make it anywhere. The Web site nextcity.org outlines some of the legislation that’s currently on the New York City Council radar, with some insights from Kallos as well. Like, “Our government belongs to the people and so should its software.” European cities like Munich and Barcelona have already shown the benefits of using FOSS in municipal governments. While he was mayor of San Francisco, California’s Lt. Gov. Gavin Newsom also got the ball rolling for FOSS in the City by the Bay. There are numerous other examples of how the free/open source paradigm has provided a shift — hugely for the better — in the societies it touches. 
These two bills — FOSSA and the Civic Commons Act — hold huge promise not only in the wide range of benefits that FOSS will provide the local government, but it will also show how important to society, generally speaking, FOSS is to the wider world. Their adoption and implementation in New York — perhaps the world's greatest city — would signal a quantum leap for those who advocate for the free/open source philosophy and strive for its implementation to create a better world. So thank you, Councilman Ben Kallos, for going to bat for Free/Open Source Software, and you have my support from 2,967 miles away. Consider done anything I can do from such a distance, if anything. It's up to you, New York, New York. Categories: Ben Kallos, free software, Gavin Newsom, linux, Linux, New York, open source Tags: Ben Kallos, Civic Commons Act, FOSSA, Free Open Source Software Act, linux, New York, open source
计算机
2015-48/1912/en_head.json.gz/12481
Manjaro: A Different Kind of Beast
From Manjaro Linux
Although Manjaro is Arch-based and Arch compatible, it is not Arch. As such, far from being just an easy-to-install or pre-configured version of Arch, Manjaro is actually a very different kind of beast. In fact, the differences between Manjaro and Arch are far greater than the differences between the popular Ubuntu distribution and its many derivatives, including Mint and Zorin. To help provide a clearer understanding of Manjaro, a few of its main features have been outlined:
Manjaro is developed independently from Arch, and by a completely different team.
Manjaro is designed to be accessible to newcomers, while Arch is aimed at experienced users.
Manjaro draws software from its own independent repositories. These repositories also contain software packages not provided by Arch.
Manjaro provides its own distribution-specific tools such as the Manjaro Hardware Detection (mhwd) utility, and the Manjaro Settings Manager (msm).
Manjaro has numerous subtle differences in how it works when compared to Arch.
A more detailed outline of these differences has been provided below.
Dedicated Repositories
note: An important benefit from Manjaro's use of its own repositories is that the developers will automatically implement critical updates on your behalf, and there will therefore be no need for you to intervene manually.
To ensure continued stability and reliability, Manjaro utilises its own dedicated software repositories. With the exception of the community-maintained Arch User Repository (AUR), Manjaro systems do not – and cannot – access the official Arch repositories. More specifically, popular software packages initially provided by the official Arch repositories will first be thoroughly tested (and if necessary, patched), prior to being released to Manjaro's own Stable Repositories for public use. Manjaro actually uses three types of repositories:
Unstable: About a day or two behind Arch, this is also used to store software packages that have known or suspected stability and/or compatibility issues. This software may therefore be subject to patching by the Manjaro developers prior to being released to the testing repositories. Although the very latest software will be located here, using the unstable repositories may consequently break your system!
Testing: Usually about a week or so behind Arch, these are used to store patched software packages from the unstable repositories, as well as other new software releases that are considered at least sufficiently stable. This software will be subject to further checks by developers and testers for potential bugs and/or stability issues, prior to being released to the stable repositories for public use.
Stable: Usually about two weeks behind Arch, these are the default repositories used by Manjaro systems to provide updates and downloads to the general user base.
A consequence of accommodating this testing process is that Manjaro will never be quite as bleeding-edge as Arch. Software may be released to the stable repositories days, weeks, or potentially even months later; however, users who wish to access the very latest releases can still do so by enabling access to the Unstable Repository or the Testing Repository… at their own risk!
Exclusive User-Friendly Tools
Another feature that sets Manjaro apart from Arch and other Arch-based distributions is its focus on user-friendliness and accessibility.
This extends far beyond just providing an easy graphical installer and pre-configured desktop environments. Manjaro also provides a range of powerful tools developed exclusively by the Manjaro Team, including:
Manjaro Hardware Detection (mhwd)
The mhwd command enables the automatic detection and configuration of your hardware for you, usually undertaken during the installation process. This includes support for hybrid graphics cards, as well as setting everything up such as module dependencies for Virtualbox virtual machine installations; however, it can also be used by users with limited technical knowledge to easily undertake this task for themselves. A guide on how to manually configure graphics cards has been provided.
Manjaro Hardware Detection Kernel (mhwd-kernel)
While automatic support for the use of multiple kernels is a defining feature of Manjaro, the mhwd-kernel command also empowers users with no technical knowledge to easily manage them as well. This includes automatically updating any newly installed kernels with any modules being used, such as those required to run Manjaro within Virtualbox. A guide on how to manage kernels has been provided.
Manjaro Settings Manager (msm)
This user-friendly application allows you to quickly and easily manage user accounts, install new language packs, and even switch your system's default language and keyboard layout on-the-fly. msm will also automatically notify you of any updates available for installed language packs. New features were recently added, such as easy ways to choose between and install multiple kernels and drivers for your graphics card. Please look here for more detailed explanations about Manjaro Settings Manager.
Pamac - The Graphical Software Manager
Exclusively developed by the Manjaro Team, this intuitive application allows you to easily search for, install, remove, and update software applications and packages. pamac will also automatically notify you of any updates; keep your system up-to-date with just a single click! There are more detailed explanations available for Pamac.
So, in Conclusion...
Manjaro is definitely a beast, but a very different kind of beast than Arch. Fast, powerful, and always up to date, Manjaro provides all the benefits of an Arch operating system, but with an especial emphasis on stability, user-friendliness and accessibility for newcomers and experienced users alike. Any enquiries about the Manjaro operating system should therefore be directed towards the Manjaro Forums or Manjaro Internet Relay Chat (IRC) channels in order to receive the best help and support possible. All are welcome!
Retrieved from "https://wiki.manjaro.org/index.php?title=Manjaro:_A_Different_Kind_of_Beast&oldid=10423"
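A brief terminal sketch may help readers picture how the tools above are typically driven. The commands below follow commonly documented Manjaro usage rather than anything on this page, and details such as the exact option letters and the kernel package name (linux41 here) vary between releases, so confirm them with mhwd --help and mhwd-kernel --help on your own system.

  # list installed and available graphics drivers known to mhwd
  mhwd -li
  mhwd -l
  # let mhwd auto-install a suitable driver for the graphics card
  # (PCI device class 0300); swap "free" for "nonfree" if preferred
  sudo mhwd -a pci free 0300
  # list installed kernels, then add another one alongside them
  mhwd-kernel -li
  sudo mhwd-kernel -i linux41

As the page notes, mhwd-kernel carries the modules currently in use over to any newly installed kernel, so adding a kernel this way normally needs no further manual set-up.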
计算机
2015-48/1912/en_head.json.gz/12609
Super Scribblenauts fixes problems of original Super Scribblenauts is the game that Scribblenauts should have been. Better … by Ben Kuchera There are puzzles in Super Scribblenauts where you see a row of different items and have to generate and then place an item between them, an item that has shared characteristics. So, for instance, I had a puzzle showing a leech, a vampire, a helicopter, and a bird. At first, I tried to think of something that sucked blood and could fly... a mosquito? The game accepted it, and the level was beaten. To master the level, however, I had to find three different solutions. Another puzzle had buildings and animals. I typed "wooden bear" and the game promptly generated the beast and I beat the level. I added squirt guns and a beach ball to a level to get a party started. I used a hang glider to help a man jump off a cliff safely. I'm not giving anything away; these are simple answers. You can come up with many, many more. Super Scribblenauts is a game that took the problems of the original and made them go away. That's a very good thing. Super Scribblenauts ds* now MSRP: $29.99 Official site * = platform reviewed The fixes In Super Scribblenauts, each level has some problem to solve. To solve it, you simply type in the name of objects you'd like to spawn. Ask for a typewriter, and you get one. A squid can be yours. So can a jetpack. The game's vocabulary is vast, so be sure to spend some time trying to think of different and original ways to do things. So far, this sounds like the first game, however. How does the sequel differ? The biggest improvement is in moving Maxwell, the game's hero, as you can now use the d-pad to put him where you want him. This is a huge advantage over the frustrating touch-based movement scheme from the first game, and it goes a long way to generate some goodwill from the player. Maxwell now runs and jumps like he's in a standard platforming game, and the experience is better for it. Object interactions are also better; the physics are improved, and tethering one object to another is no longer nearly as touchy as it was in the first game. Things just seem to work, and you don't feel like you're constantly fighting the controls and objects. There are still a few problems, as I spawned some stairs that I couldn't flip around, but overall everything feels better. The developers listened to the feedback from the first game and acted on it. Well done. The additions The biggest change to the game is the use of adjectives. You could spawn a car in the first game, but now you can spawn a red car. I typed in "angry doctor" and the character spawned, scowling, and started to slap the other people on the screen. Why have a bear when you can have a wooden bear? Many levels incorporate adjectives, and you'll need to be specific to beat some of the levels. When you spawn an item and try to solve a level with multiple steps, the game gives you a thumbs up or thumbs down to let you know if you're on the right path or not. It's a gentle nudge in the right direction, and since levels go from the insultingly easy to the utterly inscrutable... it can be needed. The game also features a tiered hint system that works in real time. If you want all the hints right now, you need to pay using the in-game currency. The longer you work on the puzzle, the cheaper the hints become. It's a friendly system that you don't have to use, but it can take the edge off the frustration on certain levels. 
The problems There are still moments where you won't agree with the game's solutions (or where the game won't agree with yours), although with a little bit of thinking you can get around them. I'm still skeptical that fireworks wouldn't wake a sleeping astronaut, but that playing a violin in another room would. In another level, you have to provide people with vehicles that match their clothing. The bride was given a white limousine, the goth girl had a black hearse to drive away, and I gave the hippie a rainbow van, because "paisley" isn't an option that the game recognizes. Then a police officer shows up. I type in "police car," but that doesn't work. "Paddy wagon"? The game recognizes it, but it doesn't work. The game only allowed me to win by giving the officer a blue car. That seems... a little off. Being able to add adjectives to objects is a huge addition to the experience, and it really lets your mind soar when you try to solve problems, but sometimes it seems to confuse the game; solutions that seem perfectly valid don't seem to register. You can buy clues to each of the puzzles, but you get more points by using trickier words and creative solutions. If it seems like it should work, it most likely will, but it can be frustrating when the system breaks down. The swift brown fox The addition of adjectives, better level design, and the d-pad movement make this the game the original Scribblenauts should have been. While some of the puzzles may not support solutions you feel are fair, many of them will stretch your brain and reward experimentation to find the game's limits. The first game was a great concept with many near-fatal flaws. This time, it's much easier to recommend. Verdict: Buy
计算机
2015-48/1912/en_head.json.gz/12795
Training-Based and Blind Channel Estimation and Their Impact on MIMO System Performance Xia Liu (2010). Training-Based and Blind Channel Estimation and Their Impact on MIMO System Performance PhD Thesis, School of Information Technology and Electrical Engineering (ITEE), The University of Queensland. s4101784_PhD_finalabstract.pdf Thesis abstract s4101784_PhD_finalthesis.pdf Full thesis final version Xia Liu School of Information Technology and Electrical Engineering (ITEE) PhD Thesis Marek E. BialkowskiVaughan Clarkson Total colour pages Total black and white pages 08 Information and Computing Sciences In the last decade, the number of mobile and wireless communications users has dramatically increased. Wireless communication systems have experienced evolution of first generation (1G) and second generation (2G). Currently, the third generation (3G) wireless communication system is rapidly spreading all over the world providing high data rate wireless multimedia services. Yet, the pursuit for higher data rates, larger coverage and more spectral efficient mobile communication still goes on. Multiple Input Multiple Output (MIMO) systems have emerged and been nominated as a promising solution for future generation wireless communications systems due to their capability of enhancing channel capacity, spectral efficiency and coverage. These enhancements are possible in a rich scattering environment when array antennas are used at transmit and receive ends of a communication link. To realize these benefits, a MIMO system requires the channel state information (CSI) to be available at the receiver. It has to be known before any data can be decoded. In practice, CSI is obtained by channel estimation and as a result most of MIMO detection schemes rely on an accurate estimation of CSI at the receiver end. Therefore, channel estimation is extremely critical to the proper functioning of MIMO systems. However, as CSI can be determined only in an approximate manner, it is unclear how estimation errors affect MIMO system performance. Most of works focusing on the MIMO systems’ potential assume perfect CSI available at the receiver end while estimating capacity. This assumption is not true in practice and thus the existing gap on the relationship between an estimated CSI and MIMO channel capacity forms the main motivation for the work undertaken in this thesis. The MIMO channel estimation can be performed by sending training sequences which are known both to the transmitter and the receiver. This is the most popular method to estimate the MIMO channel. In majority of works which reported the training-based MIMO channel estimation methods, the channel coefficients in the channel matrix are assumed to have Gaussian identical independent distribution (i.i.d). This assumption is not true in practical scenarios because of a limited number of scattering objects and non-ideal operation of array antennas. The shortfall of training-based channel estimation is that the training sequence does not contain any information and thus it sacrifices a considerable amount of bandwidth. To save the bandwidth and increase the spectral efficiency, blind channel estimation and semi-blind channel estimation can be used to obtain CSI. For blind and semi-blind channel estimation, no training symbols or fewer training symbols are needed to estimate the channel. Also, the transmitter does not need to cooperate with the receiver. Most of these methods are based on second or higher order statistic models of the received signal. 
Their disadvantage is the increased computational complexity, gradual convergence, and scalar and phase ambiguities of channel matrix elements. The aim of this thesis is to investigate the performance of training-based channel estimation and assess its impact on MIMO channel capacity under realistic channel models; optimize training sequences when CSI is available to the transmitter and the receiver, and develop blind channel estimation algorithms with low complexity and without scalar or phase ambiguity. To realize these goals, the thesis introduces the information theory of MIMO system and shows how MIMO channel capacity is related to the properties to the MIMO channel matrix. Next, realistic channel models are described which include varying distributions of scattering objects and actual electrical properties of different configurations of antennas with varying elements spacing. These models are used to assess performances of training-based channel estimation methods including Least Square (LS), Scaled Least Square (SLS) and Minimum Mean Square Error (MMSE) methods when CSI is available only at the receiver side. It is shown that as the assumed channels properties divert from the i.i.d. case, because the scattering environment and antennas operation turn away from ideal conditions, spatial correlation affects both channel estimation and capacity. In order to have a better insight into the obtained results, spatial correlation is linked to physical parameters of the channel as well as to the mathematical properties of the channel matrix. It is shown that an increased spatial correlation helps to improve the channel estimation accuracy. However, the overall effect is the decreased channel capacity. This finding shows that the MIMO system does not have to rely on the perfect knowledge of CSI to achieve increased capacity. In the next step, considerations extend to the case when CSI is assumed to be available both at the receiver and the transmitter. This scenario creates an opportunity to optimize transmitted training sequences and thus improve channel estimation. As in the previous considerations, optimized training sequences are devised under the assumption of advanced channel models. This part of investigations also includes derivations of closed-form expressions of MIMO channel capacity and bit error rate (BER) performance by taking into account channel estimation accuracy. The obtained results show that the use of LS, SLS and MMSE methods has a different effect on MIMO BER performance, with LS offering the worst performance and MMSE giving the best BER performance. The final part of the thesis focuses on devising a new blind channel estimation algorithm which employs a simple coding scheme to avoid scalar and phase ambiguities for the MIMO channel matrix. Its validity is verified by extensive simulations followed by experiments on a MIMO test-bed employing a Field Programmable Gate Array (FPGA). It is shown that the proposed blind channel estimation algorithm is easy to implement in a DSP firmware due to its low computational complexity. Also it shows fast convergence. It offers good performance for fast fading channels in addition to slow fading channels. The work undertaken as part of this thesis has been published in several journals and refereed conference papers, which underline the originality and significance of the thesis contributions. 
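For readers who want the estimators named above in equation form, a generic sketch follows. These are the standard textbook training-based expressions for a channel matrix H observed through a known training matrix X, written as LaTeX; they are not taken from the thesis itself, whose exact signal model, notation and assumptions may differ.

  % received training block, with N the additive noise term
  Y = H X + N
  % least squares (LS) estimate, assuming X X^H is invertible
  \hat{H}_{\mathrm{LS}} = \arg\min_{H} \lVert Y - H X \rVert_F^2 = Y X^{H} \left( X X^{H} \right)^{-1}
  % the MMSE estimate is the conditional mean of H given the observation;
  % its closed form depends on the assumed channel and noise statistics
  \hat{H}_{\mathrm{MMSE}} = \mathrm{E}\{ H \mid Y \}

The scaled least squares (SLS) and MMSE estimators refine the LS solution using second-order channel and noise statistics, which is why the spatial correlation discussed in the abstract enters the estimation problem as well as the capacity results.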
Keywords: MIMO, Channel estimation, training-based, estimation error, channel model, channel capacity, spatial correlation, mutual coupling, blind, ambiguity
UQ Theses (RHD) - UQ staff and students only; UQ Theses (RHD) - Official
Wed, 02 Mar 2011, 21:46:21 EST by Mr Xia Liu on behalf of Library - Information Access Service, The University of Queensland
计算机
2015-48/1912/en_head.json.gz/12931
release date: May 24, 2011 The Fedora Project is an openly-developed project designed by Red Hat, open for general participation, led by a meritocracy, following a set of project objectives. The goal of The Fedora Project is to work with the Linux community to build a complete, general purpose operating system exclusively from open source software. Development will be done in a public forum. The project will produce time-based releases of Fedora about 2-3 times a year, with a public release schedule. The Red Hat engineering team will continue to participate in building Fedora and will invite and encourage more outside participation than in past releases. Fedora 15, a new version of one of the leading and most widely used Linux distributions on the market, has been released. Some of the many new features include support for the Btrfs file system, Indic typing booster, redesigned SELinux troubleshooter, better power management, LibreOffice productivity suite, and, of course, the brand-new GNOME 3 desktop: "GNOME 3 is the next generation of GNOME with a brand new user interface. It provides a completely new and modern desktop that has been designed for today's users and technologies. Fedora 15 is the first major distribution to include GNOME 3 by default. GNOME 3 is being developed with extensive upstream participation from Red Hat developers and Fedora volunteers, and GNOME 3 is tightly integrated in Fedora 15." manufacturer website 1 DVD for installation on an x86 platform
计算机
2015-48/1912/en_head.json.gz/14429
Laying out Web pages involves a bit of wizardry. HTML was designed by engineers and scientists who never envisioned it as a page layout tool. Their aim was to provide a way to describe structural information about a document, not a tool to determine a document's appearance. Once the real world started to work on the Web, graphic designers began adapting the primitive tools of HTML to produce documents that looked more like their print counterparts. The point was not to produce "jazzier" or "prettier" pages. The layout conventions of print documents have evolved over hundreds of years for concrete and practical reasons, and they offer many functional advantages over the simplistic, single-column page layout envisioned by the original designers of the World Wide Web. www.section508.gov Flexible design The Web is a flexible medium designed to accommodate different types of users and a variety of display devices. Unlike a printed document, which is "fixed" in its medium, the look of a Web page depends on such elements as the display size, resolution, and color settings, the height and width of the browser window, software preferences such as link and background color settings, and available fonts. Indeed, there is no way to have complete control over the design of a Web page. The best approach, then, is to embrace the medium and design flexible pages that are legible and accessible to all users. Layout with style sheets One of the visual properties that Cascading Style Sheets are meant to describe is how elements are positioned on the page. Style sheet positioning allows designers to set margins, to position text and images on the page relative to one another, to hide and show elements, and to stack elements so they overlay one another. In theory, style sheet positioning should provide all the design control needed to lay out visually appealing and legible Web pages. In practice, however, browser inconsistencies have rendered style sheet positioning useless, at least for the time being. Though the W3C specifications for style sheet positioning contain most of the tools needed for good design, Microsoft and Netscape have done a particularly poor job of implementing them, so that properties such as borders and margins display quite differently from browser to browser. If you are creating a site for a diverse audience you should steer clear of style sheet positioning for now and design your pages using layout tables as described below. If standards compliance is a priority, use style sheet positioning for page layout, but keep your layouts simple and be ready to accept variability across browsers and platforms. From Web Style Guide www.webstyleguide.com Copyright 2002 Lynch and Horton
计算机
2015-48/1913/en_head.json.gz/394
The Fedora Project is an openly-developed project designed by Red Hat, open for general participation, led by a meritocracy, following a set of project objectives. The goal of The Fedora Project is to work with the Linux community to build a complete, general purpose operating system exclusively from open source software. Development will be done in a public forum. The project will produce time-based releases of Fedora about 2-3 times a year, with a public release schedule. The Red Hat engineering team will continue to participate in building Fedora and will invite and encourage more outside participation than in past releases. Fedora 15, a new version of one of the leading and most widely used Linux distributions on the market, has been released. Some of the many new features include support for Btrfs file system, Indic typing booster, redesigned SELinux troubleshooter, better power management, LibreOffice productivity suite, and, of course, the brand-new GNOME 3 desktop: "GNOME 3 is the next generation of GNOME with a brand new user interface. It provides a completely new and modern desktop that has been designed for today's users and technologies. Fedora 15 is the first major distribution to include GNOME 3 by default. GNOME 3 is being developed with extensive upstream participation from Red Hat developers and Fedora volunteers, and GNOME 3 is tightly integrated in Fedora 15." manufacturer website 1 dvd for installation on an 86_64 platform back to top
计算机
2015-48/1913/en_head.json.gz/2203
Time-lapse video shows 48 hours in the life of an indie developer One indie developer put together a time-lapse video that shows 48 hours of his … This is a good time to be an indie developer. There are multiple ways to sell your game to the public, getting the attention of the press is at least a little easier, and there are a few games that have broken out to become massive hits. Look at Minecraft or Super Meat Boy for examples of what is possible from small teams with big ideas. But people tend to overlook how much work it takes to create a game. There's a lot more to it than simply having a good idea, sticking it to the man, and then collecting an award at the Game Developers Conference. Which is why the time-lapse video that shows 48 hours of game development of an indie title called Retro/Grade has been making the rounds on the blogs: it shows the dim, monotonous reality of indie development. Indie development "I turned the camera off overnight, so it's not 48 hours of continuous work. I think I worked about 14 hours each day," Matt Gilgenbach of 24 Caret Games told Ars. "My record for crunching is about 36 hours straight, but unfortunately, I didn't film it." Last year the game was included in Indiecade, so the three-person team had to create a video. Since all their time was spent making the danged game, the decision was made to just film that process. "I think people take for granted just how much work it is creating an indie game. It's very easy to compare them to games developed by large teams with funding, but creating a game with a small team doing everything is a herculean effort, which is often taken for granted," he explained. The video is a good reminder of what happens before people like me talk about the game, or gamers play it. I asked about eating, something I strained to see in the video. "Meals are one of the few times I'm away from my desk, but I try to keep them short because I put in a lot of hours..." The game explains the actions of a fighter pilot whose war against an alien invading force has damaged both space and time, and now he has to reverse his actions. "To play the game, you must un-fire your lasers, which are all timed to the rhythm of our original soundtrack. As well, you must avoid enemy lasers that are returning to the ships that fired them. If you stop them before they are fired, it creates a paradox that damages the space/time continuum," Gigenbach explained. "If you make mistakes, you can use the Retro/Rocket, which reverses the flow of time (making time go forward), and repeat sections to improve your score. If that doesn't sound interesting enough, the game is playable with a guitar controller as well as the standard gamepad." The team has been working on the game for around three years, and it is coming to the PlayStation Network this year. At that point we'll get to see if all the hard work paid off.
计算机
2015-48/1913/en_head.json.gz/2426
Google Apps Premier Edition Google Apps Premier Edition is the promised offering for small businesses. It includes 10 gigabytes of mail storage, 99.9% uptime guarantee for email, APIs to integrate with the existing infrastructure of a business (single sign-on, user management, email gateway), 24/7 phone support. Everything for $50 a year per user (there's a free trial until April 30th).Google continues to offer two free editions of Google Apps: * a edition for schools, that includes the APIs and 24/7 phone support* a edition for families and groups that has all the features that were available until now.All editions of Google Apps* include Google Docs & Spreadsheets and are compatible with the BlackBerry version of Gmail's mobile application.Google's intention is to convince it can deliver "simple, powerful communication and collaboration tools for your organization without the usual hassle and cost" and the package can integrate into an existing environment. Google has learned a lot since last August, when it first introduced Google Apps, and has adapted to fit the needs of a corporate environment. Will businesses adapt to use Google's web applications and trade some features for an always-available online interface?* You'll notice that Google Apps for Your Domain has been rebranded as Google Apps.
计算机
2015-48/1913/en_head.json.gz/2706
Recursivity Recurrent thoughts about mathematics, science, politics, music, religion, and Recurrent thoughts about mathematics, science, politics, music, religion, and Recurrent thoughts about mathematics, science, politics, music, religion, and Recurrent thoughts about .... No Ghost in the Machine Back when I was a graduate student at Berkeley, I worked as a computer consultant for UC Berkeley's Computing Services department. One day a woman came in and wanted a tour of our APL graphics lab. So I showed her the machines we had, which included Tektronix 4013 and 4015 terminals, and one 4027, and drew a few things for her. But then the incomprehension set in:"Who's doing the drawing on the screen?" she asked.I explained that the program was doing the drawing."No, I mean what person is doing the drawing that we see?" she clarified.I explained that the program was written by me and other people."No, I don't mean the program. I mean, who is doing the actual drawing, right now?I explained that an electron gun inside the machine activated a zinc sulfide phosphor, and that it was directed by the program. I then showed her what a program looked like. All to no avail. She could not comprehend that all this was taking place with no direct human control. Of course, humans wrote the program and built the machines, but that didn't console her. She was simply unable to wrap her mind around the fact that a machine could draw pictures. For her, pictures were the province of humans, and it was impossible that this province could ever be invaded by machines. I soon realized that nothing I could say could rescue this poor woman from the prison of her preconceptions. Finally, after suggesting some books about computers and science she should read, I told her I could not devote any more time to our discussion, and I sadly went back to my office. It was one of the first experiences I ever had of being unable to explain something so simple to someone. That's the same kind of feeling I have when I read something like this post over at Telic Thoughts. Bradford, one of the more dense commentators there, quotes a famous passage of LeibnizSuppose that there be a machine, the structure of which produces thinking, feeling, and perceiving; imagine this machine enlarged but preserving the same proportions, so that you could enter it as if it were a mill. This being supposed you might visit its inside; but what would you observe there? Nothing but parts which push and move each other, and never anything which could explain perception.But Leibniz's argument is not much of an argument. He seems to take it for granted that understanding how the parts of a machine work can't give us understanding of how the machine functions as a whole. Even in Leibniz's day this must have seemed silly. Bradford follows it up with the following from someone named RLC:The machine, of course, is analogous to the brain. If we were able to walk into the brain as if it were a factory, what would we find there other than electrochemical reactions taking place along the neurons? How do these chemical and electrical phenomena map, or translate, to sensations like red or sweet? Where, exactly, are these sensations? How do chemical reactions generate things like beliefs, doubts, regrets, certainty, or purposes? How do they create understanding of a problem or appreciation of something like beauty? How does a flow of ions or the coupling of molecules impose a meaning on a page of text? 
How can a chemical process or an electrical potential have content or be about something?Like my acquaintance in the graphics lab 30 years ago, poor RLC is trapped by his/her own preconceptions, I don't know what to say. How can anyone, writing a post on a blog which is entirely mediated by things like electrons in wires or magnetic disk storage, nevertheless ask "How can a chemical process or an electrical potential have content or be about something?" The irony is really mind-boggling. Does RLC ever use a phone or watch TV? For that matter, if he/she has trouble with the idea of "electrical potential" being "about something", how come he/she has no trouble with the idea of carbon atoms on a page being "about something"? We are already beginning to understand how the brain works. We know, for example, how the eye focuses light on the retina, how the retina contains photoreceptors, how these photoreceptors react to different wavelengths of light, and how signals are sent through the optic nerve to the brain. We know that red light is handled differently from green light because different opsins absorb different wavelengths. And the more we understand, the more the brain looks like Leibniz's analogy. There is no ghost in the machine, there are simply systems relying on chemistry and physics. That's it. To be confused like RLC means that one has to believe that all the chemical and physical apparatus of the brain, which is clearly collects data from the outside world and processes it, is just a coincidence. Sure, the apparatus is there, but somehow it's not really necessary, because there is some "mind" or "spirit" not ultimately reducible to the apparatus.Here's an analogy. Suppose someone gives us a sophisticated robot that can navigate terrain, avoid obstacles, and report information about what it has seen. We can then take this robot apart, piece by piece. We see and study the CCD camera, the chips that process the information, and the LCD screens. Eventually we have a complete picture of how the robot works. What did we fail to understand by our reductionism? Our understanding of how the brain works, when it is completed, will come from a complete picture of how all its systems function and interact. There's no magic to it - our sensations, feelings, understanding, appreciation of beauty - they are all outcomes of these systems. And there will still be people like RLC who will sit there, uncomprehending, and complain that we haven't explained anything, saying,"But how can chemistry and physics be about something?" Jeffrey Shallit Paul C. Anagnostopoulos People commenting on an immaterialist consciousness aren't asking how biochemical processes in the brain can be [i]about[/i] something. I don't think they have any problem with that, because they always say they can imagine zombie humans doing all the things regular humans do. What they are asking is how biochemical processes can [i]feel[/i] like something to the owner of the processes. That's a lot trickier to imagine, even though I have no doubt we will understand it eventually. I sometimes say that qualia will end up being "internal behaviors," although then I get told that behaviorism is dead. 8:31 AM, May 29, 2010 "But how can chemistry and physics be about something?"I suspect that quite a few of the people asking this really do understand, but the idea makes them uncomfortable. Like evolution, neurology is knocking humanity off its pedestal. The more we learn, the more we realize that we really are just clever apes. 
The question is really special pleading for humanity; that the soul really does exist. touching conclusion .. yogis quietly smiling .. with compassion .. 11:05 AM, May 29, 2010 Perhaps the problem is one that has plagued people since the birth of science....We don't want to believe that we are basically machines.When we are sick, we are okay with doctors treating our body like a machine that does not function properly.With psychology and psychiatry, we are okay with doctors treating our mind (brain) if it is not functioning properly.But the thought that we are nothing more than a complex machine is perhaps too frightening for them to bear.It seems like the frontier of science right now is explaining consciousness. Will science ever be able to explain consciousness? I do not know. But the admission of "I do not know" is what drives science onwards.... whereas for those who cling to superstition, "I do not know" is where the line of questioning ends. Here's an analogy. Suppose someone gives us a sophisticated robot that can navigate terrain, avoid obstacles, and report information about what it has seen. We can then take this robot apart, piece by piece. We see and study the CCD camera, the chips that process the information, and the LCD screens. Eventually we have a complete picture of how the robot works. What did we fail to understand by our reductionism?But can you then tell us what the robot experiences when it hears "high C" on a piano, or what it experiences when it "sees green"?Or take an alien. Forgetting all ethics, we study it while live, then kill it and study it in excruciating detail. We find that it has a neural system a lot like ours. Can we thereby tell what qualities it experiences when seeing green, or hears harmonies? Can we even say that it has qualitative experiences such as we do? If not, what have we truly explained about its perceptions and feelings?I don't doubt that everything can be explained in essentially a "reductive" manner, although emerging properties would be a part of that. But we have to consider what explaining our experiences even can mean, and realize that we may never be able to explain a "thing in itself" at the deepest level.One important difference between the robot and the human is that once we've explained everything via our abstractions, we're pretty much done. We don't have to answer "What's it like to be a robot?" because in all likelihood it's not much different than being a television or a rock.We may someday know nearly all that is knowable about humans (not every last detail, rather the principles governing those details), yet we might never be able to say why the experience of green really is "like what it is" because that is a "thing in itself," and not explainable via our abstractions. However, that doesn't mean that we won't understand the perception of green adequately in a science sense (within its limitations). The reason consciousness can be explained, almost certainly, is that we can compare information reported by the subject with information gathered from the subject's brain. Wherever the information can interact as experienced "subjectively" is certainly a potential area for consciousness of that information to be occurring.Leibniz's problem was that he was thinking in mechanical abstractions. We think in terms of fields, though, and although, say, electric fields are not knowable as entities that "experience green," that these might "in themselves" experience energy changes in that way is hardly anything anybody can rule out. 
So that if electric fields evidently contain the information contained in a subject's reports of consciousness, the correlation ought to be considered likely to be meaningful.Electric fields are what I think are (mostly at least) responsible for consciousness, but that's not my point here. Other fields (quantum, maybe, but I'm highly suspicious of made-up fields that also fail to explain unconscious areas of the brain) might possibly do it. The point is that consciousness needs actual explanation via something like fields, and cannot be simply assumed to be the sum of the electrochemical interactions that explain so much of neural activity so well (again, why is some conscious and some apparently not?).Glen Davidson I'm a B.Sc. Computer Science student, I've just finished my degree and I am now applying for several Ph.D.s and M.Sc.s in artificial intelligence. Given that my current level of education excludes any in depth coverage of neuroscience I have already read many books that explain the confusion. Perhaps they should read some literature detailing the working of the brain that postdates 1714, that might help. He should definitely read Society of Mind by Marvin Minsky, it also explains the difference between knowing how to use something and knowing how it works, which would debunk the later paragraphs about the scientist and the colour red. I came here from Pharyngula, so I don't have a backstory on this, but it's blatantly apparent you're misconstruing Bradford's argument, whether because this is the first time you've come across it or what I don't know. Clearly he was talking about perception, the phenomenological things, you know, things like qualia. Asking about these things is totally orthogonal to asking about mechanisms and functions. Neither the wavelength of red nor a mathematical model explain red.I'd recommend Facing Up to the Problem of Consciousness by Chalmers, or at least the first half of it, to see what's really being discussed. uzza Glen Davidson = Missing the Point x 10^zillion
计算机
2015-48/1913/en_head.json.gz/2904
FSAA & Image Quality Comparison - 3dfx vs ATI vs NVIDIA by Anand Lal Shimpi on July 27, 2000 3:11 AM EST Posted in 3dfx 3dfx Image Quality Level of Detail (LOD) Bias NVIDIA's Image Quality S3TC = Nasty Looking Sky? NVIDIA's Direct3D FSAA 3dfx vs ATI vs NVIDIA 4 Sample FSAA Comparison Quake III FSAA Comparison If you had asked us last November to recommend a video card that had support for FSAA, the one and only option would have been to wait for the Voodoo4/5 from 3dfx. It wasn't until just before the official launch of the GeForce2 GTS that we realized NVIDIA had included support for FSAA in their latest Detonator drivers and in spite of ATI's feelings that FSAA isn't the way of the future, the Radeon out of the box has support for FSAA. Now that 3dfx isn't the only game in town, the next question is which of the three major players has the best looking and the best performing FSAA solution. In an attempt to help you decide the answer to this question on your own (since image quality is a very subjective topic) we've put together a comparison guide to help you notice the differences, if any, that exist between the FSAA solutions provided by 3dfx, ATI and NVIDIA. Hardware vs. Software One of the most confusing things about the various methods of implementing FSAA in a video card is the debate over whether the feature is implemented in "hardware" as a feature of the chip or in "software," meaning that it is a function of the drivers alone and can be enabled on any card that the drivers support. The main thing to understand here is that regardless of whether FSAA is supported in "hardware" through 3dfx's T-Buffer or in "software" through the NVIDIA Detonator drivers, it currently takes the same performance hit. If you're implementing a 2 sample FSAA algorithm, you're going to have effectively 1/2 the fill rate at your disposal since you're rendering twice as many pixels. This applies to all of the cards we're talking about today, the Voodoo4/5, the Radeon and the GeForce/GeForce2 MX/GeForce2 GTS. The second thing to keep in mind is a very simple principle, but it is commonly misunderstood when talking about FSAA performance. As we just finished pointing out, regardless of whether you're talking about a Voodoo5, a Radeon or a GeForce2 GTS, if you make any one of those cards render twice as many pixels, it's going to effectively have 1/2 the fill rate. 3dfx's 2 sample FSAA offers the same theoretical performance hit as NVIDIA's 2 sample FSAA since in both cases we're decreasing the available fill rate by 50% by rendering twice as many pixels. While it is true that the Voodoo5 and the GeForce2 GTS perform differently when their respective 2 sample FSAA modes are enabled, that is not because one card is "faster" at FSAA than another, it's simply because the two cards do perform differently. Now that we've gotten that out of the way, let's move onto the various forms of FSAA offered by the three manufacturers.
计算机
2015-48/1913/en_head.json.gz/3085
› Search › Search Marketing SEM Wants to Embrace the CMO Fredrick Marckini | September 20, 2004 | Comments Search has earned its way up the corporate food chain, from Webmaster to marketing coordinator to director of online marketing and VP of online marketing. It's time to claim the CMO as our own. In 1996 and '97, search engine marketing (SEM) was called "search engine positioning" (SEP). Webmasters were in charge of it. When potential customers came calling to early search engine optimization (SEO)/SEP firms, the first contact was made by the Webmaster, sent on a mission by someone in IT when it was discovered the site couldn't be found in any search engine on any query. The Webmaster wasn't the right person to evaluate or engage the services of a SEP firm, but he didn't have much choice. The IT director hired the Webmaster to build a Web site because people were nagging him the company didn't have one. Everyone else has a Web site, so they should have one too, right? In those days, SEP wasn't perceived as a marketing activity. It was a technology challenge. It was a matter of writing bits of code, inserting them into HTML documents, then submitting to search engines. End of project. During that dark time, I ranted at my audiences (and anyone else who would listen: my mom, dad, sister, girlfriend, butcher, mechanic) that SEP was a marketing, not an IT, function. "Get your IT department out of the promotion of your Web site!" I shouted on deaf ears. Usually, we provided our deliverables to Webmasters and network admins. When speaking at conferences I'd ask the rhetorical question, "Would you let your network administrator write your press releases? No? Then why on earth would you allow your technology team to have control over how your brand is presented to a qualified audience of interested searchers?" Maybe someone heard me, because in early 1999 something remarkable happened, something wonderful. Someone called who wasn't a Webmaster. A major pharmaceutical company called. Actually, a consultant the company employed called. He was charged with identifying a vendor to help his client increase "visibility" in search engines. I visited the company and spoke to room packed full of... product managers. Some even had the title "online marketing coordinator!" I look back and think how visionary they were: Marketers hiring a firm to improve their online marketing outcome. "Marketing coordinators" called during most of 1999 and 2000. They were tasked with identifying vendors who "do this sort of stuff." Hey, I'm not complaining. It was still the marketing department. Already, this was a big improvement. These marketing coordinators never made final decisions. They were the champions who would get the SEP vendor in front of the online marketing director or VP in those rare companies where such titles existed. In 2001, we began to hear from online marketing directors themselves. It marked the end of marketing coordinators running interference. In those first three or four years, companies spent the majority of their online budgets on banner ads, email, affiliate, and viral marketing, in that order. SEP was an afterthought. A miniscule budget (if any) was set aside for something "the Webmaster should have been doing, anyway." At conferences, I presented a slide showing how companies spent online ad dollars, with search in last place. I challenged them, "Your marketing mix is upside down. Search is foundational. Search must be first!" 
Again, perhaps someone heard me because, by 2002, the director of online marketing was regularly involved in all conversations with what we now call the SEM firm. They were serious about search and made it a priority. Perhaps the evolution of the category name to search engine "marketing" helped. SEM exploded in 2002 and 2003, due in large part to pay-per-click (PPC) advertising. We began telling the marketplace that SEM drives real value and costs real money. People listened to successful search marketers. Companies realized huge gains from SEM. SEM was driving many qualified customers to Web sites, they were converting, and it was completely measurable. It raised the bar on all other marketing spending. We began to consistently hear from the VP by late 2003 and early 2004. The online marketing VP listened to each SEM firm's pitch. He personally negotiated the contract and wanted to be kept in the loop on campaign returns and strategy. Where to next? Straight to the CMO. In many Fortune 500-size organizations, there's a disconnect. It's not just between the CMO and the SEM campaign, but the CMO and online marketing in general. Even brand marketers aren't involved in online, much less SEM. How do we get CMOs' ears? They must be convinced their audiences are in motion. The consumer is no longer "reachable." Instead, she's searching. She's migrated so many of her activities online that, to reach her, companies must help her find them. It's inquiry marketing. It sounds passive, but it's not. To get CMOs' ears, we must convince them customer behavior has fundamentally changed. That a new medium is an entrenched part of the customer's life, and this medium is dominated by search activity. It puts the customer in control but enables the marketer to be in the path of the customer's inquiry, her research, and her purchase intent. If only that marketer knew how to intersect that path. It's one part PPC search advertising, one part paid inclusion, and one part natural SEO. In 2005, we'll reach CMOs but only if we prove potential customers' behavior has fundamentally changed and they've made search a critical part of the buying cycle. Search and be found, or fail to be found and lose the searcher. And by the way, "the searcher" is your customer. Welcome to our tribe. Want more search information? ClickZ SEM Archives contain all our search columns, organized by topic. Fredrick Marckini is the founder and CEO of iProspect. Established in 1996 as the nation's first SEM-only firm, iProspect provides services that maximize online sales and marketing ROI through natural SEO, PPC advertising management, paid inclusion management, and Web analytics services. Fredrick is recognized as a leading expert in the field of SEM and has authored three of the SEM industry's most respected books: "Secrets To Achieving Top-10 Positions" (1997), "Achieving Top-10 Rankings in Internet Search Engines" (1998), and "Search Engine Positioning" (2001, considered by most to be the industry bible). Considered a pioneer of SEM, Frederick was named to the Top 100 Marketers 2005 list from "BtoB Magazine." Fredrick is a frequent speaker at industry conferences around the country, including Search Engine Strategies, ad:tech, Frost & Sullivan, and the eMarketing Association. In addition to ClickZ columns, He has written bylined articles for Search Engine Watch, "BtoB Magazine," "CMO Magazine," and numerous other publications. 
He has been interviewed and profiled in a variety of media outlets, including "The Wall Street Journal," "BusinessWeek," "The New York Times," "The Washington Post," "Financial Times," "Investor's Business Daily," "Internet Retailer," and National Public Radio. Fredrick serves on the board for the Ad Club of Boston and was a founding board member of the Search Engine Marketing Professional Organization (SEMPO). He earned a bachelor's degree from Franciscan University in Ohio. 3 ways organizations can improve digital strategy Meet the CMO Behind the Transformation of America's (Favorite) Diner Sales Is the New Marketing: 7 Ways to Embrace It Let's Not Be Nostalgic for the 'Mad Men' Era Ask the Digital Marketing Experts: What’s Up in 2015? Get the ClickZ Search newsletter delivered to you. Subscribe today!
计算机
2015-48/1913/en_head.json.gz/3184
NSF Announces New Expeditions in Computing Awards Pursuing ambitious, fundamental research that promises to define the future of computing and information The National Science Foundation (NSF) has announced three new Expeditions in Computing awards. The awards will provide up to $10 million in funding over five years to each of the selected projects. "There is a great deal of creativity in the computer science research community today," said Deborah Crawford, acting assistant director for Computer and Information Science and Engineering (CISE) at NSF. "Our intentions with the Expeditions in Computing program are to stimulate and use that creativity to expand the horizons of computing," she said. "For example, several of the projects will be exploring new computational approaches to some of the most vexing problems we face in the science and engineering enterprise as well as in the larger society."Each award features a top-notch team working on one of the most challenging computing and information science and engineering issues today. The new awards are:Computational Behavioral Science: Modeling, Analysis, and Visualization of Social and Communicative Behavior
Lead PI: James Rehg, Georgia Tech
Collaborators: USC, Boston University, UIUC, CMU, MITIt is well-known that the social and communicative behavior of children as young as 12-24 months contains important clues about their risk for a variety of developmental disorders, such as autism and Attention Deficit Hyperactivity Disorder (ADHD). Moreover, the ability to identify and treat such disorders at an early age has been shown to significantly improve outcomes. Autism represents a particularly compelling need in the US, since it affects one child in 110 with a lifetime cost of care at $3.2 million per person. This Expeditions project aims to develop novel techniques for measuring and analyzing the behavior exhibited by children and adults during face-to-face social interactions, including interactions between caregivers and children, children playing and socializing in a daycare environment, and clinicians interacting with children during individual therapy sessions. By developing methods to automatically collect fine-grained behavioral data, this project will enable large-scale objective screening and more effective therapy delivery and assessment to those in need, including socio-economically disadvantaged populations. More generally, this new computational technology will make it possible to automatically measure the behavior of large numbers of individuals in a wide range of settings over long periods of time. Other disciplines, such as education, marketing, and customer relations, could benefit from a more objective data-driven approach to behavioral assessment. The long-term goal of this project is the creation of a new scientific discipline of computational behavioral science, which draws equally from computer science and psychology in order to transform the study of human behavior.Understanding Climate Change: A Data Driven ApproachLead PI: Vipin Kumar, University of MinnesotaCollaborators: North Carolina A & T University, North Carolina State University, Northwestern University, University of Tennessee/Oak Ridge National LaboratoryClimate change is the defining environmental challenge facing our planet. Yet, there is considerable uncertainty regarding the social and environmental impact due to the limited capabilities of existing physics-based models of the Earth system. Consequently, important questions relating to food security, water resources, biodiversity, and other socio-economic issues over relevant temporal and spatial scales remain unresolved. A new and transformative approach is required to understand the potential impact of climate change. Data driven approaches that have been highly successful in other scientific disciplines hold significant potential for application in environmental sciences. This Expeditions project aims to address key challenges in the science of climate change by developing methods that take advantage of the wealth of climate and ecosystem data available from satellite and ground-based sensors, the observational record for atmospheric, oceanic, and terrestrial processes, and physics-based climate model simulations. These innovative approaches will help provide new understanding of the complex nature of the Earth system and the mechanisms contributing to the adverse consequences of climate change, such as increased frequency and intensity of hurricanes, precipitation regime shifts, and the propensity for extreme weather events that result in environmental disasters. 
Methodologies developed as part of this project will be used to gain actionable insights and to inform policymakers.Variability-Aware Software for Efficient Computing with Nanoscale DevicesLead PI: Rajesh Gupta, University of California, San DiegoCollaborators: Stanford, UC Irvine, UCLA, University of Illinois at Urbana-Champaign, University of MichiganAs semiconductor manufacturers build ever smaller circuits and chips, they become less reliable and more expensive to produce — no longer behaving like precisely chiseled machines with tight tolerances. Understanding the variability in their behavior from device-to-device and over their lifetimes — due to manufacturing, aging, and different operating environments — becomes increasingly critical. This project fundamentally rethinks the hardware-software interface and proposes a new class of computing machines that are not only adaptive but also highly energy efficient. It envisions a computing system where components — led by proactive software — routinely monitor, predict and adapt to the variability of the manufactured systems in which they are placed. These machines will be able to discover the nature and extent of variation in hardware, develop abstractions to capture these variations, and drive adaptations in the software stack from compilers to runtime to applications. The resulting computer systems will work while using components that vary in performance or grow less reliable over time and across technology generations. A fluid software-hardware interface will thus mitigate the variability of manufactured systems and make machines robust, reliable and responsive to the changing operating conditions. Changing the way software interacts with hardware offers the best hope for perpetuating the fundamental gains of the past 40 years in computing performance at a lower cost. In addition to plans for involving graduate and undergraduate students in the research, the team has built strong industrial ties and is committed to outreach to community high-school students through a combination of tutoring and summer school programs.The Expeditions in Computing program made its debut in 2008 with four awards. With funding appropriated to NSF in 2009 through the American Recovery and Reinvestment Act (ARRA), the agency was able to support three trailblazer Expeditions. The new awards bring the total number of Expeditions projects currently receiving NSF support to ten. In the future, NSF will make Expeditions awards following an 18-month cycle. "Past Expeditions awards are beginning to show exciting results in a variety of applications and fields," said Mitra Basu, program director for the Expeditions program. "We're confident that this latest group of projects will continue to push the frontiers of computing. Related Reading Tools To Build Payment-Enabled Mobile AppsAppGyver AppArchitect 2.0 AppearsSmartBear Supports Selenium WebDriverXMind 6 Public Beta Now AvailableMore News» Commentary Things That Go BoomFarewell, Dr. Dobb'sXamarin Editions of IP*Works! 
& IntegratorDevart dbForge Studio For MySQL With Phrase CompletionMore Commentary» Slideshow Jolt Awards 2014: The Best Testing ToolsThe Most Underused Compiler Switches in Visual C++Jolt Awards: Mobile Development ToolsDeveloper Reading List: The Must-Have Books for JavaScriptMore Slideshows» Video Verizon App Challenge WinnersIntel at Mobile World CongressOpen Source for Private CloudsConnected VehiclesMore Videos» Most Popular The C++14 Standard: What You Need to KnowJolt Awards 2015: Coding ToolsA Gentle Introduction to OpenCLBuilding Scalable Web Architecture and Distributed SystemsMore Popular» More Insights More >> Reports Hard Truths about Cloud Differences Return of the Silos More >> Webcasts Real results: Speeding quality application delivery with DevOps [in financial services] New Technologies to Optimize Mobile Financial Services More >> INFO-LINK Agile Desktop Infrastructures: You CAN Have It All High Performance Computing in Finance: Best Practices Revealed Client Windows Migration: Expert Tips for Application Readiness IT and LOB Win When Your Business Adopts Flexible Social Cloud Collaboration Tools Accelerate Cloud Computing Success with Open Standards More Webcasts>> State of Cloud 2011: Time for Process Maturation SaaS and E-Discovery: Navigating Complex Waters SaaS 2011: Adoption Soars, Yet Deployment Concerns Linger Research: State of the IT Service Desk Will IPv6 Make Us Unsafe? More >>
计算机
2015-48/1913/en_head.json.gz/3259
MMO Glitch Unlaunches, Goes Back to Beta Two months ago, developer Tiny Speck launched Glitch, a unique MMO that distinguished itself from other entries in the genre. The game successfully allows you to screw around and play it as you see fit. It's also completely devoid of battling and the usual MMO tropes and instead bases its gameplay around collection, skills, exploration, and socializing for the fun of it. I don't like MMOs, but I sure as hell like Glitch. Heck, I really, really dig Glitch. By offering something different from the typical MMO, the game really invited me to experience its world at my own pace, making it a joyous experience to return to repeatedly. Though Glitch may be fun, Tiny Speck has decided to go back to the drawing board. According to a post on the game's official site, Glitch will be "unlaunched" and go back to Beta. Several bugs will be fixed in this time, and other aspects of the game will be addressed, including the addition of new skills and quests. Quite possibly the biggest reason for the move, however, is Tiny Speck's aim to make Glitch flow much faster right from the get-go, all the while providing a more engaging experience for those players who have sunk several hours into the game. Oh, and don't worry, if you spent real world money on the game, the developers are totally willing to give you a refund if you seriously feel that the step back to Beta is a total buzzkill. Tiny Speck is likely to have more updates on the game's post-launch development, so stay tuned. David Sanchez is the most honest man on the internet. You can trust him because he speaks in the third person.
计算机
2015-48/1913/en_head.json.gz/3463
Home About Contact Collections Computing Donating Employment Science News RSS Services 25th Anniversary of the World Wide Web On March 12, 1989, Tim Berners-Lee submitted his idea for what would become the World Wide Web. Berners-Lee was working at the European Organization for Nuclear Research, or CERN, at the time and saw the need to exchange information amongst many people in a quick and easy manner. Berners-Lee’s idea was generally ignored, but he began work on a “large hypertext database with typed links” using a NeXT workstation. While many of his colleagues were not excited about the idea, Robert Cailliau began working alongside Berners-Lee. After pitching the idea of connecting hypertext with the Internet to the European Conference of Hypertext Technology in September of 1990, the men still had no support from vendors or at CERN. By December of 1990, Berners-Lee has developed all the tools necessary for the Web. The first website and server were launched at CERN. Berners-Lee’s first webpage was all about his WWW project and the components needed to build webpages. The NeXT computer that he used as a server is located at Microcosm, which is the public museum at CERN. Berners-Lee worked on his WWW project both at CERN and at home. The first webpage is lost. By January of 1991, CERN began distributing information about their WWW system to others in the physics community in order to help others build their own software. On August 6, 1991, Berners-Lee released a report on his WWW project to the public. By December of 1991, the first server was installed at the Stanford Linear Accelerator Center in California. By 1993, the University of Illinois released the Mosaic browser, which allowed for the WWW to be run on everyday personal computers. In 1993, CERN also released the WW source code, ensuring that it would remain in the public domain. In late 1993, there were approximately 500 webservers. This number skyrockets by the end of 1994, where there were over 10,000 servers and 10 million users. Items in Our Collection Weaving the Web: the original design and ultimate destiny of the World Wide Web by its inventor by Tim Berners-Lee; Mark Fischetti How the Web was born: the story of the World Wide Web by James Gillies; R Cailliau Computer: a history of the information machine by Martin Campbell-Kelly; William Aspray Last update: Mar 06, 2014 Hours |
计算机
2015-48/1913/en_head.json.gz/3641
Contact Advertise Microsoft Manager: We Copied the Mac OS X Look-and-Feel posted by Thom Holwerda on Wed 11th Nov 2009 20:40 UTC Okay, so this is new. When it comes to graphical user interfaces, everyone is copying everyone, but you'll always find supporters of platform Abc claiming platform Xyz is stealing from them - and vice versa. Mac supporters have often stated that Vista and Windows 7 were copying from Apple, and according to Microsoft's partner group manager, Simon Aldous, this is true. Wait, what?I stumbled upon this one just now on an Apple news website, and it made me stare at my screen for a few blinking seconds. A Microsoft manager saying flat-out that Windows 7 copied Mac OS X? Surely, he had been misquoted. Or something. Well, turns out he wasn't. Microsoft is currently holding a partner conference in Wembley Stadium, and as such, PCR Online decided to interview Microsoft's partner group manager, Simon Aldous. When asked "is Windows 7 really a much more agile operating system, in terms of the specific uses it can be moulded to?", he replied: The interesting thing is, it's basically the next version of Vista. Vista was a totally redesigned operating system from XP. We've improved upon Vista in that way. We've stripped out a lot of the code, we've made a lot of it much more efficient, it sits on a smaller footprint, it operates far more quickly, it's far more agile and effective in terms of the calls it makes. I saw an article recently that described it as 'Vista on steroids', and in some ways you can absolutely relate to that. One of the things that people say an awful lot about the Apple Mac is that the OS is fantastic, that it's very graphical and easy to use. What we've tried to do with Windows 7 – whether it's traditional format or in a touch format – is create a Mac look and feel in terms of graphics. We've significantly improved the graphical user interface, but it's built on that very stable core Vista technology, which is far more stable than the current Mac platform, for instance. I have a faint feeling we'll see a redaction or a I've-been-misquoted claim soon, because something like this surely shouldn't be said by a Microsoft manager. It sure is refreshing, but at the same time, it's also complete nonsense. I see little in Windows 7 that makes it look or function like Mac OS X, and I see little Aqua in Aero either. I'm having the feeling a certain group of people are going to have a field day with this one. I have to say, if this is indeed what Microsoft wants to profess - than more power to them. The honesty would be a welcome change of pace in this industry. (5) 105 Comment(s) Related Articles Lumia 950 reviews: too little, too lateMicrosoft investigating Win32 support for ContinuumMicrosoft's Android app emulation not happening anytime soon
计算机
2015-48/1913/en_head.json.gz/3666
Symantec backtracks on Adobe Flash warning A bug originally reported by Symantec to be a new, unpatched vulnerability in Adobe Flash Player was actually patched last month. Robert McMillan (IDG News Service) on 29 May, 2008 08:24 After warning on Tuesday that hackers were exploiting an unpatched bug in Adobe Systems' Flash Player software, Symantec has backtracked from this claim, saying the flaw is "very similar" to another vulnerability that was patched last month. Symantec's initial warning described a disturbing threat -- a previously unknown and unpatched flaw that was being exploited on tens of thousands of Web pages. The flaw allowed attackers to install unauthorized software on a victim's machine and was being used to install botnet programs and password-logging software, Symantec said. Now Symantec believes that the bug was previously known and patched by Adobe on April 8, said Ben Greenbaum, a senior research manager with Symantec Security Response. However, the Linux version of Adobe's stand-alone Flash Player, version 9.0.124, is vulnerable to the attack. On Tuesday Symantec researchers saw that the attack worked on Linux and that it caused Flash Player to crash on Windows XP, so they reasoned that they had a new bug that was just not working properly on the Windows platform, possibly due to a programming error by the hackers. "We thought it was a problem with the exploit," he said. Now Symantec believes that the vulnerability was simply not properly patched in this one version of Adobe's software, Greenbaum said. That means that Windows and Mac OS X users with the latest updates are not vulnerable, and even Linux users who are running the latest Flash Player plugin inside their browser, rather than as stand-alone software, are safe. However, Windows XP users running the older Flash Player, version 9.0.115, are vulnerable to the attack, Greenbaum said. This kind of missed security assessment is rare, but it does happen from time to time, said Matt Richard, director of VeriSign's iDefense Rapid Response Team. "It looks like they just jumped the gun and put it out a little bit too early without doing all the homework," he said of Symantec. "When we did our testing in the lab, the latest version completely fixes the issue: No crashes, no exploits, no nothing." IBM's Internet Security Systems (ISS), which is credited with discovering the Flash Player bug, echoed Richard's analysis. "Several reports have stated that a zero-day Flash vulnerability is being exploited through several Chinese hacker websites," ISS wrote in its advisory on the flaw. "All of the samples X-Force has seen target the vulnerability disclosed in this Advisory." In a note on its Web site, Symantec said that it was working with Adobe to figure things out. An Adobe spokesman said Wednesday that his company was "still trying to get to the bottom of this," but expected to have an update by around noon Pacific time on Wednesday.
计算机
2015-48/1913/en_head.json.gz/3848
BrowserID: what it is and why you should care By Marco Fioretti, Internet BrowserID wants to help you prove your identity online Shares BrowserID is a method, presented in July 2011, to use email addresses to prove an identity and sign in to a website quickly and safely. See today's best Black Friday dealsThe system was developed by Mozilla Labs. It's designed to be easier and faster than the esisting method of a site sending you an email and you clicking a link to verify your true identity. So why is it important and how will it work? We decided to find out. Q. How would it work in practice? A. In order to log in on a website that supports BrowserID, you would only have to click on a Sign In button and then select from a menu what email address you want to use. Your browser and the website would take care of everything else. Article continues below Q. What about logging in via Facebook, Twitter or Google? That would be even faster and simpler, wouldn't it? A. Yes, when you're browsing while logged in to any of those portals, you don't have to do anything, since any website connected with them will immediately know who you are. And that's the problem. Outsourcing these tasks to giant private providers creates lots of lock-in and privacy protection issues. Q. That's surely true, but wait a second! Wasn't OpenID supposed to provide (more or less) the same service? A. Indeed it was. In practice, it looks as if OpenID failed to reach critical mass for several reasons. Probably the biggest one was the need to temporarily go to another website to gain access to the one you wanted to visit. Unless someone really understands the value of reliable online authentication services (and cares about it) that's much more cumbersome than just telling a browser to remember all passwords, or click on the Remember Me boxes provided by most log-in web forms. BrowserID tries to provide the same level of security and trust as OpenID, but in a much more transparent way. Q. Tell me more about privacy protection in BrowserID, please.A. First of all, unlike other sign-in systems, BrowserID does not force the user to share or transmit online personal, sensitive data, such as date of birth. In addition to this, BrowserID is designed not to pass to any server data about which web pages you visit. Q. Why is BrowserID based on email addresses? A. First of all, because everybody using the web on a regular basis already has at least one email address and knows it's already used as an identity and authorisation token. Next, because email addresses are not controlled or controllable by any single organisation. Finally, because practically all websites that require their users to log in already store their email addresses to handle direct communications, password reset requests and other services: therefore, BrowserID gives them a better way to use for authentication some user data that they have already. Q. Would BrowserID prevent me from using my favourite nicknames on those websites? A. Not at all. The email address is used only for the initial authentication. BrowserID doesn't limit in any way how a website lets you configure your local account. Q. Could I have multiple BrowserID identities then? A. Of course. The only requirement is that each of them is associated with a different email address. Q. What about other applications, such as chat clients? Could I use BrowserID with them too, or is it a browser-only thing? A. 
Yes you could, as long as those programs implement the protocol, and provide their users with an interface to log in to their identity provider to get the keys. These may then be stored in Kwallet or any other desktop-based password manager. Q. Sorry, what protocol and keys? Is BrowserID based on some sort of proprietary technology? A. No. Technically speaking, BrowserID is an application of the Verified Email Protocol; a decentralised authentication system based on public/private key cryptography, through which users can prove to a website that they own an email address. Q. Does BrowserID work on all browsers? A. BrowserID can work on every modern browser, including mobile ones. The only requirement is that those browsers be compatible with the BrowserID JavaScript API. This said, even if you were forced to use a noncompliant browser, it would still be possible to use an equivalent web-based service. Q. What should I do to start using BrowserID? A. You should log in the old way to the website of your identity provider. That server will then tell your browser, through a JavaScript API, to generate a public/private pair of cryptographic keys. Right after that, the browser will send the public key to the identity provider and get back a signed identity certificate. The browser will then store the private key and certificate as it would do with traditional passwords. Q. What would happen next, when I visit a BrowserID-compliant website? A. That website will tell your browser to run a JavaScript function that asks you if you want to log in and with which identity – that is email address. Q. And when I accept... A. The browser will send to the website the identity certificate, signed with the private key. At that point, the website will download your public key from your identity provider and verify that the signature is authentic. Q. And that's how I'll prove to that website that I really am who I say I am? A. Yes… and no. What this procedure provides is a third-party confirmation (unlike what happens with cookies!) that the authentication request comes from a browser that has the secret key associated to the provided email address. Which means that… Q. I should never let other people use my browser! A. That's absolutely true. However, that's the same risk you already face with every other authentication system that doesn't force you to enter a password every time, isn't it? Q. I suppose that's true, but this also means I won't be able to authenticate from other browsers, right? A. It depends. That's really up to you. In and by itself, BrowserID does allow you to have one certificate for each computer or smartphone you use, including borrowed or public ones such as internet kiosks. Of course, in those cases you would have to delete the private key and certificate as soon as you're done! Q. Let's go back to identity providers. You keep mentioning them – who are they? A. In the simplest and most natural scenario, your BrowserID identity provider would be your email provider. Q. What if it doesn't support the system? A. You could still use, without problems, a trusted, secondary identity provider that offers the same services. The Mozilla Foundation, for example, has set up a website called BrowserID.org for this very purpose, in order to speed up testing and adoption of BrowserID. Q. Ah, yes, adoption. What is the current status of BrowserID? Is anybody already using it? A. At the time of writing this piece (late November), BrowserID is still in its infancy. 
Most browser developers haven't announced any official plans to integrate BrowserID support in their software. That's not the main problem, though. Q. Really? What is it then? A. The real open issue is if and when the major email providers and online communities, such as Facebook and Twitter, will support BrowserID – that is, become identity providers. Especially when, like Facebook, they have their own in-house alternative. Besides, all these providers would need to agree on a standard way to make public keys accessible. Luckily, none of this makes it impossible to try BrowserID or implement it on your website. Q. That's cool. How can I try it today? A. For the moment, the best way to see how using BrowserID looks is to visit the official demo site at Myfavoritebeer.org. Q. What about webmasters? A. If they use popular open source software, such as WordPress or Drupal, they're lucky: BrowserID plug-ins for those content management systems already exist. Alternatively, they'd have to follow the instructions for developers published at browserid.org. Even in that case, though, they'd be able to use BrowserID without having to write any authentication code by themselves.
First published in Linux Format Issue 154
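The key-and-assertion exchange described in the Q&A above can be sketched in a few lines of browser-side script. This is only an illustrative sketch: it assumes the navigator.id.get() entry point that Mozilla's early BrowserID shim exposed to pages, plus a hypothetical /auth/browserid endpoint on the relying website that forwards the signed assertion (together with the site's own origin as the "audience") to a verification service such as the one hosted at browserid.org. The endpoint name and the surrounding markup are assumptions, not details from the article.

```typescript
// Minimal BrowserID sign-in sketch (see the assumptions noted above).
// The page asks the browser for a signed assertion tied to one of the
// user's email addresses, then hands it to its own server to verify.

type BrowserId = { get(callback: (assertion: string | null) => void): void };

function signIn(): void {
  // navigator.id is the entry point early BrowserID builds exposed;
  // cast because it is not part of the standard DOM typings.
  const id = (navigator as unknown as { id?: BrowserId }).id;
  if (!id) {
    console.log("BrowserID not available; fall back to another login method");
    return;
  }
  id.get((assertion) => {
    if (!assertion) return; // user dismissed the sign-in dialog
    // Hypothetical relying-party endpoint: the server re-checks the
    // assertion (for example by POSTing it to a verifier service together
    // with the site's own origin as the "audience") before opening a session.
    fetch("/auth/browserid", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ assertion }),
    })
      .then((res) => res.json())
      .then((result) => console.log("login result", result));
  });
}

document.getElementById("sign-in")?.addEventListener("click", signIn);
```

The point the sketch tries to make is the one the article stresses: the page never handles a password, only a short-lived signed assertion that proves control of an email address, and the website still verifies that signature on its own server before opening a session.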
计算机
2015-48/1913/en_head.json.gz/4544
Last year, Hewlett Packard Company announced it will be separating into two industry-leading public companies as of November 1st, 2015. HP Inc. will be the leading personal systems and printing company. Hewlett Packard Enterprise will define the next generation of infrastructure, software and services. Public Sector eCommerce is undergoing changes in preparation and support of this separation. You will still be able to purchase all the same products, but your catalogs will be split into two: Personal systems, Printers and Services and Servers, Storage, Networking and Services. Please select the catalog below that you would like to order from. Note: Each product catalog has separate shopping cart and checkout processes. Personal Computers and Printers Select here to shop for desktops, workstations, laptops and netbooks, monitors, printers and print supplies Server, Storage, Networking and Services Select here to shop for Servers, Storage, Networking, Converged Systems, Services and more.
计算机
2015-48/1913/en_head.json.gz/5195
Posted Ouya: ‘Over a thousand’ developers want to make Ouya games By Aaron Colter Check out our review of the Ouya Android-based gaming console. Even after the relatively cheap, Android-based Ouya console proved a massive success on Kickstarter (the console was able to pull in nearly $8.6 million from investors despite having an initial goal of only $960,000), pundits and prospective owners of the new gaming machine loudly wondered how well it would be able to attract developers who would otherwise be making games for the Xbox 360, iPhone or PC. Assuming you believe official statements made by the people behind the Ouya console, there is nothing to worry about on that front. “Over a thousand” developers have contacted the Ouya creators since the end of their Kickstarter campaign, according to a statement published as part of a recent announcement on who will be filling out the company’s leadership roles now that it is properly established. Likewise, the statement claims that “more than 50” companies “from all around the world” have approached the people behind Ouya to distribute the console once it is ready for its consumer debut at some as-yet-undetermined point in 2013. While this is undoubtedly good news for anyone who’s been crossing their fingers, hoping that the Ouya can make inroads into the normally insular world of console gaming, it should be noted that while these thousand-plus developers may have attempted to reach the Ouya’s creators, the company offers no solid figures on how many of them are officially committed to bringing games to the platform. That “over a thousand” figure means little if every last developer examined the terms of developing for the Ouya and quickly declined the opportunity in favor of more lucrative options. We have no official information on how these developer conversations actually went, so until we hear a more official assessment of how many gaming firms are solidly pledging support to the Ouya platform, we’ll continue to harbor a bit of cynicism over how successful this machine might possibly be. As for the aforementioned personnel acquisitions, though they’re less impressive than the possibility that thousands of firms are already tentatively working on games for the Ouya, they should offer a bit more hope that the company making the console will remain stable, guided by people intimately familiar with the gaming biz. According to the announcement, Ouya has attracted former IGN president (and the first investor in the Ouya project) Roy Bahat to serve as chairman of the Ouya board. Additionally, the company has enlisted former EA development director and senior development director for Trion Worlds’ MMO Rift, Steve Chamberlin, to serve as the company’s head of engineering. Finally, Raffi Bagdasarian, former vice president of product development and operations at Sony Pictures Television has been tapped to lead Ouya’s platform service and software product development division. Though you may be unfamiliar with these three men, trust that they’ve all proven their chops as leaders in their respective gaming-centric fields. Expect to hear more solid information on the Ouya and its games line up as we inch closer to its nebulous 2013 release. Hopefully for the system’s numerous potential buyers, that quip about the massive developer interest the console has attracted proves more tangible than not.
计算机
2015-48/1913/en_head.json.gz/5420
1/9/2013 09:34 PM | Larry Seltzer | Commentary
Should Microsoft Switch Internet Explorer to WebKit? No
Internet Explorer has made great strides in recent years and is now an excellent, very fast browser. Yet it still gets second-rate treatment from developers for whom IE has the taint of "uncool," and it has a small presence on mobile devices. Perhaps the best thing would be for Microsoft to throw in the towel on their Trident browser layout engine and adopt WebKit, the emerging de facto standard. There are also plenty of reasons not to switch browser layout engines.
Just as it has been since Windows 95 was ascendant and dinosaurs roamed the earth, Internet Explorer is the dominant web browser in desktop computer use. But even though there seem to be hundreds of millions of users running it on desktops and notebooks, Internet Explorer gets no respect from the Web developer community, and it often gets second-rate support among desktop browsers. On mobile devices IE is no doubt growing as a share of the total, but still a very small player. The intelligent mobile Web developer focuses on getting his or her web site to look good in the dominant mobile browsers — Safari, the (pre-4.0) Android Browser, and Google Chrome — all of which are based on the WebKit layout engine. This means that Windows Phone and Windows 8 users often run into web site problems in IE 10. Windows 8 users can at least install a different browser, but Windows RT and Windows Phone users have only Internet Explorer. As Microsoft MVP Bill Reiss argues, this is bad for Microsoft's users. He thinks it's time for Microsoft to throw in the towel on their own layout engine, known as Trident and implemented on desktop Windows in the MSHTML.DLL program file, and switch to WebKit. This is really a fascinating proposal. There are plenty of very good reasons to do it. There are also plenty of reasons not to. On the whole, I have to decide against the move, but it's not an easy decision. Caving in to the WebKit juggernaut would reduce a lot of friction that makes life difficult for Windows developers and users. It might even inspire many developers who now shun Windows 8 and Windows Phone to support those platforms, since it would be much less work to do so. And for all the progress that Microsoft has made with IE, there are some areas where it really lags, with HTML5 compliance at the top of the list. Tests just now at html5test.com give me these results (all out of a total of 500 points, higher being better):
Internet Explorer 9 (Windows 7): 138
Internet Explorer 10 (Windows 8 and Windows Phone): 320
Safari 6.0.2 (OS X 10.7): 368
Firefox 18 (OS X 10.7): 389
Chrome 23.0.1271.97 m (Windows 8): 448
So IE10 is a huge improvement on IE9, but it's still clearly at the rear of the pack, and Chrome makes it look really bad. So why not make the switch? I'm a security guy, and security problems often are the first thing to come to mind for me. Most people still don't appreciate it, but IE is probably the most secure browser available, and has been for some time. If you follow vulnerability reports you'll see that WebKit has a high volume of them, and they are fixed on very different schedules in the various WebKit products. Microsoft can fix the much smaller number of security problems in IE on their schedule. By joining in with the WebKit consortium, Microsoft loses some control over the schedule for such fixes. Microsoft would also lose control over feature decisions, some of which involve security. 
Consider WebGL, an open standard for high-speed graphics in browsers, supported by all the major browsers, except Internet Explorer. Microsoft has decided that WebGL is inherently unsecurable and it won't be in any of their browsers. If they move to WebKit, they don't get to make decisions like this. Reiss doesn't say whether he's speaking only about mobile browsers or also about the desktop, but it's a point worth exploring. Many, many corporate developers write web code with Internet Explorer as their development target. Messing things up for them would be a bad thing. But Microsoft can't decide to make the changes only for mobile, because it's central to Microsoft's marketing that the tablet market is really just part of the PC market. There could be a middle ground I suppose. Microsoft could provide two browsers, or allow the user (or maybe even the web site) to switch engines. But it's just not something they would do. It's too complicated and they still get all the downsides of WebKit. Finally, as Reiss himself points out, it's often not a good thing to have one dominant standard. He cites Daniel Glazman, the co-chairman of the W3C's CSS standards working group, who is concerned about the tendency of so many mobile developers to target WebKit rather than standards. WebKit has many features that go beyond standards and many sites rely on them. If the WebKit-only phenomenon is unstoppable, then the only practical way to deal with it may be to cry "Uncle!" and switch to WebKit. I don't think things have gotten that bad. Microsoft needs to keep features in IE/Trident developing to keep up with WebKit and then, if Microsoft can produce the market share to justify it, developers will support IE. Probably. It's not a clear decision. What do you think? Please argue in the comment section below.
Reader comment from jeffweinberg: I Disagree. As a professional web developer for the past 10 years, I can't tell you how many hundreds, perhaps thousands of hours I have wasted on IE bugs. If you multiply that across the entire industry, how many billions of dollars in productivity are wasted on dealing with IE bugs? It's time to get on board with WebKit.
计算机
2015-48/1913/en_head.json.gz/5754
F-Secure Internet Security 2011: Easy to use but takes a long time to scan
By Christopher Null
Pros: Easy to use; Good for family computers
Cons: Leaves behind some code when uninstalled; Scans can take a while
Bottom line: F-Secure Internet Security did a reasonably good job at blocking threats and is easy to use, but it suffers from slow scan speeds.
Simple, simple, simple. That's the marching order of F-Secure Internet Security 2011 ($119 for one year, three PCs, as of 12/2/2010), an antimalware utility that focuses on safeguarding the computers of novices and especially families. You'll see this approach right from the start: Even before the software is installed you are asked to configure parental controls and set an access password, which is used to change settings. And once you do get F-Secure up and running, that aim to make things simple continues: The F-Secure home screen has only six real options to choose from, and even the most oblivious novice should be able to figure out how to get around its interface. Our only quibble: Clicking the Scan button only runs a quick scan by default; you have to use the pull-down arrow to run a full scan of your PC. You might think that this focus on newbies would result in stripped-down security levels, but although F-Secure has been an also-ran in prior years, for its 2011 release, the company has stepped up its game to compete with the big boys. In our tests, the software fully blocked 22 of 25 real-world attacks (it partially blocked an additional two) and detected 98.1 percent of known malware. False positives? Absolutely none. And F-Secure's 80 percent success rate at disinfecting active malware components on virus-ridden systems was among the top performers. Operation speed was another issue: While F-Secure barely slowed our test systems during background operation, it was terribly slow at on-access scans, pulling a dismal last place in the time it takes to scan a file as it opens. Things were marginally better, but not much, with on-demand scanning; the app was still in the bottom tier of performers though at least it wasn't camping out at the end of the list. F-Secure's approach to hand-holding is that you don't need it. Internet Security 2011 has one of the most Spartan help systems of any application we tested, just a handful of entries in a typical help tree, and zero documentation aside from a browser-based tutorial. F-Secure, fortunately, is basically correct: We can't imagine actually needing to refer to the documentation for the app, unless you're dying to know, say, what "DeepGuard" is and what it means to turn it on and off. It's inspiring to see a company which has languished as an unimpressive performer for years finally get back on the horse and take a leadership position in the security software space. While its speed problems are seriously troubling and it left behind some code remnants after we uninstalled it, those are really the only sore spots in what is otherwise an impressive and worthwhile security suite.
计算机
2015-48/1913/en_head.json.gz/6015
Exploring the Impact of HTML5 on Broadcasters By Wes Simpson Controversy has erupted in the past few months regarding the future of video over the Internet, in particular the refusal of Apple to authorize deployment of an Adobe Flash decoder for their iPad and the iPhone product lines. In an open letter published in April, Apple CEO Steve Jobs gave a lengthy explanation of "why we do not allow Flash on iPhones, iPods and iPads." In place of Flash, Apple is promoting a new standard for video files called HTML5. In this column, we'll take a look at how this emerging standard might impact broadcasters who distribute content over the Web. It's important first to understand what HTML5 is not. It is not a new video compression algorithm like H.264 or Dirac. It is not a new container format like MP4 (MPEG 4 Part 14), AVI (Audio-Video Interleave) or 3GP (used for 3G mobile phones). It is also not a radical departure from what has come before, as it is built upon HTML and XML, which are two versions of the markup language that are used everywhere on the Web today. WHAT IT IS So what is HTML5? It is a new method for delivering instructions to Web-enabled devices about how to handle video content. Many of today's browsers (Apple's Safari, Mozilla's Firefox, Google's Chrome) and ones that will be coming on the market in the near future (Microsoft's Internet Explorer 9) offer native support for video. This means that built-in support will be provided for decoding some types of compressed video within the browser itself, not requiring a plug-in such as Flash or Microsoft Silverlight to be installed. For example, Safari, which is available for Mac and Windows operating systems, has native support for H.264 built into the browser's native code, as does Chrome. Firefox doesn't support H.264 internally (due to the natural reluctance of the open-source community to use a patented technology like H.264), but it does natively support another video codec called Theora, which is also supported by Chrome and by the VLC media player program. So why the switch? Well, Mr. Jobs has a lot of reasons, but for most companies the key is open standards. While Adobe claims overall desktop penetration of their player at 98 percent, Flash is still an Adobe-exclusive product. So companies like Apple and Google might consider it to be in their best interests to find a way to offer an alternative to Flash. And, in HTML5, they may have found it. To see how HTML5 works, look at Fig. 1, which shows how a video file could be configured for playback over a Web connection. In this case, the video has been compressed with H.264 Main profile, and the audio is compressed using low-complexity advanced audio codec (AAC). Fig. 1: HTML5 in action, showing Web server, HTML5 code fragment, and Web page as displayed on portable device. These two content elements (shown in pink in the diagram) are then combined into an MP4 container (shown in yellow), which also contains other data that is useful in the streaming and playback operations. This other data might be a hint track that helps a streaming server determine how to packetize the video and audio files; or it could be metadata about the file or other useful stuff. This container (really a file) is then stored on a Web server until the user's browser requests it. Continuing with Fig. 1, the user's browser needs to be told how to find and process the container with the audio and video content. 
That's where HTML5 comes in, with a set of instructions that are embedded in the code for the Web page that the user is viewing. Note that, as with many Web media applications, the website that delivers the HTML5-coded webpage to the browser can be separate from the server that delivers the media content, as shown in this example. The HTML5 code can be written so that the video will automatically start playing when the user browses to the page, or it can be written to require the user to hit a play button. It may be interesting to note that currently Apple's handheld devices ignore the autoplay parameter, thereby preventing users from consuming mobile bandwidth until they hit the play button. IMPACT ON BROADCASTERS The good news about HTML5 is that broadcasters don't necessarily need to purchase a new set of compression devices if they are already using H.264. This is true for some of the big Web video providers (such as YouTube) who are already encoding their content using H.264. Of course, some tool changes will still be required, because many Web video streaming systems are based on Flash scripting. In the near future, these tools will also need to support HTML5, particularly if they are going to be useful for producing content for Apple's portable devices. One major issue for HTML5 deployment is the current lack of a Digital Rights Management (DRM) capability. While the popularity of DRM-protected content has never been high with end users, many content owners are uncomfortable publishing their video assets on the Web without some form of protection against unauthorized redistribution of content. This is particularly important for sites that charge user fees for content access, such as the forthcoming Hulu Plus service, which will be offering entire seasons of some popular network programs. For broadcasters, who may hold only limited rights to the content that they are streaming, delivering video streams that are not protected by DRM may simply be out of the question. In these cases, broadcasters may need to stick with Flash until or unless DRM is deployed for HTML5. For the near future, many Web content providers will probably use some combination of both HTML5 and Flash video. This can be done quite simply, by embedding the Flash video instructions as an object inside the HTML5 video element. For browsers that support HTML5, they will decode the video as indicated, and ignore the Flash objects. For browsers that don't support HTML5, they will ignore the video elements and process the Flash object. Instructions for doing this can be found easily on the Web. So what is the bottom line for broadcasters? Well, the transition may be like other transitions that this industry has gone through, where two different formats may need to be supported for awhile. Fortunately, this transition, if it really does happen, should be much less expensive than the switch from SD to HD. We can only hope. Wes Simpson is busy working on his new website at www.telecompro.tv. Sorry, there's no HTML5 video on it, yet, but your comments are always welcome at [email protected].
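As a rough illustration of the dual HTML5/Flash approach described above, the sketch below prefers native playback of an H.264/AAC MP4 file and falls back to a Flash player object when the browser reports that it cannot decode it. It expresses the same idea as the nested markup the article mentions, but as a small feature-detection script; the file names, the player.swf path and the codec string are placeholders rather than anything taken from the article.

```typescript
// Sketch: prefer native HTML5 playback of an H.264/AAC MP4, fall back to a
// Flash-based player when the browser cannot decode it natively.
// File names and the Flash .swf path are placeholders.

function attachPlayer(container: HTMLElement): void {
  const video = document.createElement("video");
  video.controls = true; // let the viewer press play (no autoplay)

  // canPlayType reports "", "maybe" or "probably" for a MIME type + codecs.
  const h264 = 'video/mp4; codecs="avc1.42E01E, mp4a.40.2"';
  if (video.canPlayType(h264) !== "") {
    const source = document.createElement("source");
    source.src = "clip.mp4";   // MP4 container with H.264 video + AAC audio
    source.type = h264;
    video.appendChild(source);
    container.appendChild(video);
  } else {
    // No native support: embed a Flash player instead, as the article suggests.
    const flash = document.createElement("object");
    flash.type = "application/x-shockwave-flash";
    flash.data = "player.swf";           // placeholder Flash video player
    const flashVars = document.createElement("param");
    flashVars.setAttribute("name", "flashvars");
    flashVars.setAttribute("value", "file=clip.mp4");
    flash.appendChild(flashVars);
    container.appendChild(flash);
  }
}

const slot = document.getElementById("player");
if (slot) attachPlayer(slot);
```

Either form, nested markup or a script like this, leaves the H.264/AAC encoding workflow untouched; only the delivery wrapper changes, which is why the article argues that broadcasters already encoding in H.264 need not buy new compression gear.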
计算机
2015-48/1913/en_head.json.gz/6785
Grand Central Dispatch: building for the future by Paul Daniel Ash Much has been made of the fact that there are not a lot of new features in Snow Leopard from a user point of view. While it's true that from a marketing standpoint Snow Leopard could have made a much bigger splash, especially this close to the Windows 7 release, Apple deserves some real credit for doing the right thing from an engineering point of view. Snow Leopard fixes a lot of what was broken in Leopard, provides real speed and usability improvements, and most importantly builds in solid support for technology - such as multi-core processors - that will be central to computing in the years to come. I want to take a brief look at one of the most significant new "under-the-hood" elements of Snow Leopard: the Grand Central Dispatch (GCD) model. GCD provides a set of extensions to common programming languages - specifically C, C++ and Objective C - that make it easier to take advantage of the multiple cores on modern processors. Cores are essentially separate computing engines located on the same chip that allow tasks to be handled in parallel - simultaneously - rather than one at a time. The problem with this approach in software - called task parallelism - is that it requires a lot of challenging work for the programmer. Computing jobs need to be broken up intelligently into tasks, and these tasks need to be farmed out to separate cores. That's being handled in threads by modern code. Programmers need to ensure that each thread has the resources it needs, that it locks the data that it's using and releases it when it's done, and that no other threads are using the resources at the same time. Complicating all of this is the fact that the programmer has no way of knowing whether the code will be run on a dual-core or quad-core chip... or some future William Gibson thousand-core black-ice �ber-mainframe. GCD allows the programmer to define units of code and data into Blocks, which can then be put on Queues for scheduling. The system then handles execution of all the scheduled Queues in an efficient, coordinated way, rather than letting individual programs grab resources willy-nilly. Apple's clearly invoking Grand Central Station to suggest the efficient flow of large volumes of traffic. It's an apt metaphor: the amount of parallelizable code will likely grow enormously to take advantage of faster chips with more cores. Leaving thread management up to individual programs, as in the status quo, will likely lead to worse and worse pile-ups. The Windows 7 release will certainly be a big media splash. Hopefully, we'll also see real improvements in the underlying technology on which most of the world's PCs depend. It will be interesting to see which architecture best takes advantage of future processors as the technology continues to advance. Paul Daniel Ash
计算机
2015-48/1913/en_head.json.gz/7406
Interview: Blizzard's Afrasiabi On WoW's Cataclysm-ic Expansion September 24, 2009 | By Chris Remo Blizzard's most recently-announced World of Warcraft expansion, Cataclysm, is said to radically alter nearly every part of the game's world, which makes it the perfect game for the studio to show off its storytelling chops. Cataclysm's lead world designer Alex Afrasiabi says that over the game's five-year history, Blizzard has learned a great deal about how to convey information to the player using less text, in service of the well-worn advice "show, don't tell." For example, one of Blizzard's most crucial storytelling tools, phasing -- a system by which a particular player's perception of the game world differs from that of other players based on his or her accomplishments in the game -- was the inadvertent result of a simple bug fix. Gamasutra sat down with Afrasiabi to discuss Cataclysm's extensive scope, how Blizzard prioritizes content development, and how a bug fix became a game design linchpin. I imagine "lead world designer" is a particularly involved role in the new expansion, since it seems to be the most significant redesign of existing World of Warcraft content yet. Is that the case? Alex Afrasiabi: Oh, absolutely. You might go so far as to say any MMO ever, for an expansion. I would say every zone in the old world is hit one way or another at varying degrees, from complete redos like Darkshore and Azshara, to moderate questing changes like Feralas, to moderate redos of the terrain and the quests, to light -- but even "light" is debatable -- [modification] in Loch Modan. Every zone is hit by this cataclysm to some degree. The cataclysm starts out with rumbles, and what those rumbles are are the stirrings of Deathwing beneath the world. He's in this elemental plane of earth locked away in Deepholm. When he finally breaches into terrestrial Azeroth, it causes that gaping wound on the surface of the world -- a cataclysmic shockwave that hits pretty much everything. It's Deathwing, the world-breaker, who is the chief source of this destruction. So you've got that lore. But how do you determine from a development standpoint how to translate that into design, asset production, writing, and so on? How do you determine what areas are higher-priority for more extensive recreation? AA: I don't want to say we play it by ear, because we really don't. We know our game really, really well. We've had a lot of time now -- five years, more really -- in development to hone our skills. Each expansion, in my eyes, gets progressively better. We become better designers of the content. We understand what the players want from our quests and our content, and we try to provide that. We really know our game very well, and that includes [level] one to 60 [zones]. The first thing we did when we set out to do this was prioritize. You basically get that big list of zones, and you give them that -- "This [zone] is a five, the worst. This is just a mess." Like Darkshore. And then, "This [zone] is a one. Moderate work." We basically make this huge prioritized list, and then go through it. Other mitigating factors come into play, of course. What do you do with Silithus? It's a [level] 55-to-60 zone in this expansion. Is it as relevant? We almost have to triage the zones. We know what our production schedule is roughly -- I'm not going to tell you what that is [laughs]. 
But we have to triage the zones. A zone like Silithus is probably not the best of zones right now. I'd go on to say it's actually pretty bad. It's not as important a zone like Azshara. So it will probably get less of the treatment, because the people who reach that point are probably going to somewhere else at that point [anyway]. It's not as important as Aziara, which is now a [level] 10-to-20 zone for the Horde. That absolutely was terrible before, and now it's got to be amazing. So as you said, you guys have been doing this for well over five years now, and you've learned a lot. What are some of the things you've learned about MMO design, particularly when it comes to conveying story in an integrated, interesting way? AA: The most important one, I think, and this is just from sitting at meetings -- any new guys who come in, they always have that urge to tell their story. "I'm going to tell this amazing story. It's going to make you weep when you read it." That's when I stop them right there. I'm like, "Stop right there. Nobody's going to read whatever you're trying to do. It could be the greatest thing since Hemingway. Nobody cares. Nobody cares. Nobody's going to read it." You have to take a different approach, and you show the player that. It's the old adage: show, don't tell. You show them. It's a different world. That's when you're starting down the right path. When we first started doing this, sure we knew it, but we didn't understand it. There's a difference, and it only really comes from practice. It's almost a zen thing with the quest guys at this point, where it's a [matter of] "Do this quest without any text." Just blindfolded. "Do this quest, and let's see if I even know what's going on. Create something. What's going on? Can I tell if I'm entering this room or entering a point of interest? What am I looking at? What is happening?" I think that's improved our design vastly over the years. Of course, we're still going to have text, but we're not dependent on it. As we advance our technology, too, with quest map [points of interest] and things like that, we'll become less dependant on it. Because right now, what we use it for is as a means of direction. Certainly, we will provide story and lore when we can, but we want to provide that in the actual act of doing the quest. The one thing we still can't decouple from it is directions -- where do you go? But we're getting there. That's certainly something MMOs struggle with -- are people going to bother with the text? It seems like with Cataclysm, that's got to be almost the whole point of the expansion almost. A huge part of the experience as the player is seeing how everything has changed. Can you talk about any design tools or methods you use to strive for that? AA: Absolutely. It's actually interesting. Initially, we created phasing as a bug fix. It was used to fix a bug with the Blade's Edge quest. That was it. Case closed, right? There was this bug, we couldn't solve the problem, and one of our programmers -- a brilliant guy -- implemented this system. Nobody thought twice about it. [Expansion pack] Wrath [of the Lich King] rolls around, and we're in early alpha. We're getting feedback from the team, and one of my friends on the team is talking to me about [the] Howling Fjord [zone], and he's irate. He's saying, "I can't believe this. I go into [capital city] Valgarde, and I keep getting trained by these [native enemies] Vrykul. I killed them, and I did the quest. Why do I keep running into them?" 
It seems really kind of innocuous. "Yeah, of course. That's how the game works. There's an event playing out. Even though you've done the quest, these events don't stop." But that's kind of what got me to start to more seriously approach it. It was almost a blow to the gut. I was aware of it. It was almost a challenge at that point. How could we change the world for the player so that it actually dynamically alters, so they can actually say, "I did take that quest to kill those Vrykul, and once I did that, guess what? They're gone. They're no longer there." That was all the fire that was needed. From there, it was experimentation. It's funny. If you really break down how Lich King went, the way we tackled zones, we did Howling Fjord, Borean Tundra, and Dragonblight, in that order essentially, during development. Once you get to Dragonblight, you start seeing some of those effects. You start seeing a lot of invisibility -- not phasing -- because at the time, that phasing thing still hadn't clicked. But you start seeing more and more of it. When you get into Wintergarde, you rescue captives or villagers first. Once you bring them into town, the town actually changes. After that, we went onto [the then-new] Death Knight [class], and it was almost a proof of concept at that point. How can we do this? This obscure bug fix just popped up. We were thinking, "What about that? Could that work?" Sure enough, we did a quick run through with a test, went through from one phase to the next, and we said, "Wait a minute. This actually did change, and it totally worked. Okay. We might have something here." From there, phasing was born, essentially, in its current [form]. It became a great tool for us, to be able to tell stories like the battle for the [Undead capital] Undercity. You go to [Orc capital] Orgrimmar, and it's completely phased out into another phase, and you have all these [undead] Forsaken refugees pouring in instantly. You don't need to read anything. You just look. Forsaken refugees are on the floor, begging you for help. The Horde are all rounded up. Shops are all closed -- just straight up just closed, can't use them. Guards direct you to the other cities. It's exciting. That was a big one. So our tools have essentially gotten better. Using phasing is one example, but there are many advancements like that. Vehicles have taken a lot of flak -- some good, some bad. For things that are used for that the player never actually controls, they're actually a very powerful tool for us. An example of a vehicle is the Kologarn -- that's a boss where you have the arms separate from the body. Just using the vehicle tech, he's actually technically just one big vehicle with two passengers as his arms. Again, it allows us to tell this greater story. It's no longer just that boss -- his arm breaks off, and then his other arm breaks off. The technology is definitely improved, and it's helped us tremendously, I think. Were you apprehensive at any point in taking something that was basically a bug fix, and resting so much of the game on it? AA: Well, of course we're apprehensive. But the thing to me is that advances in our industry, and in Blizzard, don't necessarily stem from ideology. Ideology is a powerful thing, and it keeps us rooted. It keeps the foundation firm. But it's ingenuity and deviance, dare I say, that pushes you beyond that. So, you take something like this where it's just this thing for a bug fix and you deviate -- you say, "What if we can do something else with it?" You have to push it. 
You have to. You have to chase it down and see where it goes. A lot of times, it is a dead end. You'll chase something down, and you're like, "Ah, it didn't work out. Too bad." But you have to push that bounds because otherwise, your game won't grow. That's that. In that vein, people have criticized Blizzard at times for being a relatively conservative developer, design-wise. How would you respond to that? AA: Well, like I said, we do have a very strong ideology. We are firm in our beliefs, and we won't release a game until it's done -- you've heard that said time and time again -- but we mean that. That implies a lot of things. So we'll certainly take the criticism for it, but I think in the end, the result is often great.
计算机
2015-48/1913/en_head.json.gz/7577
Chalk Farm '84
Chalk Farm '84 (CF'84) is the most commonly-used 'official' ruleset, although the amended Holland Park 2000 ruleset using the blocked understrile to the Croydon Tramlink has gradually begun to gain greater acceptance. In its day, the set of rules drawn up by the IMCS and Mrs Trellis in a meeting at Chalk Farm in 1984 was regarded as the greatest of its type. It was intended to be the standard ruleset for all MC everywhere, and was almost totally successful in this aim. Indeed, even now there are many who hold to these as being the most consistent, straightforward and fair rules ever written down, and many MC tournaments and clubs still treat them as standard. CF'84 is also the only ruleset of recent vintage to have been accepted by both CAMREC and the IMCS. However, in recent years there has been a tendency away from Chalk Farm '84 on the grounds that it no longer reflects the reality on the Underground, as several changes have happened to the network since then. The creation of the Hammersmith & City line as a separate line (it was formerly part of the Metropolitan) did not unduly affect things, nor did the changes to the various peak-hour schedules. (For instance, the Metropolitan line between Baker Street and Aldgate, and the Hammersmith & City line up to Barking were changed to run at standard London Underground hours, with the stations closing at around 11:00pm.) The closure of several stations affected things a little more – in particular there are far fewer Amersham-Aldwych loops now Aldwych is a ghost station, and the Ongar Denial is no longer as easy a way out of a Dollis Hill for the same reason. However, when the ghost station rules were applied, neither of these changes required a rewrite of the rules as a whole. Of greater concern was the long extension to the Jubilee Line, and the admission of the Docklands Light Railway to full Underground status. This seriously unbalanced quadrant four, and the ruleset proved too inflexible to cope. The general answer from the IMCS has been to consider the DLR an honorary rather than actual part of the Underground, somewhat like the North London line, and bring the Jubilee extension under the foetal station rules. However, many players' committees did not accept this change, and have proposed various amendments to bring both the DLR and North London lines into the fold, some of which have been accepted by the IMCS (but none by CAMREC.) Unfortunately, none of the amendments has been completely bug-free – for example, the 1997 set generated a loop around [Leicester Square]? (!). Perhaps the most successful amendment was the Finsbury Option amendment of 1988, which introduced a fifth 'quadrant' to hold the Jubilee extension and the Docklands Light Railway. This amendment was used successfully on the York MC server until its closure. [JLE]
Categories: A to Z, Rulesets
计算机
2015-48/1913/en_head.json.gz/7629
Research shows that computers can match humans in art analysis Release Date: March 18, 2013 Jane Tarakhovsky is the daughter of two artists, and it looked like she was leaving the art world behind when she decided to become a computer scientist. But her recent research project at Lawrence Technological University has demonstrated that computers can compete with art historians in critiquing painting styles. While completing her master’s degree in computer science earlier this year, Tarakhovsky used a computer program developed by Assistant Professor Lior Shamir to demonstrate that a computer can find similarities in the styles of artists just as art critics and historian do. In the experiment, published in the ACM Journal on Computing and Cultural Heritage and widely reported elsewhere, Tarakhovsky and Shamir used a complex computer algorithm to analyze approximately1,000 paintings of 34 well-known artists, and found similarities between them based solely on the visual content of the paintings. Surprisingly, the computer provided a network of similarities between painters that is largely in agreement with the perception of art historians. For instance, the computer placed the High Renaissance artists Raphael, Da Vinci, and Michelangelo very close to each other. The Baroque painters Vermeer, Rubens and Rembrandt were placed in another cluster. The experiment was performed by extracting 4,027 numerical image context descriptors – numbers that reflect the content of the image such as texture, color, and shapes in a quantitative fashion. The analysis reflected many aspects of the visual content and used pattern recognition and statistical methods to detect complex patterns of similarities and dissimilarities between the artistic styles. The computer then quantified these similarities. According to Shamir, non-experts can normally make the broad differentiation between modern art and classical realism, but they have difficulty telling the difference between closely related schools of art such as Early and High Renaissance or Mannerism and Romanticism. “This experiment showed that machines can outperform untrained humans in the analysis of fine art,” Shamir said. Tarakhovsky, who lives in Lake Orion, is the daughter of two Russian artists. Her father was a member of the former USSR Artists. She graduated from an art school at 15 years old and earned a bachelor’s degree in history in Russia, but has switched her career path to computer science since emigrating to the United States in 1998. Tarakhovsky utilized her knowledge of art to demonstrate the versatility of an algorithm that Shamir originally developed for biological image analysis while working on the staff of the National Institutes of Health in 2009. She designed a new system based on the code and then designed the experiment to compare artists. She also has used the computer program as a consultant to help a client identify bacteria in clinical samples. “The program has other applications, but you have to know what you are looking for,” she said. Tarakhovsky believes that there are many other applications for the program in the world of art. Her research project with Shamir covered a relatively small sampling of Western art. “this is just the tip of the iceberg,” she said. At Lawrence Tech she also worked with Professor CJ Chung on Robofest, an international competition that encourages young students to study science, technology, engineering and mathematics, the so-called STEM subjects. 
“My professors at Lawrence Tech have provided me with a broad perspective and have encouraged me to go to new levels,” she said. She said that her experience demonstrates that women can succeed in scientific fields like computer science and that people in general can make the transition from subjects like art and history to scientific disciplines that are more in demand now that the economy is increasingly driven by technology. “Everyone has the ability to apply themselves in different areas,” she said.
计算机
2015-48/1913/en_head.json.gz/7636
PROFESSIONAL EDITION | March 12 | Issue 3 Live Webinar - Driveline Modeling and Simulation: Challenges and Solutions for HIL Simulation Wednesday, March 21st, 2012 at 10:00am ET In this webcast, Dr. Derek Wright, MapleSim Product Manager, and Dr. Orang Vahid, Senior Modeling Engineer, at Maplesoft will present some of the challenges faced by hardware- and software-in-the-loop (HIL and SIL) simulation of complex driveline models. These models present unique difficulties to engineers wishing to do control design and validation via HIL and SIL simulation. Specifically, there is often a trade-off necessary between model fidelity and simulation speed that may necessitate model approximations and result in reduced effectiveness of the designed controllers. Discussion will include the role of Modelica and acausal modeling in achieving manageable physical models and how symbolic formulations can increase simulation speed without reducing model fidelity. Dr. Derek Wright, MapleSim Product Manager, Maplesoft Dr. Derek Wright received a Ph.D. degree in the collaborative electrical and biomedical engineering program at the University of Toronto, Canada. His research has focused on the physics of medical imaging, and he has also worked in the areas of robotics, control, analog and digital VLSI and board-level circuit design, and CCD cameras. Dr. Orang Vahid, Senior Modeling Engineer, Maplesoft Dr. Orang Vahid holds a Ph.D. in mechanical engineering from the University of Waterloo. He has over 10 years of experience in industry covering advanced dynamical systems, friction-induced vibration and control, automotive noise and vibration, and mechanical engineering design. Maplesoft Events ICTCM March 23rd-25th 2012 ECEDHA Annual Conference and Expo March 24th-27th Austin, Texas SAE 2012 World Congress April 24th-26th Austin, Texas For further details about these events, click here. Information Sheets HIL Datasheet MapleSim produces high-performance, royalty-free code suitable even for complex real-time simulations, including hardware-in-the-loop (HIL) applications. To access all Maplesoft Information sheets, click here. Community Social Networks Recorded SAE Webinar: Physical Models of Automotive Batteries for EV and HEV Applications The automotive industry is in transformation. New generation vehicles deploying hybrid (HEV), fully electric (EV), and fuel cell powerplants are presenting significant challenges to automotive engineers. The complexity of the automobile has increased exponentially in the past decades with higher performance components and digital control. This has triggered a design revolution in the industry which stresses detailed modeling and simulation steps prior to committing to metal and plastic. With new generation vehicles the need for advanced physical modeling solutions is considerably greater due to increasing system complexity, and the battery is becoming one of the most critical components in alternative powerplant development. This webinar covers new approaches to modeling and simulation for hybrid electric vehicle (HEV) and fully electric vehicle (EV) applications with particular emphasis on the development of high fidelity physical models of automotive batteries. Developed as part of the research program for the NSERC/Toyota/Maplesoft Industrial Research Chair for Mathematics-based Modeling and Design, these models will plug into comprehensive system models for HEV and EV applications. Conventional lead-acid, but also NiMH and Lithium battery models will be presented. 
VIDEOS Video: Powerful Control Systems Design Tools From simple and intuitive model entry to insightful and complex analysis, this brief demonstration will illustrate how these tools allow you to assess the stability, controllability and sensitivity of your engineering system designs. BLOGS Rebound Rumble: To Drive or Not to Drive Dr. Lai is using MapleSim and Maple to help the team understand the principles involved and design their basketball-shooting robot. There are a number of factors governing the trajectory of a projectile. For the purpose of the competition, each team is really looking at what combinations of the shooter launch speed and launch angle for a given location within the game arena would be best to maximize the scoring. MAPLESOFT WEBINAR SERIES Live Webinars Featured: Driveline Modeling and Simulation: Challenges and Solutions for Hardware-in-the-Loop Simulation Wednesday, March 21 at 10:00am ET Industry Applications of Maple 15 Tuesday, March 20, 2012 at 10:00 am ET Automotive Applications of MapleSim. Part 2: High Fidelity Vehicle Dynamics Thursday, March 29, 2011 at 1:00 pm AEST A Guide to Evaluating MapleSim 5 Tuesday, March 27, 2011 at 1:00 pm AEST Modélisation, simulation et contrôle d’un robot gyropode avec MapleSim Modeling High-Fidelity Models for Aerospace Mechatronics Applications MAPLESOFT IN THE PRESS NASA’s Jet Propulsion Laboratory Deploys Maplesoft Technology Desktop Engineering, January 26, 2012 JPL is implementing Maple, MapleSim and MapleNet in its various projects. Whether creating America's first satellite, Explorer 1, sending the first robotic craft to the moon or exploring the edges of the solar system, JPL has been at the forefront of pushing the limits of exploration. Maplesoft offers testing and assessment software for technical education The Engineer (UK), January 4, 2012 Maple T.A. 8 testing and assessment software from Maplesoft is designed for technical education and research, and aims to help instructors to improve students’ comprehension. Slapshot robot aims to create game-changing hockey sticks DesignFax, December 20, 2011 MapleSim played a critical role in the design and development of the SlapShot XT. MapleSim allowed Hockey Robotics to efficiently and accurately simulate the coupled dynamic electrical and mechanical behavior of the equipment. www.maplesoft.com 1-800-267-6583 (US & Canada) | 1-519-747-2373 (Outside US & Canada) � 2012 Maplesoft, a division of Waterloo Maple, Inc., 615 Kumpf Drive, Waterloo, ON, Canada, N2V1K8. Maplesoft, Maple, MapleSim, Maple T.A., MapleNet, and MaplePrimes, are trademarks of Waterloo Maple Inc. All other trademarks are property of their respective owners. You are receiving this newsletter in an effort to keep you up-to-date on the latest developments at Maplesoft. To manage subscriptions or to opt out of all commercial email communications from Maplesoft, please click here. To view our privacy policy, click here.
计算机
2015-48/1913/en_head.json.gz/7965
Crytek considered free-to-play multiplayer for Crysis 2 and 3
Crytek CEO Cevat Yerli talks about making Crysis free-to-play, and how it will make the franchise more accessible.
Free-to-play is the future for Crysis developer Crytek. While Crysis 3 will be a full retail release published by Electronic Arts, the developer considered implementing F2P elements into the upcoming shooter. "We even considered a standalone free-to-play version for Crysis 2, to be honest. Launching the single-player as a packaged good and then making multiplayer free-to-play-only," Crytek CEO Cevat Yerli said. "We also considered that for Crysis 3, and it didn't happen again." Sony has recently made standalone multiplayer versions of its online shooters, like Killzone 3 and Starhawk. However, it's likely that the lack of F2P play on consoles made it impossible for Crytek to implement F2P into their last two retail releases. The next Crysis game promises to be radically different, but will it be F2P? Yerli isn't sure yet, telling RPS that "it's too early to say," but promised that "when I said free-to-play's gonna be our future, I meant that and I hold to it." One of the reasons why Crytek is so adamant about moving to a microtransaction model is that it will eliminate the desire for people to pirate their games, a problem that has plagued Crytek for years. Both Crysis and Crysis 2 were pirated on a massive scale, but a switch to F2P will remove the incentive to illegally download Crytek's future games. "My desire is that everybody can just play Crysis and don't have to spend money from day one," he said. "I just want them to be able to give it a try. And then they can make their choices about spending money."
计算机
2015-48/1913/en_head.json.gz/8039
THE NEW LEAKY
Feb 01, 2006 | Posted by: Melissa Anelli | Uncategorized
WELCOME to the new and very much improved Leaky Cauldron! We have been absolutely dying to get this version released and out to you (yes, this is the project code named Fiddy Five). We don't even know where to start telling you what's new about it – but we think there are several things you're going to notice right away. So go on and click around, get comfortable, keep your fingers crossed that we don't implode, and then come back here and read some more. We know there will be bugs. We know there will be errors. We know there will be things we've missed and things you'll find – that's the fun (ha ha, sure) of a redesign. Please bear with us as we iron the kinks, as they say. There's a known one with Macs (yes, grr, I know) that will eventually be fixed; it works better on Firefox than on Safari. You might have noticed that our address changes to a number, and that our PotterCast feed is not coming up right now; this all has to do with the changeover and the connecting of servers, which will be complete very soon – so just hang tight. Thanks. So, welcome to the new Leaky, our fifth design. Except, this time, for the first time, it's not a redesign. It's more like a rethink. It's a re-do. It's an entirely new site, designed for development, designed to be fast, designed to be the site you need to carry you through the rest of the books and the movies. This does not mark the end of our work but of the very beginning – this is a tiny bit of what it will eventually include. For more of that, read our Guide to Leaky Five, which is also linked in the left sidebar. For now, we have some very important people to thank. These people are some of the most amazing people I've ever had the privilege to work with or know. The amount of work and dedication they have put in over the past months, bringing this to you, rivals anything we should ever expect, and anything we should ever desire. Since they are so wonderful I'm going to take my sweet time thanking them here. I feel nothing but luck that they ended up on Leaky's doorstep. JOHN NOE. His name is in all caps for a reason, and for once, it's not because I'm getting annoyed at him on PotterCast. No, this entire redo is John's fault. He conceptualized it and brought it to me more than six months ago. The amount of work he has put into this, and the number of brilliant…stuff…(because really, how else do we characterize what goes on in John Noe's head?) he's done since have been overwhelming. The last design was John Noe's entree to Leaky; he e-mailed us with it one night and instantly became a staff member and our new designer. I'd be lying if I said I wasn't very proud to see him show what he's capable of two years later. NICK POULDEN. When Nick Poulden joined staff as a programmer, we all thought he was pretty smart. Now we know he's somewhere in the Weird Crazy Genius category. He recoded this entire site – he started everything from scratch, started with a completely blank slate. And the results – well, you can click around and discover the results. This site is coded using some of the newest techniques out there, all of which Nick had already mastered, because he has been able to modify them. He has bent them to his will! All we ever had to say to Nick was, "I'd like to do X Y and Z," and he'd say, "All right." No, "That's not possible," no, "You're insane, lady," no, "What?" Just a constant stream of, "Of course we can do that"s. It's been amazing. Alex Robbin. 
He used to just bother us and ask if he could do anything to help Leaky – honestly, we all thought he was sort of annoying, and too young. But, like every one of the integral people on this site, he started making his talents known whether we liked it or not – and he has become one of this site’s very best friends. This update would Not. Be. Possible. without the sweat he’s put into it over recent weeks. Even as this thank-you is being written, he is off doing last minute bugs, coding last-minute things. You can’t find people like this – they usually come knocking on your door. Alex nearly battered ours down, but we’re so glad he did. Doris Herrmann. You’ll hear more about this lady soon, which is why I’m not saying much – you can visit the About Us page for more. She is our new Project Coordinator, and she was VITAL to getting this off the ground. Heather Campbell. Leaky’s newest designer, she worked to our deadlines like a champ and put up with all our nitpicking like she’s been doing it for years. Her art is everywhere on this site, and will be in more places soon. Sue Upton and Julie Tynion. These ladies wrote so much new content – all for your finding, all around the site – and all under deadline, all while doing the proper research and all with such cheer, that we’re amazed. They will be spearheading your new newsletter, Owl Post, and they will be adding a lot more content to the site very soon. And they always do so with a smile. Kim M. Parker. She’s not just everyone’s last name (listen to PotterCast to get that one) – she’s also been a terrific resource, organizing a lot of staff and a lot of information, and being an extra pair of eyes and hands when we’ve needed it most. DH at Idologic – for putting up with so much, all the time. Nick Rhein, for controlling the madness in the forum while this has been going on, and for helping get running what we’ll all soon know as Scribbulus… To the PotterCast forum at Leaky Lounge – for enjoying the heck out of the teasing we’ve done for the past two weeks. You’ve made all the stress manageable (and given us almost 5,000 posts to read). Thanks.The Floo Team. They were very patient while we were all sucked up by this. And to a ton of people on our forums, our elves, and a lot of other helpers who did odd jobs whenever they came up: Kyrane, Trozam, Memyslfni, KimmyBlair, sunny_elf, Jackdoor, Chelikins, Julie, Tanaqui, LisaQQQ, Lexcion_Bel~, Cncrpl, Anguinea, Asphodel Wormwood, Kelazma, Guru of Sloth, Hpaddict, minime, Lilly, Mr. Internet, Ask Jeeves, The MuggleBoys (Andrew, Ben, Kevin, Micah and company, for putting up with a LOT of our teasing, and for offering us great moral support), Joseph (for modeling for the “kissy face Harry”), Peeves, the Chipotle people, Starbucks, Sirius Black, and St. Mungo’s, for lending us all those stress-relieving potions. Ahh. Well, that’s it from the podium. We hope you enjoy this. It took…well it took all these people and more many months to create. We hope you enjoy it for a lot longer than that.
计算机
2015-48/1913/en_head.json.gz/8311
Matthew Duncan ~ Graphic Designer & Digital Strategist
Still Photo Blog
Posted by Matthew Duncan in Multimedia Storytelling

An iconic image brings a moment to life. The image is usually easily recognized and generally represents an object or concept with great cultural significance to a large group of people. Most of the time, the object or person in the image is regarded as having a special status as particularly representative of, important to, or loved by, a particular group of people, a place, or a period in history. Julia DeIuliis, writer for Quora.com, explains that there are three reasons that make an image iconic. The first is that it perfectly captures an event or artistic style. If an image represents and tells the story of an event or particular style, then that image will become referenced or discussed anytime that event or style is discussed. In all future discussions of the topic or style, people need to discuss that image, and eventually, that image becomes emblematic, representative, or synonymous with the topic being discussed. Eventually, that strong association is part of what makes the image iconic. The second reason an image becomes iconic is that a lot of people need to have seen it and be familiar with it. It should be the norm that people have seen it and know what it is. Because of its ubiquity, other artists reference it in their own work. If something becomes parodied a good deal, that's one form of reference, and it's a sure sign that something is on its way to becoming iconic. Alternatively, artists could reference the image in their work to help users feel an immediate familiarity with that work. The final reason an image can be iconic is that it has an impact on public opinion. It's very rare that an image directly results in action, so if an image is striking, visceral, and temporal enough to do so, it's on its way to becoming iconic.

The Kiss by Alfred Eisenstaedt - 1945
This photo is iconic because it represents a significant time frame in our country's history. On August 14, 1945, Japan's surrender was announced, signaling the end of World War II. The photo depicts the celebration of this event as a young man runs down the streets of downtown New York grabbing any and every girl in sight. The media portrayed the photo and story as a sailor reuniting with his long-lost love, which it was not, but the image became an enduring symbol of America's exuberance at the end of a long struggle.

Migrant Mother by Dorothea Lange - 1936
This photo became an iconic image of the Great Depression. At the time, the 32-year-old woman was a widow with seven children. Forty years later, the woman was identified as Florence Owens Thompson.

Black Power salute by John Dominis - 1968
This photo was taken during the 1968 Olympics in Mexico City. Athletes Tommie Smith and John Carlos are shown making a Black Power salute as a civil rights protest while receiving the gold and bronze medals in the 200m dash. The athletes were later banned from the Olympics for making the political statement. The event was one of the biggest political statements in the history of the modern Olympic Games.
计算机
2015-48/1913/en_head.json.gz/8573
Business Intelligence in a Social eCommerce System
The Monthly Meeting of the Business Intelligence SIG
Tuesday, February 21, 2006 - 06:30 PM
Cubberley Community Center, 4000 Middlefield Road, Room H-1, Palo Alto, CA
Neel Sundaresan, Distinguished Research Scientist, eBay Research Labs

Presentation Overview
This talk discusses the nature of a social ecommerce system, of which eBay is a prime example. Buyers and sellers come together to look for, bid and buy, or to list and sell products, creating a phenomenon of electronic interaction for the purpose of trade. Unacquainted users are connected through the transactions and associated feedback into circles of familiarity or trust for this purpose. We discuss the tools and techniques required to build an intelligence model behind this network to better understand these buyer and seller behaviors, enhance user experience and promote better commerce relationships. (A toy illustration of such a buyer-seller network appears after the event details below.)

About the Presenter
Neel Sundaresan is a Distinguished Research Scientist at the eBay Research Labs. Dr. Sundaresan's current work focuses on intelligence in electronic commerce systems. His startup company experience includes the role of a CTO building intelligent search systems. Before that he was a research manager of the eMerging Internet Technologies department at the IBM Almaden Research Center, where he pioneered several XML and Internet related research projects. He was the chief architect of IBM's XML-based search engines. He received his PhD in Computer Science in 1995. His research and advanced technology work have been diverse, including Compilers and Programming Languages, Parallel and Distributed Systems and Algorithms, Information Theory, Data Mining and Semi-structured Data, Speech Synthesis, Agent Systems, and Internet and Electronic Commerce Systems. He has authored over 40 research publications and has given several invited and refereed talks and tutorials at national and international conferences. He is a frequent speaker at user groups and industrial conferences. He has been a member of the W3C standards effort.

Event Logistics
Cubberley Community Center, 4000 Middlefield Road, Room H-1, Palo Alto, CA
6:30 - 7:00 p.m. Registration / Networking / Refreshments (please arrive before 7:00 p.m.)
7:00 - 8:30 p.m. Presentation and Discussion
$15 at the door for non-SDForum members
No charge for SDForum members
More on the Business Intelligence SIG....
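As a toy illustration of the kind of network the talk describes (and emphatically not eBay's actual system), the sketch below builds a buyer-seller graph in Python from a made-up transaction list and derives a crude per-seller reputation score; every name and feedback value in it is invented for the example.

```python
# Toy illustration (not eBay's actual system): build a buyer-seller graph from
# a list of transactions with feedback, then compute a simple per-seller
# reputation score and the set of buyers "connected" to each seller.
from collections import defaultdict

# (buyer, seller, feedback) -- feedback is +1 positive, -1 negative
transactions = [
    ("alice", "shop_a", +1),
    ("bob",   "shop_a", +1),
    ("alice", "shop_b", -1),
    ("carol", "shop_a", +1),
    ("carol", "shop_b", +1),
]

def build_network(txns):
    buyers_of = defaultdict(set)      # seller -> set of buyers (the "circle")
    feedback = defaultdict(list)      # seller -> list of feedback scores
    for buyer, seller, score in txns:
        buyers_of[seller].add(buyer)
        feedback[seller].append(score)
    return buyers_of, feedback

def reputation(scores):
    """Share of positive feedback: a crude stand-in for a trust metric."""
    positive = sum(1 for s in scores if s > 0)
    return positive / len(scores)

if __name__ == "__main__":
    buyers_of, feedback = build_network(transactions)
    for seller in buyers_of:
        print(seller,
              "circle:", sorted(buyers_of[seller]),
              "reputation: %.2f" % reputation(feedback[seller]))
```

A real marketplace model would of course weigh far more signals (price, recency, category, dispute history), but the basic move is the same: turning raw transactions and feedback into a graph plus per-node metrics.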
计算机
2015-48/1913/en_head.json.gz/9400
MyDoom Author Gets Tricky With DoomJuice
Sophos virus experts have an interesting theory on a peculiar payload of the W32/Doomjuice-A worm. The Doomjuice worm drops a copy of the prevalent W32/MyDoom-A's source code onto infected computers, possibly in an attempt to make it more difficult to convict the true author. The Doomjuice worm drops a compressed copy of MyDoom's C source code into a number of directories on the infected user's PC. Detectives investigating the authorship of the MyDoom worm would normally treat discovery of the source code on a computer as a significant clue. "There is already a $500,000 reward for information leading to the conviction of MyDoom's author," said Graham Cluley, senior technology consultant for Sophos. "If he has spread his code around the net onto innocent computers in an attempt to hide in the crowd, then he's more sneaky than the average virus writer." "The other possibility is that MyDoom's author is spreading the code to encourage others to write copy-cat viruses which try and mimic MyDoom's global spread. The need for sensible security policies and multi-tier virus protection has never been greater," continued Cluley. The Doomjuice worm attempts to launch a distributed denial of service attack against Microsoft's website: www.microsoft.com
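Purely as an illustrative aside: an administrator worried about the behaviour described above, an unexpected archive quietly dropped into many directories, might sweep a machine with a short script such as the sketch below. It is written in Python, the file-name patterns in it are hypothetical placeholders rather than the worm's real artefact names, and real detection should rely on up-to-date anti-virus signatures rather than an ad-hoc script.

```python
# Illustrative sketch only: walk a directory tree and report files whose names
# match a suspicious pattern. The patterns below are hypothetical placeholders,
# NOT the actual names used by the Doomjuice worm; real detection should use
# current anti-virus signatures.
import fnmatch
import os
import sys

SUSPICIOUS_PATTERNS = ["*-src-*.tbz"]   # hypothetical placeholder pattern

def sweep(root):
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if any(fnmatch.fnmatch(name.lower(), p) for p in SUSPICIOUS_PATTERNS):
                hits.append(os.path.join(dirpath, name))
    return hits

if __name__ == "__main__":
    start = sys.argv[1] if len(sys.argv) > 1 else "."
    for path in sweep(start):
        print("possible dropped archive:", path)
```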
计算机
2015-48/1913/en_head.json.gz/9604
IANA Report on Redelegation of the .ke Top-Level Domain

IANA Report
Request of Kenya Network Information Center for Redelegation of .ke Top-Level Domain

The Internet Assigned Numbers Authority (the IANA), as part of the administrative functions associated with management of the domain-name system root, is responsible for receiving requests for delegation and redelegation of top-level domains, investigating the circumstances pertinent to those requests, and reporting on the requests. In June 2002, the IANA received a request for the redelegation of the .ke (Kenya) country-code top-level domain (ccTLD). This report gives the findings and conclusions of the IANA on its investigation of that request.

Factual and Procedural Background

The .ke ccTLD registry was first delegated by the IANA in April 1993 to Dr. Shem J. Ochuodho, Kenya, as administrative contact, and Mr. Randy Bush, United States, as technical contact. At that time and today, that two-letter code was and is set forth on the ISO 3166-1 list maintained by the ISO 3166 Maintenance Agency as the approved alpha-2 code for Kenya. Since the initial delegation, Dr. Ochuodho has served in a voluntary capacity as the administrative contact for the .ke ccTLD. Likewise, from 1993 to the present, Mr. Bush has generously donated his time and energy to serve as technical contact and provided a free domain-name-registration mechanism and associated DNS services for the .ke registry. In May 2000, a group of Kenyan Internet stakeholders launched an initiative to form a participatory, community-based non-profit organization located in Kenya to manage both the administrative and technical aspects of the .ke ccTLD registry. Since October 2001, there have been broad-based consultations and research led by the Communications Commission of Kenya (CCK), with the participation of stakeholders including the Telecommunications Service Providers Association of Kenya (TESPOK), the East Africa Internet Association (EAIA), Kenya Information Society (KIS), Kenya Education Network (KENET), the Computer Society of Kenya, the Institute of Computer Science, the Kenya Health Information Network, the Network Operators Association, Telkom Kenya, the Kenyan government's Directorate of Information Technology Services, and the National Task Force on Electronic Commerce (NTF-ecom). The result of these consultations was the Kenya Network Information Center, Limited (KENIC), organized under Kenyan law as a company limited by guarantee (a not-for-profit entity). In addition to performing the technical, administrative, and policy-setting functions for the .ke registry, a stated objective of KENIC is to "promote, manage and operate the delegated .ke ccTLD in the interest of the Kenyan Internet community and being mindful of the global Internet community interest in consistent with ICANN policies." Through the KENIC website, open mailing lists, Steering Committee and other organizational meetings, and public forums, the KENIC organizers undertook to develop technical and administrative plans, and to take input from and build support within the Kenyan Internet community. By mid-2002, the KENIC organizers had completed KENIC's Memorandum & Articles of Association, and prepared and circulated for review and comment a draft annual budget for registry operations and a draft set of registration and administrative policies. 
Through the Computer Society of Kenya, an open membership organization, the organizers undertook a public awareness campaign aimed at increasing the involvement of individual and organizational Internet users in KENIC. To get off the ground, KENIC has relied upon contributions from its various members and supporters. For example, the member Internet service providers of TESPOK pledged to contribute engineering talent to establish KENIC's technical operations, and to provide a dedicated link between KENIC and the Kenyan Internet Exchange Point (KIXP). The Computer Society of Kenya pledged to pursue some donations of hardware from its members. Telkom Kenya committed to supply two independent upstream links to the global Internet. And the Communications Commission of Kenya pledged an initial allocation of 10 million Kenyan shillings (approximately US $110,000) to fund the start-up of KENIC. According to the organizers, one of the motivations for the KENIC process was a growing dissatisfaction in the Kenyan Internet community with the unresponsiveness of the current administrative contact, Dr. Ochuodho, to the needs of the local Internet community. The KENIC organizers have sought to involve Dr. Ochuodho in the community-based consultation process, repeatedly stressing their belief that Dr. Ochuodho deserves much credit for his dedicated and selfless labor to bring Internet connectivity and services to Kenya, and to establish the .ke ccTLD registry. However, by the late 1990s Dr. Ochuodho's commitments and demands on his time had increased, seemingly limiting his ability to administer the .ke ccTLD in a manner that fulfills the growing needs of the Kenyan Internet community. Among these increased responsibilities were heading up the African Regional Centre for Computing (ARCC) and serving as a member of Kenya's national parliament. For these understandable reasons, as he himself has noted in conversation with IANA representatives, Dr. Ochuodho ceased to be as active and accessible a participant in Kenya's rapidly expanding Internet community as would be expected of a ccTLD delegee under these circumstances. Above all, the organizers of KENIC expressed frustration that Dr. Ochuodho had failed to engage in dialogue with the Kenyan Internet community about any aspect of the .ke ccTLD. Through the KENIC process, the KENIC organizers regularly and repeatedly invited Dr. Ochuodho to participate in their initiative. He was invited to make a presentation or otherwise take part in organizational meetings of the Steering Committee, to observe or speak at KENIC's open community forums, to communicate his concerns or suggestions via e-mail, and to join the board of directors of KENIC. Dr. Ochuodho declined to respond to these invitations, and did not attend any of KENIC's organizational meetings or open community forums. The KENIC organizers regularly sent Dr. Ochuodho updates on their activities and minutes of meetings and included him on their mailing lists, in the hope that he might choose to participate or otherwise engage in dialogue toward the creation of a stable institutional home in Kenya for the .ke registry. On 9 June 2002, KENIC representatives contacted the IANA to formally request redelegation of the .ke ccTLD from the current administrative contact to KENIC. That same day, the IANA forwarded the KENIC request to Dr. Ochuodho, as the current administrative contact, for his review and comment. On 10 July 2002, Dr. 
Ochuodho responded to the IANA that his host ISP, ARCC, was hoping to upgrade its servers over the coming several months, in order to assume responsibility for the .ke technical functions. He further stated that "[n]o ISP or KENIC has drawn our attention to any substantial problems with current arrangement." In view of Dr. Ochuodho's apparent failure to respond to or undertake any discussions with the KENIC organizers on their dissatisfactions and proposals, the IANA sought to promote dialog among the parties. After several inquiries by the IANA, Dr. Ochuodho met with IANA representatives (the President, Vice President, and Counsel for International Legal Affairs of the Internet Corporation for Assigned Names and Numbers (ICANN)) at the East Africa Internet Forum, held in Nairobi on 7 August 2002. At that meeting, Dr. Ochuodho acknowledged that he had been less than responsive to the Kenyan Internet community over the previous several years, but noted that he had been kept extremely busy by his other important responsi
计算机
2015-48/1913/en_head.json.gz/10005
Free accessibility software

There are an increasing number of free assistive technology options for a computer. These will be less sophisticated and have fewer features than software you can buy, but if you want to surf the web, send and receive emails, and write basic documents, one of these might be just the ticket.

Windows magnification
Commercial magnifiers offer many features that you won't find in a free option, like the ability to magnify the screen before logging on. The free options are aimed at people who need only a relatively low level of magnification and mainly use the mouse. The free applications often have no installation process or they have an installation process that does not require administrator privileges. This means they can be easily used on a public computer, or even run from a pen drive plugged into a computer. You should always check with the owner of a computer before doing this!

Lightning Express
One of the restrictions of Windows Magnifier is that it has no full screen mode before Windows 7, and only in Windows 8 does it work with all colour settings. The big plus of Lightning Express is that it gives full screen magnification. It works with 32-bit versions of Windows XP, Vista and 7, and with desktop apps in Windows 8. A significant limitation of Lightning Express is that it has to be downloaded or run from the Internet each day. Find out more about and download Lightning Express.

Desktop Zoom
Desktop Zoom works with Windows XP, 2000, Vista and 7. It has keyboard control for many of its options, which include mouse pointer size and shape enhancements, and speech output using the Windows voice. There are also some unusual features, such as the ability to turn off by moving the mouse pointer to the bottom right corner of the screen. Desktop Zoom isn't as reliable as Magnifier or Lightning Express. Keyboard tracking and text smoothing don't work smoothly and zoom level settings are difficult to set. At high levels of magnification, smoothing and general movement deteriorate.

Around the mouse magnifiers
There are a number of free magnifiers that only magnify an area around the mouse pointer. They are usually aimed at people who need to do detailed graphics work and do not track the keyboard. Examples include Virtual Magnifying Glass, Zoom Lens, Desktop Magnify, Magnifying Glass and ZoomIt.

Windows speech
There are a number of free text-to-speech applications which can read out emails or documents. They will leave out lots of visual information such as if text is bolded or an email has an attachment, and are therefore not very useful if you need to use a computer but can't see the screen. Software that reads out this additional information is called a screen reader, and that is what this section covers.

NVDA
NVDA (Non-Visual Desktop Access) is the most popular free screen reader. It is an open-source programme that comes in portable and installer versions - the portable version can be run from a pen drive without any installation. NVDA uses the eSpeak synthesizer, which includes UK regional accents. It works on Windows XP, Vista, 7 and 8, where it supports touch screens. 
It also supports ARIA-enabled web pages. NVDA has support for the basic features of Windows, Internet Explorer and Mozilla Firefox, and a growing support for Microsoft Office.

Window-Eyes for users of Microsoft Office
GW Micro, the makers of the Window-Eyes screen reader, have released a free version for users of Microsoft Office 2010 and above. The free version is the same as the paid-for version of Window-Eyes except that it does not include the same voices, has no print, braille or audio CD documentation, and comes with little free technical support. This offer only started in January 2014, so it's too early to say how popular it will be. Find out more about and download Window-Eyes for users of Microsoft Office.

Other free screen readers
There are other, less widely used, free screen readers:
Thunder works well with Windows XP through to 7, and less well with Windows 8. It comes with the WebbIE suite of simplified alternatives to standard Windows applications, such as Calendar, BBC iPlayer, PDF reader and a text-based web browser. It assumes the use of a desktop keyboard - some of its commands rely on the number pad.
System Access to Go works on Windows XP and later (although support on Windows 8 is restricted to desktop apps), and includes support for some braille displays and even basic screen magnification. You have to connect to the website to download and start it. It is a free version of the System Access screen reader, intended for use only in temporary situations.

Microsoft Speech Platform voices
The voices used by free screen readers may not appeal at first, as they can seem quite robotic. Voices may grow on you, or they may have other benefits such as responsiveness or staying understandable at high speech rates. If you want to explore other voices, Microsoft's Speech Platform and voices are free, or you can buy more human-sounding voices from any vendor of commercial screen readers. One way to get the Microsoft voices is via the GW Micro website:
Go to gwmicro.com/voices
Find the heading "Microsoft Speech Platform Downloads" at the bottom of the page and read the instructions.
In the combo box used to select a voice, the English voices start with "en".
(For readers who are comfortable with a little scripting, a short sketch of how to list the voices installed on a computer appears at the end of this article.)

Eldy - simplified computer interface
Eldy is aimed at people aged over 60 or who are new to computing. It presents large, clear controls to allow writing of documents and email, sharing pictures and surfing the web. It may be of use to a partially sighted person interested in basic computer use. Eldy is available for Windows, Mac and Linux computers, and some Android tablets. It contains simple instructions and video tutorials to help you get started.
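For readers who are comfortable with a little scripting, here is the sketch referred to above: a minimal way to list the voices installed on a Windows machine and hear one of them. It assumes Python with the pywin32 package and uses the standard SAPI COM interface; depending on how a particular voice package registers itself (the Microsoft Speech Platform voices, for example, may register under a separate category), some voices may not appear in this list.

```python
# Minimal sketch: list the SAPI voices installed on Windows and speak a test
# sentence with the first one. Assumes Python with the pywin32 package
# installed (pip install pywin32); Windows only.
import win32com.client

def main():
    speaker = win32com.client.Dispatch("SAPI.SpVoice")
    voices = speaker.GetVoices()            # all registered SAPI voices
    for i in range(voices.Count):
        token = voices.Item(i)
        print(i, token.GetDescription())    # human-readable name of the voice
    if voices.Count > 0:
        speaker.Voice = voices.Item(0)      # pick the first voice
        speaker.Speak("This is a quick test of the selected voice.")

if __name__ == "__main__":
    main()
```

Screen readers such as NVDA have their own synthesizer settings, so most people will never need this; it is just a quick way to check what a machine has installed.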
计算机
2015-48/1913/en_head.json.gz/10737
FTI: high performance fault tolerance interface for hybrid systems
Leonardo Bautista-Gomez (Tokyo Institute of Technology, INRIA), Seiji Tsuboi (JAMSTEC), Dimitri Komatitsch (University of Toulouse), Franck Cappello (INRIA, University of Illinois), Naoya Maruyama (Tokyo Institute of Technology), Satoshi Matsuoka
Published in: SC '11, Proceedings of 2011 International Conference for High Performance Computing, Networking, Storage and Analysis, Article No. 32, ACM, New York, NY, USA, ©2011. ISBN: 978-1-4503-0771-0. doi: 10.1145/2063384.2063427

Concepts in this article
FLOPS: In computing, FLOPS (or flops or flop/s, for floating-point operations per second) is a measure of a computer's performance, especially in fields of scientific calculations that make heavy use of floating-point calculations, similar to the older, simpler, instructions per second. Since the final S stands for "second", conservative speakers consider "FLOPS" as both the singular and plural of the term, although the singular "FLOP" is frequently encountered. (from Wikipedia)
Fault-tolerant design: In engineering, fault-tolerant design is a design that enables a system to continue operation, possibly at a reduced level (also known as graceful degradation), rather than failing completely, when some part of the system fails. The term is most commonly used to describe computer-based systems designed to continue more or less fully operational with, perhaps, a reduction in throughput or an increase in response time in the event of some partial failure. (from Wikipedia)
Application checkpointing: Checkpointing is a technique for inserting fault tolerance into computing systems. It basically consists of storing a snapshot of the current application state and, later on, using it for restarting the execution in case of failure. (from Wikipedia)
Petascale computing: In computing, petascale refers to a computer system capable of reaching performance in excess of one petaflops, i.e. one quadrillion floating point operations per second. The standard benchmark tool is LINPACK and Top500.org is the organisation which tracks the fastest supercomputers. Some uniquely specialized petascale computers do not rank on the Top500 list since they cannot run LINPACK. This makes comparisons to ordinary supercomputers hard. (from Wikipedia)
Node (networking): In communication networks, a node (Latin nodus, ‘knot’) is a connection point, either a redistribution point or a communication endpoint. The definition of a node depends on the network and protocol layer referred to. A physical network node is an active electronic device that is attached to a network, and is capable of sending, receiving, or forwarding information over a communications channel. A passive distribution point such as a distribution frame or patch panel is consequently not a node. (from Wikipedia)
Graphics processing unit: A graphics processing unit or GPU (also occasionally called visual processing unit or VPU) is a specialized electronic circuit designed to rapidly manipulate and alter memory in such a way so as to accelerate the building of images in a frame buffer intended for output to a display. GPUs are used in embedded systems, mobile phones, personal computers, workstations, and game consoles. (from Wikipedia)
Reed–Solomon error correction: In coding theory, Reed–Solomon (RS) codes are non-binary cyclic error-correcting codes invented by Irving S. Reed and Gustave Solomon. They described a systematic way of building codes that could detect and correct multiple random symbol errors. By adding t check symbols to the data, an RS code can detect any combination of up to t erroneous symbols, and correct up to ⌊t/2⌋ symbols. (from Wikipedia)
Computer file: A computer file is a block of arbitrary information, or resource for storing information, which is available to a computer program and is usually based on some kind of durable storage. A file is durable in the sense that it remains available for programs to use after the current program has finished. Computer files can be considered as the modern counterpart of paper documents which traditionally are kept in offices' and libraries' files, and this is the source of the term. (from Wikipedia)
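The application-checkpointing idea summarised above is easy to see in miniature. The sketch below is an illustrative Python example only (it is not the FTI library's API, and the file name and checkpoint interval are arbitrary assumptions), showing the basic pattern of periodically saving solver state and resuming from the latest snapshot after a crash.

```python
# Minimal checkpoint/restart sketch (illustrative only: NOT the FTI API).
# Assumptions: plain Python, state saved with pickle to a local file named
# "state.ckpt"; a real HPC code would checkpoint per MPI rank to fast local
# storage and add redundancy (e.g. erasure coding) across nodes.
import os
import pickle

CKPT_FILE = "state.ckpt"     # hypothetical file name
CKPT_INTERVAL = 100          # checkpoint every 100 iterations
TOTAL_ITERS = 1000

def load_checkpoint():
    """Return (start_iteration, state) from the last checkpoint, if any."""
    if os.path.exists(CKPT_FILE):
        with open(CKPT_FILE, "rb") as f:
            return pickle.load(f)
    return 0, {"x": 0.0}     # fresh start

def save_checkpoint(iteration, state):
    """Write the snapshot atomically so a crash mid-write cannot corrupt it."""
    tmp = CKPT_FILE + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump((iteration, state), f)
    os.replace(tmp, CKPT_FILE)

def main():
    start, state = load_checkpoint()
    for i in range(start, TOTAL_ITERS):
        state["x"] += 0.001 * (i + 1)          # stand-in for real computation
        if (i + 1) % CKPT_INTERVAL == 0:
            save_checkpoint(i + 1, state)      # snapshot of application state
    print("final state:", state)

if __name__ == "__main__":
    main()
```

Kill the process partway through and run it again: it resumes from the most recent saved iteration instead of starting from zero. At petascale the interesting questions are the ones the paper targets: how often to checkpoint, where to store the snapshots, and how to survive the loss of a whole node, which is where erasure schemes such as the Reed–Solomon codes described above come in, allowing a missing checkpoint to be rebuilt from data held by other nodes.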
计算机