id (int64, 0 to 17.2k) | year (int64, 2k to 2.02k) | title (string, 7 to 208 chars) | url (string, 20 to 263 chars) | text (string, 852 to 324k chars) |
---|---|---|---|---|
16,967 | 2,018 | "Nintendo Switch Lite coming from the company that made 6 3DS models | VentureBeat" | "https://venturebeat.com/2018/10/03/new-nintendo-switch-lite-xl" | "New Nintendo Switch model coming from the company that released 6 3DSes
FIFA 18 in action on the Nintendo Switch.
Nintendo is planning a revision of its megapopular hybrid home/handheld Switch console for 2019, according to a report in The Wall Street Journal.
The goal is to maintain sales momentum for one of the fastest-selling video game systems of all time.
This news is among the least surprising stories of the week.
A revised Nintendo Switch model was inevitable. Still, the WSJ did great work to get some facts. The newspaper is reporting that manufacturers and suppliers are gearing up for the production of the “Switch Lite” (that’s my name for it). But it’s still early days. Nintendo isn’t sure what improvements it wants for the 2019 model.
But you only have to look at Nintendo’s history to know this was coming. Over the course of the 3DS’s life, Nintendo launched six different models. That was up from four different DS models. Hell, the Game Boy and Game Boy Advance had multiple models as well.
I would argue that Nintendo has always known it was going to revise the Switch. It probably expected to do it by 2018 if sales were slow. It was just a matter of time before we would get something like this.
And the reasons are obvious.
The Little Brother Effect People often like to point out that the reason Nintendo handhelds sell so well is because many people buy multiple versions. And that is absolutely right. I bought a 3DS, two different 3DS XLs, and a New 3DS XL. But here’s the crucial thing: I only still own the New 3DS XL and the original 3DS. I sold one of the 3DS XLs and gave the other one away.
Nintendo handhelds benefit from The Little Brother Effect. When the company releases a new Game Boy, DS, or 3DS, people typically don’t throw their old system into a drawer and forget about it. Most either sell it secondhand or give it to someone like a younger sibling. That means Nintendo gets another hardware sale from the same customer, and it likely gains a new software customer as well.
And this whole strategy works because hardware purchases are expensive. A $300 Switch is a big ask for a lot of people. But a $150 Switch on Craigslist or a hand-me-down from an older sister who can’t help but always buy the latest hardware is a gateway into the Switch software ecosystem.
Sony and Microsoft have absolutely tried to emulate this strategy with the PlayStation 4 Pro and Xbox One X. All of these companies are trying to expand their install base by building hardware for the fans most willing to spend $300+ for a noticeable upgrade.
Put another way, you and I — the fools who already own a Switch and will go out and buy this new one as well — are subsidizing people who are more thrifty and probably happier and better as humans.
" |
16,968 | 2,019 | "Intel unveils packaging innovations for building 3D and multi-die chips | VentureBeat" | "https://venturebeat.com/2019/07/09/intel-unveils-packaging-innovations-for-building-3d-and-multi-die-chips" | "Intel unveils packaging innovations for building 3D and multi-die chips
Foveros can sandwich two chips in the 3D space where only one fit before.
Intel is unveiling packaging innovations for creating three-dimensional chip packages and other solutions that put together multiple chips.
In advance of the Semicon West conference in San Francisco, Intel shared more details on several of its latest packaging technologies, building on previous news related to its Embedded Multi-Die Interconnect Bridge (EMIB) technologies and Foveros 3D chip packages.
Why it’s important Chip packaging has always played a critical — if under-recognized — role in the electronics supply chain, Intel said. As the physical interface between the processor and the motherboard, the package provides a landing zone for a chip’s electrical signals and power supply. As the electronics industry transitions to the data-centric era, advanced packaging will play a much larger role than it has in the past.
More than just the final step in the manufacturing process, packaging is becoming a catalyst for product innovation. Advanced packaging techniques allow integration of diverse computing engines across multiple process technologies with performance parameters similar to a single die, but with a platform scope that far exceeds the die-size limit of single-die integration. These technologies will improve product-level performance, power, and area while enabling a complete rethinking of system architecture, Intel said.
The first disclosure is what Intel is calling co-EMIB. Co-EMIB brings together EMIB and Foveros technologies — already in production today in products such as Intel Stratix 10 field programmable gate arrays (FPGAs), 8th Gen Intel Core processors with Radeon Graphics, and the forthcoming Lakefield 10-nanometer hybrid CPU architecture.
Above: A demo of Intel’s Foveros 3D chip stacking technology last December.
Embedded Multi-die Interconnect Bridge (EMIB) enables the connection of two or more Foveros (3D stacked chip) elements to create a package of chiplets that essentially performs as a single chip. These Foveros elements can also be connected to analog, memory, and other tiles with very high bandwidth and at very low power. This makes co-EMIB packaging technology ideal for large-die high-performance applications that could otherwise be limited by reticle size.
Intel is also showing a preview of Omni-Directional Interconnect (ODI) technology. ODI, the next step beyond co-EMIB, will bring together the best of EMIB and Foveros, plus additional technology innovation to provide even greater flexibility for communication among the chiplets in a package.
In short, the top chip in a stack can communicate horizontally with other chiplets, similar to EMIB. It can also communicate vertically through TSV (through-silicon via) connections in the base die below, similar to Foveros. Additionally, ODI leverages large vertical vias to allow power delivery to the top die directly from the package substrate.
Much larger than traditional TSVs, the large vias have lower resistance, which the company says provides more robust power delivery with higher bandwidth and lower latency for high-performance datacenter workloads, such as AI and supercomputing. Intel said it is the first in the industry to develop this packaging technology and to begin preparing to move it into its manufacturing process.
Finally, Intel shared more details on a new die-to-die interface called Management Data Input/Output (MDIO), a PHY-level communication protocol that controls the interface between chiplets. The company says MDIO provides better power efficiency and more than double the pin speed and bandwidth density offered by its current Advanced Interface Bus technology, with availability planned for 2020.
“Our vision is to develop leadership technology to connect chips and chiplets in a package to match the functionality of a monolithic system-on-chip,” said Babak Sabi, Intel corporate vice president for test and assembly tech development, in a statement. “A heterogeneous approach gives our chip architects unprecedented flexibility to mix and match IP blocks and process technologies with various memory and I/O elements in new device form factors. Intel’s vertically integrated structure provides an advantage in the era of heterogeneous integration, giving us an unmatched ability to co-optimize architecture, process and packaging to deliver leadership products.”
" |
16,969 | 2,019 | "EDSFF in action: Flash storage capacity like you’ve never seen | VentureBeat" | "https://venturebeat.com/2019/11/04/edsff-in-action-flash-storage-capacity-like-youve-never-seen" | "EDSFF in action: Flash storage capacity like you’ve never seen
This article is part of the Technology Insight series, made possible with funding from Intel.
It never made sense to me that SSDs held on to the 2.5” form factor borrowed from mechanical disks for as long as they did. NAND chips — the non-volatile memory technology that stores data inside of SSDs — offer versatility and resilience against operating shock that spinning platters can only dream of. They’re almost completely unconstrained by physical dimensions. Really, the only reason to continue packing solid-state storage into hard drive enclosures is legacy compatibility.
The reality is that new form factors are downright disruptive, and nobody wants to fight an uphill battle for adoption unless there’s an exceptional reason for it. While today’s SSD market is more diverse, including PCI Express add-in cards and motherboard-mounted M.2 drives, dense storage servers still utilize 2.5” bays.
The Enterprise & Datacenter SSD Form Factor (EDSFF) is poised to change that. The availability of first-wave storage products based on this hot design specification promises disruptive, screamingly fast new possibilities for high-performance computing. Think big data servers, AI, high-end gaming, and other data-or graphics-intensive applications where large amounts of fast memory and storage are advantageous.
Now that we’ve introduced EDSFF and explained how it changes datacenter storage, it’s time to look at the form factor’s practical applications. Get ready to learn about:
- The first EDSFF-based SSDs already available
- 1U servers and JBOF (just a bunch of flash) enclosures with 32 EDSFF drive bays across the front
- How EDSFF will evolve, including support for PCIe 4.0 and 5.0, plus wider links for additional throughput
Remind me: What is EDSFF? And why do I care? With measurements borrowed from Intel’s “ruler,” the specification is named for its long, narrow shape. EDSFF makes it possible to lay out NAND chips on a PCB sized to maximize capacity and heat transfer, rather than cramming them into an enclosure meant for disks. That translates to more terabytes per rack unit for applications that can’t get enough capacity, and lower operating temperatures in dense server systems. The new approach optimizes airflow and provides an unprecedented opportunity to facilitate on-demand, disaggregated storage for heavy-compute and GPU-bound workloads.
The new form factor sheds compatibility with legacy drive sleds in the front of rack-mounted systems.
It represents a fresh start for flash memory in the datacenter , thanks to the latest generation of lower-cost QLC NAND, plus higher-capacity drives that deliver more storage in less physical space. And it’s ready to rock.
EDSFF’s readiness might come as a surprise to folks familiar with its proprietary beginnings. However, an industry-wide call for a flexible, flash-optimized form factor expedited the design’s evolution. What we have now is a physical standard built to unlock the potential of NVMe —a high-performance interface for attaching non-volatile memory to the PCI Express bus—through a common connector. It’s backed by 15 industry heavy hitters, ranging from flash manufacturers to ecosystem enablers and solution providers.
Taking a leadership role in next-gen storage Because EDSFF is based on Intel’s proprietary “ruler” form factor, we weren’t surprised to see the company listed alongside Dell EMC, Facebook, HPE, Lenovo, Microsoft, and Samsung as EDSFF promoters. Together with eight EDSFF contributors (Amphenol, Foxconn Interconnect Technology, Micron, Molex, Seagate, TE Connectivity, Toshiba Memory, and Western Digital), the form factor has strong support at every stage of the supply chain.
When Intel first started talking about the ruler back in 2017, the same year as first Optane availability , its goal was to fit one petabyte of data into a 1U platform. Now that the form factor is standardized under EDSFF, and the ecosystem of drives, connectors, servers, and solutions exists as products available for sale, it’s time to look at how EDSFF’s aspirations translate to real-world storage.
Meet the first of its kind Above: Intel’s SSD D5-P4326 offers up to 15.36TB of capacity today, with a planned 30.72TB model coming soon.
Because the ruler and EDSFF Long (E1.L) form factors are nearly identical, it makes sense that Intel is first out the gate with a compatible product family. Currently, the company’s SSD D5-P4326 series drives are available in U.2 and E1.L form factors at capacities of up to 15.36TB using four-lane PCIe 3.1 NVMe interfaces. Both versions employ QLC 3D NAND, allowing each memory cell to store 33% more bits than Intel’s previous-generation flash.
While EDSFF’s size and shape play a starring role in the standard’s ability to pack lots of storage into small spaces, don’t underestimate the significance of QLC NAND in making Intel’s SSD D5-P4326 possible. It’s the fundamental building block that allows 15.36TB of capacity to work equally well in two different form factors. Because QLC-based SSDs cost less upfront than their predecessors, there’s a good economic case for replacing hard drives with them. Less power consumed through comparable workloads, reduced cooling costs, and lower annualized failure rates all factor into the SSD D5-P4326’s TCO advantage over mechanical storage.
Performance comparisons between the two storage types aren’t even fair. Whereas the fastest enterprise hard drives may sustain transfer rates as high as 300 MB/s, the SSD D5-P4326 can read data sequentially at up to 3,200 MB/s and write at 1,600 MB/s over its PCIe x4 link. A data retrieval operation completes in as little as 135 microseconds, versus the 2-millisecond average latency of a 15,000 RPM disk. In short, the SSD gets to your information faster and moves it more quickly through a larger pipe. A 10x speed boost is especially useful in applications currently limited by storage performance. And in a world facing mountains of big data for real-time processing, keeping CPUs fed with fresh bits is the name of the game.
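To make those ratios concrete, here is a quick back-of-the-envelope calculation using only the figures quoted above; the numbers come from the article, and the script is an illustrative sketch rather than a benchmark.

```python
# Rough comparison of the HDD and SSD figures quoted in the article (illustrative only).
hdd_seq_mb_s, ssd_seq_mb_s = 300, 3200      # sequential read, MB/s
hdd_latency_us, ssd_latency_us = 2000, 135  # average access latency, microseconds

print(f"Sequential read speedup: {ssd_seq_mb_s / hdd_seq_mb_s:.1f}x")       # ~10.7x
print(f"Access latency reduction: {hdd_latency_us / ssd_latency_us:.1f}x")  # ~14.8x

# Time to stream the full 15.36TB capacity at each rate (decimal TB -> MB):
capacity_mb = 15.36e6
print(f"Full-drive read: HDD ~{capacity_mb / hdd_seq_mb_s / 3600:.1f} h, "
      f"SSD ~{capacity_mb / ssd_seq_mb_s / 3600:.1f} h")
```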
It’s worth mentioning that all of the SSD D5-P4326’s vital specs are shared between the E1.L and U.2 form factors. Why favor the EDSFF one, then? Intel’s testing shows that a server built to support EDSFF can cool its drives to the same temperature using up to 55% less airflow than 2.5” U.2 SSDs. Slower-spinning fans make less noise, use less power, and cost less to operate. So, regardless of whether you’re choosing between the two form factors at similar capacities or targeting the highest density possible, Intel’s E1.L SSD D5-P4326 is the smart choice for new builds.
No doubt we’ll see other members of the EDSFF working group announce compatible SSDs of their own. Western Digital, for instance, recently started talking about its E1.L Ultrastar DC SN640, featuring capacities as high as 30.72TB using 96-layer BiCS4 NAND and its own in-house NVMe controller technology. The 2.5” U.2 version caps out at 7.68TB, making a switch to EDSFF for dense storage servers even more compelling. Micron, Samsung, Seagate, and Toshiba are sure to follow.
EDSFF-compatible platforms are here, too Above: Supermicro’s SuperServer SSG-1029P-NEL32R does away with the bulky backplane needed on most U.2-based servers, allowing air to flow more freely and fans to use less power.
New form factors are massively disruptive to established ecosystems. They require changes at every turn. Fortunately, Supermicro already had a 1U server designed for Intel’s ruler form factor, so when EDSFF was finalized with very slight modifications, it didn’t take long for Supermicro to tweak the connectors and officially support EDSFF as well. In fact, if you put a picture of the SSG-1029P-NEL32R system next to its ruler-based predecessor, they look identical.
Most striking is how well 32 EDSFF drives fit across the front. “The pitch here is 12 millimeters,” said Michael Scriber, senior director of server solution management at Supermicro. “So you end up with two and a half millimeters of gap for air to flow between each drive.” Aside from its two drive sleds with room for 16 E1.L SSDs per sled, the SSG-1029P-NEL32R’s specs read like many other 1U servers. It supports a pair of Intel Xeon Scalable processors, up to 6TB of DDR4 across 24 DIMM slots, and two M.2 slots to host boot drives. A couple of PCIe x16 slots out the back side can take 100 Gb/s network adapters. Or, you can stick with the 10 GbE controllers built onto Supermicro’s motherboard.
Above: The Intel Ethernet 800 series supports port speeds as high as 100 Gb/s to move data on or off EDSFF-equipped servers at unprecedented rates.
There’s also a switch to turn 64 lanes of PCIe connectivity from Intel’s CPUs into 128 lanes for the EDSFF drives (four lanes for each of 32 links). The decision to keep that a 2:1 ratio, rather than multiplexing fewer host-facing PCIe lanes, preserves balance across the storage subsystem. Even under heavy load, the switch doesn’t become a bottleneck. Case in point, Scriber says his team has seen 13,000,000 IOPS and 57 GB/s of bandwidth from the ruler server using Intel SSDs.
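As a back-of-the-envelope sketch of that fan-out, the arithmetic looks like this. The drive counts and the 57 GB/s figure are from the article; the per-lane PCIe 3.x rate of roughly 0.985 GB/s after encoding overhead is my assumption.

```python
# Hypothetical lane-math sketch for the SSG-1029P-NEL32R's PCIe switch.
drive_count, lanes_per_drive = 32, 4
drive_side_lanes = drive_count * lanes_per_drive   # 128 lanes behind the switch
host_side_lanes = 64                               # lanes coming from the two CPUs
print(f"Fan-out: {drive_side_lanes} drive-side lanes over {host_side_lanes} host lanes "
      f"= {drive_side_lanes // host_side_lanes}:1")

gb_per_lane = 0.985                                # approx. PCIe 3.x GB/s per lane (assumption)
host_ceiling_gb_s = host_side_lanes * gb_per_lane  # ~63 GB/s theoretical host-side ceiling
measured_gb_s = 57                                 # figure Supermicro quotes above
print(f"Measured 57 GB/s is ~{measured_gb_s / host_ceiling_gb_s:.0%} "
      f"of the ~{host_ceiling_gb_s:.0f} GB/s ceiling")
```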
Supermicro is also working on a JBOF (just a bunch of flash) version called the SSG136R-NEL32JBF that doesn’t have any CPUs or memory. Instead, it pipes 64 lanes of PCIe connectivity out the back using mini-SAS HD ports. Those ports can map to two, four, or as many as eight hosts using an external PCIe x8 cable to a host interface card of Supermicro’s own design. And because it’s still using PCIe, performance remains exceptional. “We’ve measured 52 GB/s out the back of our 1U JBOF system,” Scriber said.
The idea of JBOF enclosures attached to multiple hosts is especially interesting in datacenter applications where compute horsepower, DRAM performance, and storage resources don’t always scale independently. Separate boxes filled with CPU cores, add-in accelerators, and solid-state memory make it easier for enterprise customers to grow when and where their workloads require. EDSFF makes it easier to get more capacity into less space, which is great for deploying on-demand disaggregated storage.
Future innovation Intel’s SSD D5-P4326 and Supermicro’s SSG-1029P-NEL32R are excellent examples of how EDSFF builds on the flexibility of flash memory to increase storage density and improve thermal efficiency. Further out, we’ve already seen some of the ways that the standard will promote innovation.
For example, Supermicro’s upcoming BigTwin E1.S packs four server nodes into 2U of rack space. Each node supports two Xeon Scalable CPUs, up to 6TB of DDR4 (including Optane DC persistent memory ), two M.2 SSDs, and 10 of the shorter E1.S drives, each with up to 4TB of NAND. That’s incredible compute and storage performance from a compact platform.
Above: Although today’s E1.L and E1.S drives employ a four-lane PCIe link, x8 and x16 edge connector specs facilitate much higher bandwidth for compute-intensive applications.
In a nod to EDSFF’s forward-thinking design, its connector supports PCIe 4.0 and 5.0 transfer rates. It’s also scalable beyond the four-lane link used by Intel’s SSD D5-P4326. E2 (x8) and E3 (x16) connector specs effectively double and quadruple available bandwidth, and options for wider enclosures make it possible to dissipate up to 70W of heat. That opens the door to PCIe-attached compute, networking, and storage accelerators operating side-by-side with high-capacity SSDs.
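To put rough numbers on that scaling, here is a small sketch using commonly cited per-lane PCIe rates; the per-lane figures are approximations I am assuming, not numbers from the article.

```python
# Approximate per-direction bandwidth by PCIe generation and EDSFF link width.
# Per-lane rates after encoding overhead are assumptions: ~0.985, ~1.97, ~3.94 GB/s.
per_lane_gb_s = {"PCIe 3.0": 0.985, "PCIe 4.0": 1.97, "PCIe 5.0": 3.94}
link_widths = [4, 8, 16]

for gen, rate in per_lane_gb_s.items():
    cells = ", ".join(f"x{w}: ~{rate * w:.0f} GB/s" for w in link_widths)
    print(f"{gen}: {cells}")

# A jump from today's x4 PCIe 3.0 link (~4 GB/s) to an x16 PCIe 5.0 link (~63 GB/s)
# is roughly 16x, which is the headroom the wider EDSFF connectors are built for.
```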
Bottom line If you’re taking in and processing lots of data, particularly in real-time, a shift to EDSFF-based infrastructure should pay dividends in lower TCO, greater serviceability, and future expansion that just isn’t available from servers built around 2.5” disks.
" |
16,970 | 2,016 | "Canva raises $15 million at a $345 million valuation for its online design tool | VentureBeat" | "https://venturebeat.com/2016/09/14/canva-raises-15-million-at-a-345-million-valuation-for-its-online-design-tool" | "Canva raises $15 million at a $345 million valuation for its online design tool
Canva has raised an additional $15 million for its online design platform, putting the company at a $345 million valuation. The latest round was led by Blackbird Ventures and Felicis Ventures and will be used to make Canva’s tools more accessible and to develop a suite of products and offerings to help employees better present their skills and projects.
Founded in 2012 by Cliff Obrecht, Cameron Adams, and Melanie Perkins, Canva is on a mission to make graphic design easily accessible and simple for everyone. It now counts more than 10 million users in 179 countries, with more than 100 million designs created.
Canva seeks to reduce companies’ dependence on designers to create quality presentations, business cards, posters, and more. The service is available on both the desktop and mobile devices, with an iPhone app released this summer.
“Design is no longer a niche thing that needs to be done for a couple of ads in the newspaper; every single profession needs design to help communicate their message,” Perkins said. “The world is rapidly becoming more visual, and design is becoming more and more important across every profession.”
To date, Canva has raised over $42 million, with the last round just shy of a year ago. This latest investment comes from existing investors, amid what the company characterized as increased attention from venture capitalists. It opted to accept additional funding from Blackbird and Felicis because “We have loved working with them… Both focus on investing in companies that will transform entire industries and have a strong product focus. If there’s strong alignment on your long-term vision, it makes the smaller decisions a lot easier.”
The company claims that it didn’t need to raise the money, that this was an opportunistic round. Perkins explained that this meant Canva was “able to choose the investors we want to work with to ensure that we set Canva up for long-term success. We have so much left to do as a company,” she added, “it’s great to be able to have the resources to quadruple-down on all of our plans.”
Previous investors include 500 Startups, Founders Fund, InterWest Partners, Vayner/RSE, Shasta Ventures, Matrix Partners, Square Peg Capital, and actors Owen Wilson and Woody Harrelson.
" |
16,971 | 2,018 | "Dropbox will let users store Google G Suite files | VentureBeat" | "https://venturebeat.com/2018/03/01/dropbox-will-let-users-store-google-g-suite-files" | "Dropbox will let users store Google G Suite files
At the launch event for Dropbox Paper and Smart Sync.
Users of Google’s G Suite productivity service will be able to keep their Docs, Sheets, and Slides files inside Dropbox’s cloud storage environment, thanks to a partnership the two tech firms announced today.
It’s a massive win for the cloud storage company, which plans to make the feature available to all of its users, whether they pay for the service or not. In addition, Dropbox Business administrators will be able to manage those files like they would any other content that resides with the service, something that could be a boon to IT and compliance professionals.
This deal is supposed to make Dropbox a unified home for its users’ content and collaboration so they can access their G Suite files alongside all of the other data they store with the service. In addition to the file storage capabilities, Dropbox is also building new integrations with Gmail and Hangouts Chat so that it’s easier for people to share and preview files within those different communication services.
Collaboration between the two technology companies makes a great deal of sense, according to Billy Blau, Dropbox’s head of technology partnerships.
“It really comes from the fact that we have a big overlap of customer bases,” he said in an interview with VentureBeat. “Over 50 percent of Dropbox Business teams also use G Suite, so both sides see that connection. We have hundreds of millions of users who have Gmail addresses and are also G Suite users.” Blau said the forthcoming email add-on would allow customers to view rich previews of Dropbox files inside Gmail, something that goes beyond anything the company’s other email integrations offer at the moment. (Dropbox may expand that capability to other services in the future.) The company’s integration with Google echoes a similar partnership between the tech titan and Box. Those two announced a deal that would allow Box users to store G Suite files inside that company’s enterprise cloud storage and content management environment in October 2016.
That feature hasn’t materialized yet, and Dropbox’s own G Suite storage capability will be in the works for months still. Blau said his company plans to have some sort of file integration with G Suite available by the end of this year, though the feature may still be in beta and may not be rolled out to all the company’s users.
This deal also marks an interesting turn for Dropbox, which has partnered deeply with Microsoft on integrating its storage services into Office 365.
In fact, it has been a significant couple of weeks for Dropbox. Last Friday, the company released its filing for an initial public offering, revealing that it made $1.1 billion in revenue last year. Representatives for the company declined to comment on whether or how this partnership could affect that offering.
" |
16,972 | 2,018 | "Dropbox IPO priced at $21 per share with a market cap of $9.18 billion | VentureBeat" | "https://venturebeat.com/2018/03/23/dropbox-ipo-priced-at-21-per-share-with-a-market-cap-of-9-18-billion" | "Dropbox IPO priced at $21 per share with a market cap of $9.18 billion
Dropbox app
( Reuters ) — Dropbox’s initial public offering, the largest tech stock debut in more than a year, was priced at $21 per share, the company announced on Thursday, higher than expected.
At $21, the San Francisco-based company will have a market cap of about $9.18 billion on a fully diluted share count.
The cloud-based file-storage firm on Wednesday raised the expected price range by $2, to $18 to $20 per share, on the back of strong demand.
The offering raised about $756 million, making it the largest tech IPO since Snap raised $3.9 billion in its debut last year. Dropbox shares are set to start trading on Friday at the Nasdaq under the symbol “DBX.” The strong pricing bodes well for other highly anticipated IPOs from tech unicorns, or startups valued at more than $1 billion.
“Pricing above the revised range indicates there is more demand than supply for growth technology IPOs especially those generating substantial positive free cash flow,” said Leslie Pfrang of Class V Group, an IPO advisory firm.
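For a rough sense of how those headline figures relate, the implied share counts can be backed out directly. This is a simple arithmetic sketch using only the numbers reported above; the results are approximate.

```python
# Back-of-the-envelope share counts implied by the reported IPO figures.
share_price = 21.0        # offer price, USD
market_cap = 9.18e9       # fully diluted market cap, USD
ipo_proceeds = 756e6      # approximate amount raised, USD

fully_diluted_shares = market_cap / share_price   # ~437 million shares
shares_sold = ipo_proceeds / share_price          # ~36 million shares sold in the offering
print(f"Implied fully diluted shares: ~{fully_diluted_shares / 1e6:.0f}M")
print(f"Implied shares sold: ~{shares_sold / 1e6:.0f}M")
```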
Streaming music leader Spotify is scheduled to do a direct listing of shares on the New York Stock Exchange on April 3.
Dropbox has 500 million users and competes with Alphabet’s Google, Microsoft, Amazon.com and Box, which had a market value of about $3.1 billion as of Thursday’s close.
“For me the biggest problem I have with Dropbox is they don’t have any unique competitive advantages or proprietary offerings that differentiate them from the pack,” said Adam Sarhan of investment advisory service 50 Park Investments.
Dropbox co-founder and Chief Executive Officer Andrew Houston will have 24 percent of the company after selling 2.3 million shares in the offering.
Venture capital firm Sequoia Capital will retain a stake of about 25 percent.
Dropbox reported revenue of $1.11 billion in 2017, up 32 percent from a year earlier. Its full-year net loss, meanwhile, nearly halved to $111.7 million.
Goldman Sachs & Co LLC, J.P. Morgan, Deutsche Bank Securities, Allen & Company LLC and Bank of America Merrill Lynch are among lead underwriters to the offering.
( Reporting by Liana Baker in New York and Nikhil Subba in Bengaluru; Additional reporting by Diptendu Lahiri in Bengaluru, Salvador Rodriguez in San Francisco and Joshua Franklin in New York; Editing by Dan Burns, Sriraj Kalluvila and Lisa Shumaker )
" |
16,973 | 2,018 | "Dropbox launches extensions so you can work with Adobe, Autodesk, DocuSign, and other apps | VentureBeat" | "https://venturebeat.com/2018/11/06/dropbox-launches-extensions-so-you-can-work-with-adobe-autodesk-docusign-and-other-apps" | "Dropbox launches extensions so you can work with Adobe, Autodesk, DocuSign, and other apps
Dropbox Extensions
Dropbox has announced plans to make its platform more extensible, enabling you to work with myriad third-party applications and tools directly from within the cloud-storage service.
With Dropbox Extensions, the San Francisco-based company has partnered with a range of others, including Adobe, Nitro, Vimeo, HelloFax, DocuSign, AirSlate, HelloSign, Pixlr, and SmallPDF, to enable Dropbox users to carry out more productivity-focused tasks without leaving Dropbox.
For example, integrations with DocuSign, Adobe Sign, and HelloSign mean that you can share PDF or Word documents externally from within Dropbox and request an eSignature, with the final signed file automatically saved back into your Dropbox account.
Above: Adobe Sign in Dropbox Additionally, with Nitro, SmallPDF, and AirSlate on board, Dropbox users can synchronize and edit PDF documents from within Dropbox, and they can use HelloFax to send an electronic fax from Word or PDF documents.
So by combining extensions, users can effectively edit a contract from start to finish, request a signature, and digitally fax the signed contract — without ever leaving Dropbox.
Elsewhere, cloud-based image-editing platform Pixlr will now let you touch up your photos while in Dropbox, Autodesk will let you view and edit .DWG files, and Vimeo will let you invite collaborators to review and annotate video-based projects.
Wooing business Dropbox in its original guise was more of a consumer-focused product, but over the years it has doubled down on the enterprise as it seeks to monetize through subscriptions.
As such, Dropbox has been pushing for tighter integrations with other business-focused services, including a notable tie-up with Salesforce to make it easier to share files through Quip. Dropbox also recently revealed a partnership with Google to allow G Suite users to store their files in Dropbox.
“We want to empower people to choose the best tools for their work by removing the friction between them,” said Dropbox SVP of engineering, product, and design Quentin Clark. “So we’re making it seamless for users to connect with partners that offer the right tools for the task at hand.” The inaugural set of Dropbox Extensions will be available to everyone — including those on the free consumer plan — from November 27, and the company added that it expects to grow the third-party integrations into other services in the future.
" |
16,974 | 2,019 | "Microsoft Teams has 13 million daily active users, beating Slack | VentureBeat" | "https://venturebeat.com/2019/07/11/microsoft-teams-has-13-million-daily-active-users-beating-slack" | "Microsoft Teams has 13 million daily active users, beating Slack
Microsoft Teams, which launched worldwide in March 2017, has 13 million daily active users and 19 million weekly active users. This is the first time the company has released daily and weekly usage metrics for Teams. Microsoft also announced some new features for Teams, specifically targeting health care organizations and firstline workers.
Teams is the company’s Office 365 chat-based collaboration tool that competes with Google’s Hangouts Chat, Facebook’s Workplace, and Slack. Back in March, Microsoft shared that Teams is used by 500,000 organizations , just two years after launch. For months, Microsoft had called Teams its fastest-growing business app ever, but it refused to share how many individuals were using Teams — until today.
We have guessed for a long time that Microsoft Teams was bigger than Google’s and Facebook’s offerings. Google launched Hangouts Chat in February 2018 , when 4 million businesses paid for G Suite, and it still hasn’t shared how many organizations use it. In February, Workplace by Facebook passed 2 million paid users.
But we assumed Slack was bigger, and that Microsoft would share user numbers once that had changed. As of January, Slack had 10 million daily active users.
It’s safe to say Microsoft Teams is now the most-used chat-based collaboration tool.
New features In addition to the usage reveal, Microsoft Teams is also getting a slew of new features. They are rolling out now, this month, next month, or “soon.” Here is a quick rundown: Now: Announcements allow team members to highlight important news in a channel and are a great way to kick off a new project, welcome a new colleague, or share results from a recent marketing campaign.
Now: The new time clock feature in the Teams Shifts module allows workers to clock in and out of their work shifts and breaks right from their Teams mobile app. Managers have the option to geo-fence a location to ensure team members are at the designated worksite when clocking in or out.
The Teams client is now available to existing installations of Office 365 ProPlus on the Monthly Channel.
July: Priority notifications alert recipients to time-sensitive messages, pinging a recipient every two minutes on their mobile and desktop until a response is received.
July: Read receipts in chat display an icon to indicate when a message you have sent has been read by the recipient.
July: Channel moderation allows moderators to manage what gets posted in a channel and whether a post accepts replies.
August: Targeted communication allows team owners to message everyone in a specific role at the same time by @mentioning the role name in a post. For example, you could send a message to all cashiers in a store or all nurses in a hospital.
August: A Teams trial offering will allow Microsoft 365 partners to initiate six-month trials for customers.
Soon: Channel cross posting allows you to post a single message in multiple channels at the same time.
Soon: Policy packages in the Microsoft Teams admin center enable IT admins to apply a predefined set of policies across Teams functions, such as messaging and meetings, to employees based on the needs of their role.
Whether you use Microsoft Teams daily or just once a week, you’ll probably end up using at least one of these.
" |
16,975 | 2,019 | "Slack remodels workplace apps and expands SaaS workflow actions | VentureBeat" | "https://venturebeat.com/2019/10/22/slack-remodels-workplace-apps-and-expands-saas-workflow-actions" | "Slack remodels workplace apps and expands SaaS workflow actions
Slack logo at Slush 2018 conference in Helsinki, Finland
Slack today introduced a range of upgrades that change how workplace apps function, further stepping away from the traditional bot to offer more advanced workplace apps capable of functioning with buttons and menus, even outside conversations in Slack channels.
App home is designed to act as a home page for apps, where updates can be shared without the need to send messages to a channel. One of the first apps to receive the App home feature is Google Calendar , which shows a stream of your upcoming meetings and lets you join a call, reschedule, or RSVP for an appointment.
Above: App home “It’s so much easier to use now than it was when this was just a stream of message notifications saying you have a meeting in five minutes,” Slack director of developer relations Bear Douglas told VentureBeat in a phone interview.
There are also modal windows, which can pop up within Slack to share detailed information for more complex use cases that require receiving input from users across multiple steps.
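For developers, opening one of these modals goes through Slack’s views.open API in response to a user interaction. The snippet below is a minimal, hypothetical sketch using the slack_sdk Python client; the token, trigger ID, and field names are placeholders rather than values from the article.

```python
# Hypothetical sketch: opening a Slack modal to collect multi-step input.
from slack_sdk import WebClient

client = WebClient(token="xoxb-...")  # bot token placeholder

def open_expense_modal(trigger_id: str) -> None:
    """trigger_id arrives in the interaction payload when a user clicks a button or shortcut."""
    client.views_open(
        trigger_id=trigger_id,
        view={
            "type": "modal",
            "title": {"type": "plain_text", "text": "File an expense"},
            "submit": {"type": "plain_text", "text": "Submit"},
            "blocks": [
                {
                    "type": "input",
                    "block_id": "amount",
                    "label": {"type": "plain_text", "text": "Amount (USD)"},
                    "element": {"type": "plain_text_input", "action_id": "amount_input"},
                }
            ],
        },
    )
```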
App home and modal windows are designed with Slack’s app design interface Block Kit and are available today. App home is available in open beta and will be generally available for app builders in the coming months.
Above: Slack modal windows Block Kit was made available for developers to quickly build apps in February and first introduced at the first Spec conference last year. Block Kit can make apps that display photos, use drop-down menus, collect data with cards, and utilize things like buttons and lists instead of completing tasks with words alone.
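As a concrete illustration, a minimal Block Kit payload pairs formatted text with interactive elements and is passed to the normal message-posting call. This is a hypothetical sketch using the slack_sdk Python client; the channel ID and action ID are placeholders.

```python
# Hypothetical sketch: posting a Block Kit message with a section and a button.
from slack_sdk import WebClient

client = WebClient(token="xoxb-...")  # bot token placeholder

blocks = [
    {
        "type": "section",
        "text": {"type": "mrkdwn", "text": "*Design review* starts at 3pm. Can you make it?"},
    },
    {
        "type": "actions",
        "elements": [
            {
                "type": "button",
                "text": {"type": "plain_text", "text": "Join call"},
                "action_id": "join_call",  # handled by the app's interactivity endpoint
            }
        ],
    },
]

# chat.postMessage accepts a blocks array plus a plain-text fallback for notifications.
client.chat_postMessage(channel="C0123456789", text="Design review at 3pm", blocks=blocks)
```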
Changes to how Slack apps can function are scheduled to be announced today at Spec, Slack’s second annual developer conference in San Francisco. Slack’s App Directory may only contain 1,800 automated bots, but Slack dev platforms have been used to create 500,000 custom bots to share info or streamline workflows, from bots made to tell coworkers about each other’s birthdays to a bot HSBC uses to help new hires learn company acronyms and jargon.
Today most app interactions are seen in the bottom left in Slack, where you get messages from bots, but App Home will give developers and businesses making custom apps a way to serve up content in tabs and consolidate updates in a single stream.
Finally, to encourage app discovery, the Slack app will soon get a centrally located place to find SaaS integrations and apps, called the app launcher. Available in the coming months, the app launcher will share the most commonly used apps at launch but may expand to personalize recommendations based on the individual user.
Above: App launcher The first things users will see in the app launcher will be apps an administrator approved for an employee to use or apps that are already installed, addressing a pain point for average Slack users.
“They may not even be aware of the apps that are already in their team,” Douglas said.
Slack has a history of being slow to bring app discovery opportunities into the app, but the new app launcher may grow to include personalized results based on your role within a company or industry.
“You can imagine in the future it might be something aligned to what your role in a company is so if you’re on the finance team here are the 5 apps that people tend to use,” she said.
Also new today: the expansion of Actions.
Actions first launched at Spec last year to make it easy to do things like create an Asana task or Zendesk ticket attached to any message within Slack, but actions are now being expanded to become prominently available and no longer require a connection to messages.
Administrators will be able to assign actions to specific channels, and admin control of default actions based on a person’s role within a company is a possibility in the future, Douglas said.
“Creating a task may come from a conversation you’re having with a colleague or it might come from something that just occurred to you while you’re talking to a colleague that has nothing to do with your conversation, and so we want to create a surface for people to start interacting with apps in that way,” she said.
Above: Actions from Anywhere Actions will also soon become available in the Slack search area so you can search “file an expense” or other tasks employees frequently need to do their jobs.
Actions from Anywhere is available in closed beta today and will be made generally available in the coming months.
In other recent Slack news, a revamped Salesforce app was recently introduced, and last week, Slack launched the Workflow Builder to make it easy to create a workflow or work from a pre-made template to do things like make a help desk request or praise a teammate.
Slack now has 12 million daily active users and 6 million paid users.
" |
16,976 | 2,018 | "iPhone XR, iPhone XS, and iPhone XS Max: What Apple changed | VentureBeat" | "https://venturebeat.com/2018/09/12/iphone-xr-iphone-xs-iphone-xs-max-specs" | "iPhone XR, iPhone XS, and iPhone XS Max: What Apple changed
At Apple’s Gather Round event in Cupertino today, the company announced its iPhone XR, iPhone XS, and iPhone XS Max, successors to the iPhone 8, iPhone 8 Plus, and the iPhone X.
Apple’s 12th-generation phones are now official.
Preorders for the iPhone XS and iPhone XS Max begin September 21, and the devices will begin shipping September 28. Like with the iPhone X, you’ll have to wait longer for the iPhone XR: until October 19 to preorder and until October 26 for it to ship.
Before you get your credit card ready, and assuming you’re not enthused about Android 9.0 Pie (that’s really your only option, as Windows 10 Mobile and BlackBerry are dead), you might want to see exactly what you’re getting. The tables below show you what Apple has changed — we’ve decided to compare the iPhone 8 and iPhone XR, iPhone X and iPhone XS, as well as the iPhone 8 Plus and iPhone XS Max, at the time of each of their respective launches.
Spec | iPhone 8 | iPhone XR
Price | $699, $849 | $749, $799, $899
Storage | 64GB, 256GB | 64GB, 128GB, 256GB
Display | 4.7-inch, 1334×750, 326 ppi | 6.1-inch, 1792×828, 326 ppi
Contrast ratio | 1400:1 | 1400:1
Processors | A11 Bionic 64-bit, M11 motion | A12 Bionic 64-bit
Identification | Touch ID | Face ID
Rear camera | 12MP, ƒ/1.8 | 12MP, ƒ/1.8
Video recording | 4K at 60fps, 1080p at 60fps | 4K at 60fps, 1080p at 60fps
Front camera | 7MP photos, 1080p video | 7MP photos, 1080p video
FaceTime | Over Wi-Fi or cellular | Over Wi-Fi or cellular
Assistant | Siri | Siri
Navigation | GPS and GLONASS | GPS, GLONASS, Galileo, and QZSS
Connectivity | Bluetooth 5.0, NFC | Bluetooth 5.0, NFC
Talk time | Up to 14 hours | Up to 25 hours
Internet use | Up to 12 hours | Up to 15 hours
Video playback | Up to 13 hours | Up to 16 hours
Audio playback | Up to 40 hours | Up to 65 hours
Height | 5.45 inches (138.4 mm) | 5.94 inches (150.9 mm)
Width | 2.65 inches (67.3 mm) | 2.98 inches (75.7 mm)
Depth | 0.29 inch (7.3 mm) | 0.33 inch (8.3 mm)
Weight | 5.22 ounces (148 grams) | 6.84 ounces (194 grams)
SIM card | Nano-SIM | Nano-SIM
Connector | Lightning | Lightning
Colors | Silver, Gold, Space Gray | Red, Yellow, White, Coral, Black, Blue

If you're trading in an iPhone 8 for an iPhone XR, you're paying more for a bigger phone, larger display, and heavier device, with better specs.
Here's the iPhone X versus the iPhone XS:

Spec | iPhone X | iPhone XS
Price | $999, $1,149 | $999, $1,149, $1,349
Storage | 64GB, 256GB | 64GB, 256GB, 512GB
Display | 5.8-inch, 2436×1125, 458 ppi | 5.8-inch, 2436×1125, 458 ppi
Contrast ratio | 1,000,000:1 | 1,000,000:1
Processors | A11 Bionic 64-bit, M11 motion | A12 Bionic 64-bit
Identification | Face ID | Face ID
Rear camera 1 | 12MP, ƒ/1.8 | 12MP, ƒ/1.8
Rear camera 2 | 12MP, ƒ/2.4 | 12MP, ƒ/2.4
Video recording | 4K at 60fps, 1080p at 60fps | 4K at 60fps, 1080p at 60fps
Front camera | 7MP photos, 1080p video | 7MP photos, 1080p video
FaceTime | Over Wi-Fi or cellular | Over Wi-Fi or cellular
Assistant | Siri | Siri
Navigation | GPS and GLONASS | GPS, GLONASS, Galileo, and QZSS
Connectivity | Bluetooth 5.0, NFC | Bluetooth 5.0, NFC
Talk time | Up to 21 hours | Up to 20 hours
Internet use | Up to 12 hours | Up to 12 hours
Video playback | Up to 13 hours | Up to 14 hours
Audio playback | Up to 60 hours | Up to 60 hours
Height | 5.65 inches (143.6 mm) | 5.65 inches (143.6 mm)
Width | 2.79 inches (70.9 mm) | 2.79 inches (70.9 mm)
Depth | 0.30 inch (7.7 mm) | 0.30 inch (7.7 mm)
Weight | 6.14 ounces (174 grams) | 6.24 ounces (177 grams)
SIM card | Nano-SIM | Nano-SIM
Connector | Lightning | Lightning
Colors | Silver, Space Gray | Gold, Silver, Space Gray

The iPhone XS isn't much of an upgrade over the iPhone X. You're getting a slightly more powerful phone, with one-hour worse battery life — thankfully you don't have to pay more for it. No wonder Apple killed off the iPhone X — it really has no place in the lineup.
And now finally, the Plus/Max comparison:

Spec | iPhone 8 Plus | iPhone XS Max
Price | $799, $949 | $1,099, $1,249, $1,449
Storage | 64GB, 256GB | 64GB, 256GB, 512GB
Display | 5.5-inch, 1920×1080, 401 ppi | 6.5-inch, 2688×1242, 458 ppi
Contrast ratio | 1300:1 | 1,000,000:1
Processors | A11 Bionic 64-bit, M11 motion | A12 Bionic 64-bit
Identification | Touch ID | Face ID
Rear camera 1 | 12MP, ƒ/1.8 | 12MP, ƒ/1.8
Rear camera 2 | 12MP, ƒ/2.8 | 12MP, ƒ/2.8
Video recording | 4K at 60fps, 1080p at 60fps | 4K at 60fps, 1080p at 60fps
Front camera | 7MP photos, 1080p video | 7MP photos, 1080p video
FaceTime | Over Wi-Fi or cellular | Over Wi-Fi or cellular
Assistant | Siri | Siri
Navigation | GPS and GLONASS | GPS, GLONASS, Galileo, and QZSS
Connectivity | Bluetooth 5.0, NFC | Bluetooth 5.0, NFC
Talk time | Up to 21 hours | Up to 25 hours
Internet use | Up to 13 hours | Up to 13 hours
Video playback | Up to 14 hours | Up to 15 hours
Audio playback | Up to 60 hours | Up to 65 hours
Height | 6.24 inches (158.4 mm) | 6.20 inches (157.5 mm)
Width | 3.07 inches (78.1 mm) | 3.05 inches (77.4 mm)
Depth | 0.30 inch (7.5 mm) | 0.30 inch (7.7 mm)
Weight | 7.13 ounces (202 grams) | 7.34 ounces (208 grams)
SIM card | Nano-SIM | Nano-SIM
Connector | Lightning | Lightning
Colors | Silver, Gold, Space Gray | Silver, Gold, Space Gray

This is probably the least obvious comparison, but that's largely because the iPhone XS Max is supposed to stand in a class of its own. And if you want the best iPhone, as always, it will cost you.
One last thing: Touch ID is not available in any of these new phones. Keep that in mind if you’re not a fan of Face ID.
" |
16,977 | 2,018 | "Huawei smartphone shipments jumped a third in 2018 to over 200 million units | VentureBeat" | "https://venturebeat.com/2018/12/24/huawei-smartphone-shipments-jumped-a-third-in-2018-to-over-200-million-units" | "Huawei smartphone shipments jumped a third in 2018 to over 200 million units
Huawei Mate 20 Pro: Rear view
Huawei has had a tough year on a number of fronts, with the U.S. and other countries leading efforts to ban its networking equipment on cybersecurity grounds. But the Chinese giant hasn’t had a bad year in the consumer realm.
Today, Huawei revealed it has shipped more than 200 million smartphones in 2018, including its Honor sub-brand, confirming an estimation it made back in August.
This represents an increase of roughly one-third from 2017, when it shipped 153 million.
According to Huawei, there are now 500 million Huawei phones in active use across more than 170 countries.
This year, Huawei also launched a number of well-received smartphones, including the flagships Huawei P20 Pro and Mate 20 Pro, which sported impressive cameras and nifty AI-powered features.
That Huawei has broken the 200 million units barrier in a single calendar year should perhaps come as little surprise. Indeed, in Q2 2018 Huawei displaced Apple to become the number two smartphone maker globally, the first time in seven years that Samsung and Apple hadn’t held the top two positions. This was more than a blip, with Huawei maintaining the second spot into Q3.
It’s also worth looking at the company’s growth over a longer time frame — in 2010, Huawei shipped 3 million smartphones, and by 2015 it had passed the 100 million mark.
Above: Huawei smartphone growth: 2010 – 2018

Battle

While Samsung hasn’t revealed its number of shipments for 2018, a look at figures for Q2 and Q3 reveals that Samsung shipped on average at least 20 million more units than Huawei, so we might expect the Korean juggernaut to have dispatched somewhere in the region of 300 million units this year.
It’s not clear who will be the overall number two smartphone vendor in 2018, given that Apple shifted 10 million more units than Huawei in Q1, but based on the current trajectory it is more than likely Huawei will cement the position for 2019.
In fact, Huawei Consumer CEO Richard Yu said back in August that his company could possibly surpass Samsung by the end of 2019.
“I think it’s no problem that we become the global number two next year,” he said. “In Q4 next year, it’s possible we become number one.”
" |
16,978 | 2,019 | "Samsung Galaxy A80 features a slide-up rotating camera | VentureBeat" | "https://venturebeat.com/2019/04/10/samsung-galaxy-a80-features-a-slide-up-rotating-camera" | "Samsung Galaxy A80 features a slide-up rotating camera
Smartphone manufacturers are going all-out to maximize screen real estate without compromising on features. And it’s against that backdrop that Samsung today unveiled the Galaxy A80, which sports a gargantuan 6.7-inch display and a camera that slides and rotates up from the rear.
Over the past couple of years, mobile phone makers have tried a variety of methods to grow their screens without increasing the size of the device itself. One of these ways has been to eat into the top bezel with a notch — Apple’s iPhone X in 2017 wasn’t the first to include the bezel cutout, but it certainly kick-started a craze, with many Android devices following suit.
The notch allows companies to increase their screen-to-body ratio while retaining the selfie camera, but it comes with trade-offs such as reduced access to alerts in the notification bar and, well, questionable aesthetics. The notch has proved unpopular with many people, and this has led to alternative solutions such as in-screen selfie cameras that reduce the notch to more of a punch-hole, while sliders have resurfaced alongside pop-up cameras that go some way toward concealing the front-facing camera.
The Samsung Galaxy A80 has another solution for some of these aforementioned trade-offs.
The full-screen display shows no signs of a selfie camera — until you slide a rear panel upward, which reveals the front-shooter.
Above: Samsung Galaxy A80

Now, this is where the really novel aspect of the Galaxy A80 kicks in. The front-facing camera actually doubles as the rear-facing camera — it swivels around to face the front when the slider is activated.
The upshot of this is that the rear- and front-facing cameras are exactly the same quality, featuring a 48-megapixel main lens, an 8-megapixel ultra-wide lens, and a 3D depth lens.
Above: Samsung Galaxy A80: Slide and rotate

It’s worth noting here that Samsung isn’t the first company to bring a rotating mobile phone camera like this to market — Oppo, for example, released its N1 device back in 2013 with a similar rotating camera.
However, as the world’s biggest mobile phone maker, Samsung is in a good position to make the promise of triple-lens, high-quality smartphone photography a reality — for selfies and everything in between.
Other specs worth highlighting include the on-board 8GB of RAM, 128GB of internal storage, a 3,700mAh battery, 25-watt supercharging, and an in-screen fingerprint reader. There is no 3.5mm headphone jack on the Galaxy A80, which will undoubtedly upset many, but this seems to be the direction Samsung is heading — the recently announced Galaxy Fold has no headphone port either, while the upcoming Galaxy Note 10 is also rumored to be ditching the jack.
The electronics giant hasn’t revealed a price as of yet, but it did provide a launch date of May 29.
" |
16,979 | 2,019 | "Samsung launches mid-range Galaxy A90 5G smartphone | VentureBeat" | "https://venturebeat.com/2019/09/03/samsung-launches-mid-range-galaxy-a90-5g-smartphone" | "Samsung launches mid-range Galaxy A90 5G smartphone
Samsung has announced a new mid-range addition for its 5G device lineup.
The Korean tech titan had already introduced the world to a bunch of 5G devices this year, including the $1,300-plus Galaxy S10 5G , the $2,000-plus Galaxy Fold 5G , and the recently revealed Galaxy Note 10-series, which features two 5G versions. The latter pair start at $1,049, though the cheaper one is exclusive to South Korea — those in the U.S. will pay upwards of $1,300. Now Samsung has thrown 5G at one of its mid-range A series devices in the form of the Samsung Galaxy A90 5G, which will debut in the company’s native market tomorrow (September 4), with an international launch to follow.
Rumors of the Samsung Galaxy A90 5G have circulated for months, with leaker Evan Blass teasing a bunch of very official-looking marketing materials as recently as this past weekend. Now, however, Samsung has officially confirmed most of the A90 5G’s specifications, and it is a bit of a beast despite its lowly A series status. Sporting a giant 6.7-inch full HD+ display (1080 x 2400), the A90 5G has three rear cameras (including a 48-megapixel main lens), a 32-megapixel selfie camera, a 4,500 mAh battery, and 25-watt fast charging. It also has the same Snapdragon 855 chipset as other 5G devices on the market.
Above: Samsung Galaxy A90 5G

Samsung isn't, of course, the only company to push 5G devices this year — OnePlus now offers the $840 OnePlus 7 Pro 5G, while Huawei has the Mate 20 X 5G and Xiaomi the more affordable $680 Mi Mix 3 5G.
LG, meanwhile, offers the $1,000-plus V50 ThinQ 5G.
There are other 5G devices available, too, but as an affordable, mass-market device from the biggest smartphone manufacturer in the world, the Galaxy A90 5G could help truly kickstart the 5G revolution. Samsung hasn’t revealed pricing for the Galaxy A90 5G, even in its domestic market where it goes on sale tomorrow, but as a mid-range device it could weigh in at comfortably less than $1,000.
* For now, those in the U.S. have limited options in terms of 5G devices — only the Samsung S10 5G, Galaxy Note 10 Plus 5G, OnePlus 7 Pro 5G, and LG V50 ThinQ 5G are available. If Samsung can come in with a $700-$800 5G device, for example, it could help propel the adoption of 5G — assuming the Galaxy A90 5G is made available to buy in the U.S.
VentureBeat has reached out to Samsung for further details around pricing and market launches for the Galaxy A90 5G, and we will update here when we hear back.
* Samsung has now confirmed the local price for the Galaxy A90 5G will be 899,800 KRW, which translates to around $743. The company has yet to confirm when, or whether, the device will make it to other markets such as the U.S.
" |
16,980 | 2,017 | "AWS launches Greengrass IoT service out of preview | VentureBeat" | "https://venturebeat.com/2017/06/07/aws-launches-greengrass-iot-service-out-of-preview" | "AWS launches Greengrass IoT service out of preview
Amazon Web Services dove deeper into providing services for the Internet of Things today with the official launch of its Greengrass service.
The new offering helps customers handle processing of data on edge devices and communication from those devices to the AWS cloud.
Greengrass lets customers write functions that can be deployed on compatible devices and run in response to triggers from local hardware or the AWS cloud. Using those functions, it’s possible to handle data processing without a network connection, which is key for IoT devices. Greengrass also handles secure connections between embedded hardware and Amazon’s cloud, so it’s possible for customers to pass data back and forth.
The service, which AWS announced at its re:Invent conference in Las Vegas last year, helps the cloud provider compete on a key battlefield. All of the major cloud providers have some set of IoT services to help enterprises take advantage of connected devices. However, AWS is one of the first among its cohort to provide generally available first-party services for edge processing, which could be a key benefit for attracting customers and hardware partners.
In particular, Greengrass has beaten Microsoft’s Azure IoT Edge service to general availability, which gives the company an advantage, according to Dima Tokar, the cofounder and CTO of MachNation.
“For prospective customers evaluating cloud platforms, the general availability of Greengrass will give AWS temporary advantage over Azure, as it provides a production-ready set of capabilities for edge devices,” he said in an email. “For hardware vendors evaluating cloud platforms, Amazon shipping Greengrass means vendors can take a generally available product from a leading cloud vendor and begin integrating it into their hardware.” One of the other key benefits of Greengrass is that the functions users run on their edge devices are built using AWS Lambda, the company’s “serverless” processing service. In theory, it’s possible for developers to write one Lambda function that will run both in Amazon’s cloud and on edge devices.
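As an illustration of that model, here is a minimal sketch of a Lambda-style function written against the Greengrass Core SDK, publishing a reading to a local MQTT topic so it can be handled without a network connection. The sensor read, topic name, and payload are placeholders rather than part of any AWS sample.

```python
# Minimal sketch of a Greengrass-deployed Lambda function: read a (hypothetical)
# local sensor value and publish it to an MQTT topic without leaving the device.
import json
import greengrasssdk

# The Greengrass Core SDK routes this publish through the local message broker.
client = greengrasssdk.client("iot-data")

def read_sensor():
    # Placeholder for real hardware access (GPIO, serial, etc.).
    return {"temperature_c": 21.5}

def handler(event, context):
    reading = read_sensor()
    # "sensors/room1/telemetry" is an example topic, not an AWS default.
    client.publish(topic="sensors/room1/telemetry", payload=json.dumps(reading))
    return {"status": "published"}
```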
There are three laws that AWS thinks about when building edge computing capabilities, according to Dirk Didascalou, the cloud provider’s vice president of IoT.
It’s not possible to transfer data faster than the laws of physics allow, and sending data over a network isn’t fast enough for some applications. The laws of the land can require certain security or privacy practices that make cloud or IoT use more difficult, and the laws of economics can make it harder to send large volumes of data from edge devices to the cloud in a cost-effective manner. AWS Greengrass is aimed at addressing those problems.
As part of the launch, Intel announced a new IoT Dev Kit and Greengrass compatible gateways to help hardware manufacturers and enterprises adopt the new services. Qualcomm also said that its DragonBoard 410c development kit now supports Greengrass, and showed customer hardware that integrated with the new service.
Starting Tuesday, the Greengrass cloud service will be generally available out of AWS’ Northern Virginia and Oregon data centers. The service will be made available from the cloud provider’s Frankfurt and Sydney regions in the coming weeks, Didascalou said. Greengrass-capable devices can be deployed worldwide.
Updated at 12:35 Pacific Time with comments from Dima Tokar.
Correction June 7, 2017: We initially said the DragonBoard 410c was new. It is not. The DragonBoard 410c was released in March 2015. We have updated the article accordingly.
" |
16,981 | 2,017 | "Amazon unveils DeepLens, a $249 camera for deep learning | VentureBeat" | "https://venturebeat.com/2017/11/29/amazon-unveils-deeplens-a-249-camera-for-deep-learning" | "Amazon unveils DeepLens, a $249 camera for deep learning
AWS general manager of AI services Matt Wood tests DeepLens' object identification and sentiment analysis abilities onstage at AWS re:Invent in Las Vegas.
Amazon Web Services today unveiled DeepLens, a wireless video camera made for the quick deployment of deep learning. The camera will cost $249 and is scheduled to ship for customers in the United States in April 2018.
DeepLens comes pre-loaded with AWS Greengrass for local computation and can operate with SageMaker , a new service to simplify the deployment of AI models, as well as popular open source AI services such as TensorFlow from Google and Caffe2 from Facebook, according to an AWS blog.
“DeepLens runs the model directly onto the device. The video doesn’t have to go anywhere. It can be trained with SageMaker and deployed to the model,” said AWS general manager of AI services Matt Wood during the keynote address today at the AWS re:Invent conference being held this week in Las Vegas.
The camera can run pre-trained or custom AI models to perform computer vision tasks such as facial recognition, sentiment analysis, or object identification. Pre-trained models included with the camera can also recognize a variety of activities such as brushing teeth or playing the guitar.
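For a sense of what running a model "directly on the device" looks like in practice, here is a generic sketch of a local inference loop. It deliberately uses OpenCV's dnn module rather than the DeepLens SDK, and the model files are placeholders for whatever pre-trained detector you load.

```python
# Generic sketch of local ("at the edge") inference on camera frames.
# This is not the DeepLens SDK; the model files below are placeholders for any
# pre-trained detector you supply.
import cv2

net = cv2.dnn.readNetFromCaffe("deploy.prototxt", "detector.caffemodel")
cap = cv2.VideoCapture(0)  # default camera

while True:
    ok, frame = cap.read()
    if not ok:
        break
    blob = cv2.dnn.blobFromImage(frame, scalefactor=1.0, size=(300, 300),
                                 mean=(104.0, 177.0, 123.0))
    net.setInput(blob)
    detections = net.forward()  # inference happens on-device; no video is uploaded
    # ... act on detections here, e.g. raise an alert when a class of interest appears
```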
“You can program this thing to do almost anything you can imagine so you can imagine programming the camera with computer vision models so if you recognize a license plate coming into your driveway it will open a garage door, or you can program an alarm if your dog gets on the couch,” said AWS CEO Andy Jassy onstage.
DeepLens includes a 4-megapixel camera for 1080p video and two microphones, as well as 8GB of RAM and 16GB of storage space for videos, pre-trained models, and code.
DeepLens made its debut Wednesday together with a barrage of computer vision and natural language processing services from AWS to help developers deliver AI services in the cloud.
" |
16,982 | 2,018 | "Amazon unveils AWS Inferentia chip for AI deployment | VentureBeat" | "https://venturebeat.com/2018/11/28/amazon-unveils-aws-inferentia-chip-for-ai-deployment" | "Amazon unveils AWS Inferentia chip for AI deployment
Amazon today announced Inferentia, a chip designed by AWS especially for the deployment of large AI models with GPUs, that’s due out next year.
Inferentia will work with major frameworks like TensorFlow and PyTorch and is compatible with EC2 instance types and Amazon’s machine learning service SageMaker.
“You’ll be able to have on each of those chips hundreds of TOPS; you can band them together to get thousands of TOPS if you want,” AWS CEO Andy Jassy said onstage today at the annual re:Invent conference.
Inferentia will also work with Elastic Inference, a way to accelerate deployment of AI with GPU chips that was also announced today.
Elastic Inference works with a range of 1 to 32 teraflops of compute. Inferentia detects when a major framework is being used with an EC2 instance, and then looks at which parts of the neural network would benefit most from acceleration; it then moves those portions to Elastic Inference to improve efficiency.
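In practice, attaching an Elastic Inference accelerator is a deployment-time option. The sketch below shows the general shape using the SageMaker Python SDK; the exact import path and arguments vary by SDK version, and the S3 artifact, IAM role, and instance and accelerator sizes are placeholders.

```python
# Sketch: attaching an Elastic Inference accelerator when deploying a model with
# the SageMaker Python SDK (v1-era API shown). All identifiers are placeholders.
from sagemaker.tensorflow.serving import Model

model = Model(
    model_data="s3://my-bucket/model.tar.gz",              # placeholder artifact
    role="arn:aws:iam::123456789012:role/SageMakerRole",   # placeholder role
)

predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.xlarge",       # CPU host instance
    accelerator_type="ml.eia1.medium",  # Elastic Inference accelerator attached to it
)
```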
The two major processes for what it requires to launch AI models today are training and inference, and inference eats up nearly 90 percent of costs, Jassy said.
“We think that the cost of operation on top of the 75 percent savings you can get with Elastic Inference, if you layer Inferentia on top of it, that’s another 10x improvement in costs, so this is a big game changer, these two launches across inference for our customers,” he said.
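Read literally, those figures imply the following back-of-the-envelope math (illustrative only; the baseline spend is arbitrary):

```python
# Back-of-the-envelope reading of the figures quoted above (illustrative only).
total_cost = 100.0                  # arbitrary baseline ML spend
inference_cost = 0.90 * total_cost  # "inference eats up nearly 90 percent of costs"

after_elastic = inference_cost * (1 - 0.75)  # 75% savings claimed for Elastic Inference
after_inferentia = after_elastic / 10        # a further "10x improvement" with Inferentia

print(inference_cost, after_elastic, after_inferentia)  # 90.0 22.5 2.25
```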
The release of Inferentia follows the debut Monday of a chip by AWS purpose-built to carry out generalized workflows.
The debut of Inferentia and Elastic Inference was one of several AI-related announcements made today. Also announced today: the launch of an AWS marketplace for developers to sell their AI models, and the introduction of the DeepRacer League and AWS DeepRacer car , which runs on AI models trained using reinforcement learning in a simulated environment.
A number of services that require no prior knowledge of how to build or train AI models were also made available in preview today, including Textract for extracting text from documents, Personalize for customer recommendations, and Amazon Forecast, a service that generates private forecasting models.
" |
16,983 | 2,019 | "Amazon unveils Echo Studio with subwoofer and 3D Dolby sound | VentureBeat" | "https://venturebeat.com/2019/09/25/amazon-unveils-echo-studio-with-subwoofer-and-3d-dolby-sound" | "Amazon unveils Echo Studio with subwoofer and 3D Dolby sound
Amazon today unveiled the Echo Studio, a smart speaker that works with the Alexa AI assistant and includes 3D Dolby Atmos sound, which it optimizes based on measurements of the room. The Echo Studio includes three mid-range speakers — left, right, and top — plus a directional tweeter and a 5-inch subwoofer for bass and bigger sound. Users can also stream sound from Amazon Fire video content to Echo Studio speakers.
The Echo Studio is available for preorder today and costs $199. Also introduced today: an Echo Dot with a clock, a new line of Echo speakers, and Echo Show 8. The news was announced today at an event at Amazon headquarters in Seattle.
The Echo Studio, announced in an invite-only event in Seattle at Amazon headquarters today, follows the introduction of Echo Amp and Echo Sub to enhance existing Echo speakers one year ago.
Echo Studio will give Amazon a device to compete with Google Home Max and a range of flexible smart speakers introduced this year from high-end audio companies like Bose and Sonos that can speak with either Alexa or Google Assistant. Unlike those more flexible devices, Echo Studio will speak exclusively with Alexa and Microsoft’s Cortana.
On Tuesday, Amazon and a group of dozens of other companies, including Salesforce, Bose, Sonos, and Microsoft, created the Voice Interoperability Initiative for a multi-assistant world.
At the same event last year, Amazon introduced new Echo Dot and Echo Plus smart speakers and a second-generation Echo Show smart display.
" |
16,984 | 2,019 | "Amazon Sidewalk's success is anything but assured | VentureBeat" | "https://venturebeat.com/2019/09/26/amazon-sidewalks-success-is-anything-but-assured" | "Amazon Sidewalk’s success is anything but assured
Amazon announced a new protocol for internet of things devices at an event in Seattle. It's called Sidewalk.
Amazon unveiled a metric ton of new products this week during an event in Seattle, including a beefed-up Echo and a pair of Alexa-imbued smart glasses.
But the practical implications of perhaps the most intriguing reveal — that of Amazon Sidewalk — won’t be borne out for months, and possibly years. If ever.
Amazon SVP of devices and service Dave Limp described Sidewalk as a low-cost, low-power, and low-bandwidth wireless protocol that uses the 900MHz frequency band to link smart lightbulbs, sensors, and other internet of things (IoT) devices together. It’s capable of securely sending data to devices up to a mile away, and it natively supports features like secure over-the-air (OTA) updates, as well as limited geolocation via triangulation.
Amazon said Sidewalk has already been tested by “hundreds” of Ring employees in the L.A. Basin, and that the first Sidewalk-enabled device — a dog-tracking collar accessory called Fetch — will launch early next year. But Sidewalk faces formidable barriers to adoption and deployment, chief among them a lack of manufacturers on board.
Show me the devices It’s no mistake Amazon settled on 900MHz — also known as the 33-centimeter band — as its radio spectrum of choice for Sidewalk. While signal propagation on it depends on the transmitting and receiving antenna’s line of sight, its small wavelength enables it to penetrate buildings more easily than low-frequency bands.
Moreover, it has regulatory backing. The European Commission signaled its intention last year to designate the 900MHz band as the default for applications related to smart cities, smart homes, smart farming logistics, transport, and industrial production. And back in 2017, the FCC opened a Notice of Inquiry examining the 900MHz band for low-bandwidth IoT use cases.
Given Ring’s involvement with the preliminary Sidewalk rollout, it’s a safe bet that Amazon’s gunning for coast-to-coast connectivity complemented by Sidewalk-powered Ring doorbells, security cameras, outdoor lights, and sensors. But with history as a guide, when it comes to new standards — even those well-documented and made available in open source — ubiquity is anything but guaranteed.
Consider Thread, an IPv6-based, low-power mesh networking technology for IoT products intended to be secure and future-proof. Even despite its technological superiority and backing from brands like Google, Samsung, Arm, Qualcomm, Silicon Labs, OSRAM, Tyco International, and most recently Apple, it’s failed to make headway against incumbent standards like Bluetooth LE Mesh, Zigbee, and Z-Wave.
HaLow (802.11ah), a specification spearheaded by the Wi-Fi Alliance, the nonprofit organization that certifies products for conformity to interoperability standards, suffered much the same fate. It was originally conceived as an extension of the Wi-Fi suite of standards into the resource-constrained world of battery-powered products like sensors and wearables, but that vision didn’t come to pass. A year after the certification program kicked off, and three years since its 2016 CES debut, support is practically nonexistent save forthcoming chips from startups Morse Micro, Newracom, and Radiata.
Too many standards Absent documentation, it’s difficult to suss out Sidewalk’s technical novelty. But whatever the case turns out to be, it joins an overabundance of low-power wide-area network (LPWAN) technologies targeting the exact challenges Amazon outlined in its keynote.
For example, the Nb-Fi Protocol, which operates in the unlicensed ISM radio band, can transmit data to devices up to 30 kilometers away and theoretically ensures up to 10 years of battery life.
As for LoRa, it leverages sub-gigahertz radio bands to beam packets from end-nodes to multiple gateways up to kilometers away. (A cloud-based networking layer — LoRaWAN — performs security checks and manages the network, and then forwards data to application servers.) The LoRa Alliance already counts 500 companies among its membership, including heavy hitters like IBM, Orange, and Cisco. And as of the end of 2018, there were over 100 LoRaWAN network operators in more than 100 countries.
French global network operator Sigfox is another competitor vying for a slice of the IoT connectivity pie. Its ultra-narrowband network can pass through solid objects, and it already covers over 5 million square kilometers in 65 countries.
There’s also Dash7, an open source wireless sensor protocol funded in part by the Department of Defense that provides multi-year battery life, a range of up to 2 kilometers, encryption, and support for thousands of base stations. The alliance charged with developing and maintaining it (the Dash7 Alliance) has more than 50 members.
Not to be outdone, the comparable Weightless standard (which has a 10-kilometer range) boasts over 100 participating companies hailing from over 40 countries, the bulk of which manufacture industrial and medical equipment, smart electric meters, health monitors, and vehicle-tracking sensors.
The 3rd Generation Partnership Project (3GPP), a standards organization that develops protocols for mobile telephony, put forth its own solutions in Narrowband IoT (which focuses on indoor coverage and long battery life) and Cat-M (a machine-to-machine and IoT applications solution). Both use a subset of the LTE standard, and subsequently, there’s typically a fee associated with their usage. But that hasn’t prevented hundreds of telecoms from deploying and launching them to date.
Playing the long game So what’s Amazon’s endgame with Sidewalk, given the stiff competition it faces out of the gate? Paid services atop Sidewalk certainly seem likely. At this week’s event, Amazon touted the spec’s support for background OTA updates, which appear to be akin to features already offered through platforms like Microsoft’s Azure IoT Hub, Google Cloud IoT, and Amazon’s own AWS IoT.
Assuming forthcoming products like the Fetch aren’t cross-compatible with, say, Zigbee or Bluetooth, Amazon stands to profit enormously from Sidewalk ecosystem lock-in. It will presumably be the first to offer a software-as-a-service (SaaS) complement to the hardware, and there’s nothing preventing it from baking in licensable features and functionality over time.
On the device side of the equation, Amazon might exact fees from manufacturers who participate in some sort of certification process, much like what Apple demands from MFi participants.
And perhaps someday, it might launch a fully managed service wherein OEMs pay for the privilege of outsourced device maintenance, troubleshooting, and upgrades.
Although Sidewalk’s motivations are rather obvious, its success is by no means a sure thing — particularly considering that it will require consumers to purchase new base stations and smart devices. It’s early days, of course, and much about Sidewalk remains shrouded in mystery. But what is certain is that Amazon has its work cut out for it.
" |
16,985 | 2,019 | "Google's Coral AI edge hardware launches out of beta | VentureBeat" | "https://venturebeat.com/2019/10/22/googles-coral-ai-edge-hardware-launches-out-of-beta" | "Google’s Coral AI edge hardware launches out of beta
Google's Coral Camera Module, Dev Board, and USB Accelerator.
Last March, Google took the wraps off of Coral, a collection of hardware development kits and accessories intended to bolster the development of machine learning models at the edge. It launched in select regions in beta , but the tech giant today announced that it’s graduating to a “wider” and global release.
All Coral products — including the $150 Coral Dev Board, the $74.99 Coral USB Accelerator, and the $24.99 5-megapixel camera accessory — are available for sale at electronics retailer Mouser and for large-volume sale through Google’s sales team. The company says that by the end of the year, it’ll expand distribution into new markets including Taiwan, Australia, New Zealand, India, Thailand, Singapore, Oman, Ghana, and the Philippines.
Coinciding with Coral’s general availability, the Coral website — which now lives at Coral.ai — has been revamped with better organization for docs and tools, testimonials, and “industry-focused” pages. Additionally, it links to a new set of examples aimed at providing solutions to common AI problems, such as image classification, object detection, pose estimation, and keyword spotting.
Lastly, Google says it’ll soon release a new version of the Mendel operating system that updates the system to the latest version of Debian (Buster), and it says it’s hard at work on updates to the Edge TPU compiler and runtime that’ll improve the model development workflow.
“We’ve received a lot of feedback over the past six months and used it to improve our platform,” wrote Coral product manager Vikram Tank. “Coral is also at the core of new applications of local AI in industries ranging from agriculture to health care to manufacturing … [It’s] already delivering impact across industries, and several of our partners are including Coral in products that require fast ML inferencing at the edge.” For the uninitiated, the Coral Dev Board is a miniature computer featuring a removable system-on-module with one of its custom tensor processing unit (TPU) AI chips. As for the Coral USB Accelerator, it’s a USB dongle designed to speed up machine learning inference on existing Raspberry Pi and Linux systems.
TPUs are application-specific integrated circuits (ASICs) developed specifically for neural network machine learning. The first-generation design was announced in May 2016 at Google I/O, and the newest — the third generation — was detailed in May of last year.
The TPU inside the Coral Dev Board — the Edge TPU — is capable of “concurrently execut[ing]” deep feed-forward neural networks (such as convolutional networks) on high-resolution video at 30 frames per second, Google says, or a single model like MobileNet V2 at over 100 frames per second. It sends and receives data over PCIe and USB, and it taps the Google Cloud IoT Edge software stack for data management and processing.
Edge TPUs aren’t quite like the chips that accelerate algorithms in Google’s data centers — those TPUs are liquid-cooled and designed to slot into server racks, and have been used internally to power products like Google Photos, Google Cloud Vision API calls, and Google Search results. Edge TPUs, on the other hand, which measure about a fourth of a penny in size, handle calculations offline and locally, supplementing traditional microcontrollers and sensors. Moreover, they don’t train machine learning models. Instead, they run inference (prediction) with a lightweight, low-overhead version of TensorFlow that’s more power-efficient than the full-stack framework: TensorFlow Lite.
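For reference, this is roughly what on-device inference looks like with a quantized, Edge TPU-compiled TensorFlow Lite model, using the tflite_runtime interpreter with Coral's Edge TPU delegate. The model file and dummy input below are placeholders.

```python
# Sketch: running a quantized, Edge TPU-compiled TensorFlow Lite model with the
# tflite_runtime interpreter. The model path and input data are placeholders.
import numpy as np
import tflite_runtime.interpreter as tflite

interpreter = tflite.Interpreter(
    model_path="mobilenet_v2_edgetpu.tflite",                    # placeholder model
    experimental_delegates=[tflite.load_delegate("libedgetpu.so.1")],
)
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Dummy uint8 image matching the model's expected input shape.
dummy = np.zeros(input_details[0]["shape"], dtype=np.uint8)
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()
scores = interpreter.get_tensor(output_details[0]["index"])
print(scores.argmax())  # index of the top class
```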
Toward that end, the Dev Board, which runs a derivative of Linux dubbed Mendel, spins up compiled and quantized TensorFlow Lite models with the aid of a quad-core NXP i.MX 8M system-on-chip paired with integrated GC7000 Lite Graphics, 1GB of LPDDR4 RAM, and 8GB of eMMC storage (expandable via microSD slot). It boasts a wireless chip that supports Wi-Fi 802.11b/g/n/ac 2.4/5GHz and Bluetooth 4.1, a 3.5mm audio jack, and a full-size HDMI 2.0a port, plus USB 2.0 and 3.0 ports, a 40-pin GPIO expansion header, and a Gigabit Ethernet port.
The Coral USB Accelerator similarly packs an Edge TPU and works at USB 2.0 speeds with any 64-bit Arm or x86 platform supported by Debian Linux. In contrast to the Dev Board, it’s got a 32-bit Arm Cortex-M0+ microprocessor running at 32MHz accompanied by 16KB of flash and 2KB of RAM.
On the subject of the camera, which is manufactured by Omnivision, it has a 1.4-micrometer sensor with an 84-degree field of view, 1/4-inch optical size, and 2.5mm focal length, and it connects to the Dev Board over a dual-lane MIPI interface. In addition to automatic exposure control, white balance, band filter, and blacklevel calibration, it features adjustable color saturation, hue, gamma, sharpness, lens correction, pixel canceling, and noise canceling.
" |
16,986 | 2,018 | "Apple Watch Series 4 can detect falls, take ECGs, and lead you through breathing exercises | VentureBeat" | "https://venturebeat.com/2018/09/12/apple-watch-series-4-can-detect-falls-take-ecgs-and-lead-you-through-breathing-exercises" | "Apple Watch Series 4 can detect falls, take ECGs, and lead you through breathing exercises
Taking an ECG with the Apple Watch Series 4.
Apple’s health initiatives date back to 2014, which marked the launch of its HealthKit and the Health app for iOS. Since then, it’s partnered with institutions including the Mayo Clinic and Johns Hopkins through its ResearchKit platform to conduct large-scale studies using data collected from iOS device users.
Today, it gave an update on a few of those and other health efforts during an event in Cupertino, California.
The new Apple Watch Series 4 shows nutritional information at a glance. A watchface — Breathe — guides you through one of three breathing exercises. And thanks to next-gen accelerometer and gyroscope units that measure forces of up to 32 g, it can automatically detect when you fall.
“One of the most common injuries is falls,” said Apple chief operating officer Jeff Williams.
The Apple Watch Series 4 delivers an alert when it detects you’ve made contact with the ground, and surfaces a one-tap notification that can place a call to emergency services. If you’re immobile for more than one minute, it automatically calls 911 and sends a message containing your location to loved ones via Apple’s SOS feature.
The Apple Watch Series 4 also has three new heart monitoring features.
One sends a notification if your heart rate is determined to be too low. The second screens for heart rhythm irregularities that appear to be atrial fibrillation (AFib), which is caused by poor blood flow and can increase the risk of cardiovascular conditions such as stroke and heart attack. (With watchOS 5, it intermittently analyzes heart rhythms in the background.) And the third allows you to record electrocardiograms (ECGs) with the Watch’s crown and sapphire glass back.
Recordings take less than 30 seconds and are saved in PDF format for easy exporting. Apple claims the Apple Watch Series 4 is the first ECG product offered over the counter directly to consumers and the first of its kind to receive clearance (De Novo classification) from the U.S. Food and Drug Administration. (It’s worth noting that AliveCor’s EKG-reading KardiaBand was cleared by the FDA in 2017.) It’s not without caveats. ECGs aren’t meant to be taken by people younger than 22 or those who have known heart conditions, the FDA noted in a summary document , and it warns that the Apple Watch Series 4 “is not intended to provide a notification on every episode of irregular rhythm suggestive of AFib and the absence of a notification is not intended to indicate no disease process is present.” Still, FDA chief Scott Gottlieb expressed enthusiasm about the new feature.
“The FDA worked closely with the company as they developed and tested these software products, which may help millions of users identify health concerns more quickly,” he said in a statement.
The Apple Watch Series 4 retains a photoplethysmography (PPG) sensor, which estimates beats per minute (BPM) by measuring the rate at which green light is absorbed by the blood flowing through your wrist. But ECGs are considered to be more accurate.
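As a rough illustration of the PPG idea described above, the sketch below simply counts pulse peaks in a reflected-light signal to estimate beats per minute. Real wrist-worn pipelines add filtering and motion-artifact rejection; the sampling rate, prominence threshold, and synthetic signal here are assumptions.

```python
import numpy as np
from scipy.signal import find_peaks

def estimate_bpm(ppg, fs=50):
    """Rough BPM estimate from a PPG trace sampled at `fs` Hz by counting pulse peaks."""
    ppg = np.asarray(ppg, dtype=float)
    ppg = ppg - ppg.mean()
    # Require peaks to be at least 0.3 s apart (i.e., below 200 BPM).
    peaks, _ = find_peaks(ppg, distance=int(0.3 * fs), prominence=ppg.std())
    duration_min = len(ppg) / fs / 60.0
    return len(peaks) / duration_min if duration_min > 0 else float("nan")

# Example with a synthetic 70 BPM pulse train:
t = np.arange(0, 30, 1 / 50)
print(round(estimate_bpm(np.sin(2 * np.pi * (70 / 60) * t), fs=50)))  # ~70
```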
As ever, health data collected by the Apple Watch is encrypted on-device and in the cloud. AFib detection and ECG will be available later this year in the U.S., and roll out to other regions in the coming months.
Today’s announcements follow on the heels of Apple Watch-related health news in June, at Apple’s 2018 Worldwide Developers Conference. The Cupertino company demoed new workout types, improved activity tracking, and automatic workout detection in watchOS 5.
One thing that wasn’t announced today: a new in-house chip for biometrics. In August, CNBC uncovered job postings from Apple’s Health Sensing hardware team that hinted at a processor for health data management. If today’s event is any indication, the chip — if there is such a chip — won’t make it into this year’s Apple Watch.
HealthKit, at its core, is a repository for health data — metrics like weight, steps, blood pressure, and heart rate. One of Apple’s earliest HealthKit partners was Nike, which worked to integrate its health and fitness apps with the platform.
In November 2017, Apple partnered with Stanford on a study that sought to detect AFib in patients using the Apple Watch’s heart rate sensor. The study invited wearers who may have an irregular heart rate to attend a free medical consultation and be fitted with an ECG patch for continued monitoring.
“Through the Apple Heart Study, Stanford Medicine faculty will explore how technology like Apple Watch’s heart rate sensor can help usher in a new era of proactive health care central to our Precision Health approach,” Lloyd Minor, dean of Stanford University School of Medicine, said at the time.
iOS 11.3 brought with it an electronic health record initiative (via an updated Health app) that attempts to unify medical data from multiple hospitals, clinics, and providers in one place. (Those on board included Johns Hopkins Medicine, Cedars-Sinai, Penn Medicine, and more than 70 other institutions.) The new and improved Health app shows notifications when data about allergies, conditions, immunizations, lab results, medications, procedures, and vitals are updated, and adheres to FHIR (Fast Healthcare Interoperability Resources), a standard for transferring electronic medical records.
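For readers unfamiliar with FHIR, here is a minimal, illustrative Observation resource of the kind such record exchanges carry for a single heart-rate reading. Real records also reference the patient, device, and category, and the exact profile depends on the provider; the timestamp and value below are made up.

```python
import json

# Minimal FHIR-style Observation for a heart-rate reading (illustrative only;
# production records carry patient/device references, categories, and profiles).
heart_rate_observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {
        "coding": [{"system": "http://loinc.org", "code": "8867-4", "display": "Heart rate"}]
    },
    "valueQuantity": {
        "value": 72,
        "unit": "beats/minute",
        "system": "http://unitsofmeasure.org",
        "code": "/min",
    },
    "effectiveDateTime": "2018-09-12T10:15:00Z",
}

print(json.dumps(heart_rate_observation, indent=2))
```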
" |
16,987 | 2,019 | "Google rolls out updates to AI Platform Prediction and AI Platform Training | VentureBeat" | "https://venturebeat.com/2019/10/25/google-rolls-out-updates-to-ai-platform-prediction-and-ai-platform-training" | "Google rolls out updates to AI Platform Prediction and AI Platform Training
Google’s AI Platform, a cloud-hosted service facilitating machine learning and data science workflows, today gained support for serving models on backend machine types that tap powerful Nvidia graphics chips. In related news, Google debuted a refreshed model training experience that allows users to run a training script on a wider range of hardware.
For the uninitiated, AI Platform enables developers to prep, build, run, and share machine learning models quickly and easily in the cloud. Using built-in data labeling services, they’re able to annotate model training images, videos, audio, and text corpora by applying classification, object detection, and entity extraction. A managed Jupyter Notebook service provides support for a slew of machine learning frameworks, including Google’s TensorFlow, while a dashboard within the Google Cloud Platform console exposes controls for managing, experimenting with, and deploying models in the cloud or on-premises.
Now, AI Platform Prediction — the component of AI Platform that enables model serving for online predictions in a serverless environment — lets developers choose from a set of machine types in Google’s Compute Engine service to run a model. Thanks to a new backend built on Google Kubernetes Engine, they’re able to add graphics chips like Nvidia’s T4 and have AI Platform Prediction handle provisioning, scaling, and serving. (Online Prediction previously only allowed you to choose from one or four vCPU machine types.) Additionally, prediction requests and responses can now be logged to Google’s BigQuery, where they can be analyzed to detect skew and outliers.
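A sketch of what deploying a model version on one of these machine types might look like through the AI Platform REST API, using the google-api-python-client library. The project, model, and bucket names are placeholders, and the exact field names and runtime version should be checked against Google's current documentation.

```python
from googleapiclient import discovery

# Sketch: deploy a model version on a Compute Engine machine type with an attached
# T4 GPU, per the feature described above. Placeholders: project, model, bucket.
ml = discovery.build("ml", "v1")

version_body = {
    "name": "v_gpu",
    "deploymentUri": "gs://my-bucket/exported-model/",   # placeholder path to a SavedModel
    "runtimeVersion": "1.15",
    "framework": "TENSORFLOW",
    "pythonVersion": "3.7",
    "machineType": "n1-standard-4",                       # Compute Engine machine type
    "acceleratorConfig": {"count": 1, "type": "NVIDIA_TESLA_T4"},
}

request = ml.projects().models().versions().create(
    parent="projects/my-project/models/my_model", body=version_body
)
print(request.execute())
```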
As for AI Platform Training — which allows data scientists to run a training script on a variety of hardware, without having to manage the underlying machines — it now supports custom containers, letting researchers launch any Docker container so that they can train a model with any language, framework, or dependencies. Furthermore, AI Platform Training gained Compute Engine machine types for training, which allows for the piecemeal selection of any combination of CPUs, RAM, and accelerators.
“Cloud AI Platform simplifies training and deploying models, letting you focus on using AI to solve your most challenging issues … From optimizing mobile games to detecting diseases to 3D modeling houses, businesses are constantly finding new, creative uses for machine learning,” wrote Cloud AI Platform product manager Henry Tappen in a blog post. “With more inference hardware and training software choices, we look forward to seeing what challenges you use AI to tackle in the future.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
" |
16,988 | 2,018 | "Google Lens in Pixel's Assistant can now recognize famous people | VentureBeat" | "https://venturebeat.com/2018/03/09/google-lens-in-pixels-assistant-can-now-recognize-famous-people" | "Google Lens in Pixel’s Assistant can now recognize famous people
Google Lens in Pixel smartphones is now able to recognize photos and depictions of well-known people like actresses, celebrities, and politicians.
It’s not clear when the ability to recognize celebrities was added to Lens in Google Assistant. The feature was identified during Lens tests conducted by VentureBeat this week. Google did not respond to further questions about the sorts of people Lens is able to recognize.
The computer vision-powered service first became available last November as a premier feature for Pixel smartphones. Lens in Assistant on a Pixel smartphone still appears to provide the most advanced version of Google’s visual search tool, even as the company expands its availability beyond its flagship smartphone to all Android smartphones.
Two weeks ago, Google announced plans to bring Lens to Google Photos for Android and iOS as well as Assistant for Android.
On Tuesday, Lens was added to Google Photos for Android devices , but has yet to show up in Google Photos for iOS or Assistant in Android phones (Pixel aside, of course). In the Photos app, Lens brings the ability to scrape text from business cards to create new contacts on Android phones, plus recognize works of art, landmarks, text on billboards, plants, and animals.
Bringing Lens to more Android smartphones leads us to ask whether the visual search tool will work the same on all devices. Lens in the Photos app and Lens in Assistant already offer different experiences.
For example, if you snap a photo of a celebrity, like comedian Dave Chappelle, and scan that photo with Lens in the Google Photos app, a prompt will appear that says “Lens doesn’t recognize people.” Do the same with Lens in Google Assistant with a Pixel 2 and you will be shown suggested search results related to Chappelle’s movie career and former TV show, and even his net worth.
The experience of using Lens in the Photos app versus Lens in Pixel’s Assistant is innately different — Photos scans images of pictures you already took, while Lens in Assistant operates with the camera lens open so it only requires a tap on your screen. Lens in Assistant can also be tasked with remembering visual searches and translating text.
Lens in Pixel’s Assistant started out with similar features as those now available in Photos, but can now recognize when it sees currency denominations and objects like street signs or window curtains. Lens in Assistant is also able to recognize items of clothing and make some shopping recommendations.
Recognition of things like money and produce is in line with improvements forecast last fall by Assistant engineer Behshad Behzadi at Google Developer Days Europe.
A Google spokesperson did not answer questions about Assistant’s ability to recognize people or Lens capabilities beyond primary use cases introduced last fall for Pixel smartphones and earlier this week for the Photos app.
After scanning the face of a celebrity, suggested replies appear below their name and vary depending on the individual in question, though virtually all celebrities’ search results included a net worth suggestion. A Google spokesperson did not answer questions about how Google Assistant determines suggested replies for each celebrity.
Though Lens often gets things right, some false results can be amusing. An attempt to identify a cat from a distance brought back results like caterpillar, hamster, and chinchilla.
Google is extending the power of its computer vision-powered Lens at a time when other major mobile providers are also increasing their computer vision chops.
Samsung’s Galaxy S9 and Galaxy S9+ and LG V30S ThinQ made their debuts last month at Mobile World Congress, and both focused on changes that make their device’s camera capable of deploying computer vision. Samsung began to use visual search from companies like Amazon and Pinterest in the Galaxy S8 last year.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
" |
16,989 | 2,017 | "Big data platform Dataiku raises $28 million for international expansion | VentureBeat" | "https://venturebeat.com/2017/09/06/big-data-platform-dataiku-raises-28-million-for-international-expansion" | "Big data platform Dataiku raises $28 million for international expansion
Dataiku has scored its second round of funding, a $28 million investment led by Battery Ventures that the French company says will be used to increase hiring and marketing around the world.
The funding follows a first round of $14 million raised in 2016 that was led by returning investor FirstMark. Serena Capital and Alven also returned for this round, after having originally backed Dataiku with a $3.6 million round of seed funding in 2015.
Founded in 2013 by a group of French entrepreneurs, the company has since moved its headquarters to New York City, though the bulk of its development team remains in Paris.
Dataiku has created a cloud-based platform designed to make the huge amounts of data being gathered by companies more accessible to both data scientists and non-engineers. The idea is to simplify the interface around the data gathered by companies to allow more team members to analyze and dissect it.
In addition, the platform uses machine learning to help users tap into the insights that are often promised by big data but that can prove elusive, according to Dataiku cofounder and CEO Florian Douetteau.
“We have significant customers on both sides of the Atlantic,” Douetteau said. “And we have a product that has the potential to be one of the leaders in this software category. ” The company now has about 100 employees and plans to use the new funding to double that headcount. In addition to New York and Paris, Dataiku also now has an office in London.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
" |
16,990 | 2,018 | "Data analytics startup Incorta raises $15 million from Microsoft's M12 and Telstra Ventures | VentureBeat" | "https://venturebeat.com/2018/10/18/data-analytics-startup-incorta-raises-15-million-from-microsofts-m12-and-telstra-ventures" | "Data analytics startup Incorta raises $15 million from Microsoft’s M12 and Telstra Ventures Incorta Homepage
Data analytics software startup Incorta has raised $15 million in a series B extension round of funding from Microsoft’s venture capital (VC) arm M12 and Telstra Ventures, which closes the round at a total of $30 million.
This latest investment tranche comes a year after Incorta first announced a $15 million series B funding round , with investors including Kleiner Perkins and Alphabet’s VC division GV. And a year previous to that, GV led a $10 million series A round into the San Mateo-based startup. So with this news, it’s interesting to see both Microsoft and Google’s parent company now invested in the same startup.
The funding announcement was also timed to coincide with the company’s Fall ’18 Incorta platform release.
Hyperconverged: Founded in 2013, Incorta’s hyperconverged analytics platform helps companies garner insights from crunching vast swathes of data from across their myriad cloud-based applications. Incorta meshes ETL (extract, transform, load), a common process for transferring data from its source to a data warehouse, with business intelligence (BI), data warehousing, and visualization tools.
Incorta’s core underlying promise is that it can help IT departments deliver new reports “within minutes as opposed to weeks,” according to the company.
“Business agility is severely hamstrung when queries take hours to respond despite running the analytical and reporting applications on very expensive and highly engineered appliances,” noted Incorta CEO Osama Elkady. “Within weeks, our customers are able to replace these appliances with the Incorta platform that scales horizontally on commodity hardware, on-premises or in the cloud, while delivering orders of magnitude faster query response times.” Big business The global cloud data warehouse market is expected to emerge as a $20 billion industry by 2020, up 40 percent from the $14 billion it was reportedly worth last year, according to IDC. And this, coupled with a growing demand for broader business intelligence services, is why we’re seeing a spike in investments in the space.
A few weeks back, San Mateo-based data warehousing company Snowflake Computing raised a chunky $450 million in funding at a $3.5 billion valuation. And a few months back, Palo Alto-based Yellowbrick Data emerged from stealth with $44 million in funding from big-name investors including GV.
Incorta’s clients include Broadcom, Henkel, Wadi Group, and a host of other Fortune 100 companies.
“Incorta’s approach to analytics fundamentally changes how quickly data is turned into insights at massive scale,” added M12 partner Rashmi Gopinath. “We’ve invested in Incorta because of the amazing list of Fortune 100 companies that have bet big by deploying Incorta technology in their most strategic data initiatives.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
" |
16,991 | 2,019 | "Unravel raises $35 million to manage and optimize big data apps | VentureBeat" | "https://venturebeat.com/2019/05/14/unravel-raises-35-million-to-manage-and-optimize-big-data-apps" | "Unravel raises $35 million to manage and optimize big data apps
Big data , which in most contexts refers to large data sets that are analyzed algorithmically to reveal patterns and relationships, is marching steadily toward ubiquity. According to a report published last year by Dresner Advisory Services , big data adoption in enterprise steeply climbed from 17% in 2015 to 59% in 2018. Concurrent with this trend, global market revenue projections have inched upward, with firms like Wikibon anticipating an increase from $42 billion last year to $103 billion in 2027.
Just because big data apps and systems are popular doesn’t mean they’re easy to manage, though. That’s where Unravel comes in. The Palo Alto, California-based company — the brainchild of CEO Kunal Agarwal, a Sun Microsystems veteran, and Duke University computer science professor Shivnath Babu — offers a full-stack data operations platform that addresses everything from data ingestion and migration to processing and transforming. It’s an eminently successful one — annual recurring revenue grew 500% year-over-year in 2018 — and it’s poised to expand substantially in the coming year.
Unravel today announced that it’s raised $35 million in an oversubscribed series C financing led by Point72 Ventures, with participation from Harmony Partners, Menlo Ventures, GGV Capital, and M12 (Microsoft Ventures). The cash infusion comes after a $15 million series B round in January 2018 and a $7 million series B round in September 2016, and it brings Unravel’s total raised to $37.2 million.
“Every business is becoming a data business, and companies are relying on their data applications such as machine learning, [internet of things], and customer analytics for better business outcomes using technologies such as Spark, Kafka, and NoSQL,” said Agarwal. “We are making sure that these technologies are easy to operate and are high performing so that businesses can depend on them. We partner with our customers through their data journey and help them successfully run data apps on various systems whether on-premises or in the cloud.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Above: A graphic illustrating how Unravel monitors and optimizes systems and apps.
Unravel’s endgame, Agarwal explained, is to reduce the complexity of delivering stable and reliable apps for engineers, developers, and network architects alike. Toward that end, the company’s eponymous platform tracks and triages performance issues across customers’ systems and the apps running on these systems, namely by applying optimization libraries and fixes to failures, bottlenecks, and resource-wasting apps.
Its agentless, low-overhead microsensor design — which enables Unravel to be deployed on-premises, in the cloud, or in hybrid cloud environments such as Cloudera, Google Cloud, Oracle DBA, Quoble, Amazon Web Services, Hortonworks, Microsoft’s Azure, Cassandra, Databricks, and Mapr — offers per-cluster and per-node visibility into code, configurations, containers, resource constraints, and schedulers. It maps dependencies among apps, services, storage (both hot and cold), and users in a single dashboard with metrics and visualizations, and it leverages machine learning to provide plain-language configuration recommendations that take into account future app needs.
For instance, in a Spark framework, Unravel keeps tabs on status, duration, data I/O, stages, partitioning, garbage collection, and more, along with lowdowns, failures, killed jobs, and resource consumption. Unravel automatically resolves errors and auto-tunes Spark apps, and it serves up “context-sensitive” suggestions covering topics like RDD caching, CPU resource contention, and container resource utilization. Plus, it shows representations of SQL query plans and insights into how and when they are executed, and streamlines tasks like spawning processes, running queries, moving services to other queues, updating external databases, and killing apps that threaten the performance of other apps.
Perhaps best of all, Unravel boasts a robust API that integrates with workflow engines, messaging systems, and other DevOps solutions like Spark, Kafka, Hadoop, Tez, Slack, Pagerdubut, Apache HBase, and more to deliver proactive alerts about unexpected performance degradations. And for customers transitioning to the cloud, Unravel has an extensive set of app migration tools that identify the best app candidates, reveal the seasonality and the ideal time of day to take advantage of the best services prices, and measure baseline performance pre- and post-move.
“CIOs in our network told us story after story of traditional application monitoring tools failing in a big data context because those tools were designed for the world of the past. And we didn’t just hear this problem from third parties, we were seeing it at Point72 as well,” said Point72 chief market intelligence officer Matthew Granade. “This new architecture requires a different product, one built from the ground up to focus on the unique challenges posed by big data applications. Unravel is poised to capture this emerging big-data APM market.” Unravel’s current customers include Kaiser Permanente, Autodesk, YP.com, Adobe, Deutsche Bank, Wayfair, and Neustar, plus software startups, Fortune 100 financial services, airlines, supermarket retailers, multinational telecom groups and telecom providers, and global banks. Data Elite Ventures and AppDynamics founder Jyoti Bansal are among the company’s previous investors.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
" |
16,992 | 2,019 | "Datameer raises $40 million for data set prep and analysis tools | VentureBeat" | "https://venturebeat.com/2019/10/29/datameer-raises-40-million-for-data-set-prep-and-analysis-tools" | "Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Datameer raises $40 million for data set prep and analysis tools Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Datameer , a decade-old San Francisco-based company developing an end-to-end platform for data prep and analytics lifecycle management, today revealed that it’s secured a $40 million funding round led by ST Telemedia (STT). This latest tranche saw contributions from Redpoint Ventures, Kleiner Perkins, Nextworld Capital, Citi Ventures, and Top Tier Capital Partners, and it brings the company’s total venture capital raised to roughly $140 million following a $14.7 million venture series in April 2017 and a $40 million series E in 2015.
CEO Christian Rodatus said the influx of funds will accelerate the development of Neebo, a software as a service (SaaS) product designed to enable teams to create, use, and share analytics assets. It formally launches today with customers including Deutsche Bank, Scotiabank, BMO, Akbank, CloudCover, Siemens, Anthem BlueCross, Optum, and RBC.
“Neebo virtualizes these complex landscapes, without moving data and other assets, and helps a community of analytics professionals create and find assets, combine them, and publish them to any BI or data science tool,” said Rodatus. “Neebo is a SaaS solution that enables analytics professionals to kickstart a project in minutes and helps them immediately decrease time-to-insight. Further, AI-enabled discovery and blending builds a searchable repository of trusted assets to further boost analyst performance.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Datameer’s premiere schema- and code-free offering, Datameer X, runs on-premises or in the cloud and draws on a collection of over 70 connectors, an SDK, and an API to ingest data from enterprise corpora. It’s able to transform, blend, and enrich that data while filtering out corrupt entries using a spreadsheet-like tool with over 270 prebuilt functions, the results of which it optionally exports to data warehouses and on-premises databases.
Datameer X boasts built-in functions for data extraction, text analytics, and geo-location mapping in addition to de-duping and custom functions coded in Java. It generates an initial representative data sample that’s constantly resampled based on inputs, which together with proprietary algorithms help to identify patterns within data like behavior groupings and relationships.
The suite spotlights data attributes like dependencies, shape, quality, and clusters through visualizations such as histograms, and it supports monitoring and job scheduling on an hourly, daily, and weekly basis. Basic governance features like roles, permissions, and single sign-on are in tow, as well as tracking and reporting capabilities courtesy data lineage and audit trails.
As for Neebo, it similarly connects data sources like business apps, data warehouses, and local storage to imbue files with AI-driven searchability and discoverability. It offers a point-and-click workflow with data combining, cleaning, and appending functions for creating analytic assets that can be shared to business intelligence and data science tools, as well as a distributed query optimizer that ensures high throughput and fast response time to interactive queries.
Datameer pitches its products as robust solutions for marketing campaign optimization, customer segmentation, fraud detection, product optimization, and other such applications across segments such as financial services, retail, telecom, and health care. It claims that Datameer X alone leads to a 25 times boost in engineering efficiency and three times faster analytics cycles on average.
Those bold claims position it favorably against competitors like Dataiku , a startup developing a cloud-based big data analysis platform that’s raised nearly $150 million to date; Unreal , which offers a full-stack data operations platform that addresses everything from data ingestion and migration to processing and transforming; and Incorta , the provider of a hyperconverged analytics platform that pulls in data from various cloud-based applications. It’s anticipated that the big data as a service market will be worth $51.9 billion by 2025, assuming the current trend holds.
“With our focus on hyperscale data centers and solutions that promote workload migration to the cloud, we are thrilled to see a product like Neebo come to market, that will bridge analytics in the on-premises and cloud worlds to drastically increase analyst productivity,” said CEO of STT Stephen Miller. “We are also proud that the Datameer team is innovating at a rapid pace and has already filed five patents for Neebo to bring a highly differentiated product to market.” CloudCover CEO Vishal Parpia, an early Neebo customer, added, “Neebo saves [our customers] a costly migration exercise by unifying disparate data sources without having to copy and duplicate data. It allows us to focus on higher value services and our customers love us for it.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
" |
16,993 | 2,018 | "Google's DeepMind wants AI to spot kidney injuries | VentureBeat" | "https://venturebeat.com/2018/02/22/googles-deepmind-wants-ai-to-spot-kidney-injuries" | "Google’s DeepMind wants AI to spot kidney injuries Meet DeepMind Health.
Google subsidiary DeepMind announced today that it’s working with the U.S. Department of Veterans Affairs to use machine learning in an attempt to predict when patients will deteriorate during a hospital stay.
Deterioration (when a patient’s condition worsens) is a significant issue, since care providers can miss warning signs for potentially lethal conditions that arise as part of other treatment. DeepMind and the VA aim to tackle Acute Kidney Injury (AKI), which, as the name implies, occurs when a person’s kidneys temporarily stop working as well as they should. That can mean kidney failure, or just injury that reduces kidney function. AKI can be fatal if untreated.
DeepMind’s goal is to improve algorithms used to detect AKI so that doctors and nurses can treat patients more quickly. Dominic King, the clinical lead for DeepMind Health, said in a blog post that the two organizations are already familiar with the condition, which is why they’re choosing to tackle it first.
The company will have access to more than 700,000 medical records that have been stripped of any identifying personal details. Using those records, DeepMind and the VA aim to determine whether it’s possible to predict the onset of AKI — and patient deterioration more broadly — using machine learning.
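Neither DeepMind nor the VA has published a model for this work, so the following is only a generic baseline for the kind of task described: a logistic-regression classifier over synthetic stand-ins for de-identified record features. The feature names in the comment are hypothetical, and the data is fabricated purely to make the snippet runnable.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Illustrative baseline only, not DeepMind's method. Features are hypothetical
# stand-ins for de-identified record fields (e.g., creatinine, urea, urine output,
# age, all standardized); the data below is synthetic.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=5000) > 1.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("AUC:", round(roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]), 3))
```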
VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! One of the key things to note here is that it’s possible our current crop of machine learning algorithms, the data available, and other factors may make this a task that can’t ultimately be solved through the application of AI.
DeepMind is also treading on sensitive territory by working with health data. The company ran afoul of regulators last year when the U.K.’s data protection watchdog said a deal the company had struck with that country’s National Health Service to access Britons’ anonymized health records failed to comply with the law.
The VA also had a run-in with data protection concerns. In 2016, the agency canceled a deal with AI startup Flow Health, which was supposed to use veterans’ medical information to predict diseases. That was an abrupt end to what was meant to be a five-year contract empowering the startup to use veterans’ genetic data and medical records.
All of that said, applications of machine learning have shown early promise in the realm of medicine. For example, a Google Brain team showed off its use of computer vision to detect heart disease earlier this week. DeepMind’s work could help mitigate a major cause of hospital deaths.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
" |
16,994 | 2,018 | "DeepMind expands AI cancer research program to Japan | VentureBeat" | "https://venturebeat.com/2018/10/04/deepmind-expands-ai-cancer-research-program-to-japan" | "DeepMind expands AI cancer research program to Japan
DeepMind is furthering its cancer research efforts with a newly announced partnership. Today, the London-based Google subsidiary said it has been given access to mammograms from roughly 30,000 women that were taken at Jikei University Hospital in Tokyo, Japan between 2007 and 2018. It’ll use that data to refine its artificially intelligent (AI) breast cancer detection algorithms.
Over the course of the next five years, DeepMind researchers will review the 30,000 images, along with 3,500 images from magnetic resonance imaging (MRI) scans and historical mammograms provided by the U.K.’s Optimam (an image database of over 80,000 scans extracted from the NHS’ National Breast Screening System), to investigate whether its AI systems can accurately spot signs of cancerous tissue.
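DeepMind has not detailed the architecture behind this work, so the snippet below is only a deliberately small, generic Keras classifier for mammogram patches, meant to illustrate the task of flagging suspicious tissue. The input size and binary labels are assumptions.

```python
import tensorflow as tf

# A small, generic CNN for classifying mammogram patches as suspicious or not.
# This illustrates the task only; it is not DeepMind's (unpublished here) model.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(224, 224, 1)),          # grayscale patch (assumed size)
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),       # probability of suspicious tissue
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])
model.summary()
```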
The collaboration builds on DeepMind’s work with the Cancer Research UK Imperial Center at Imperial College London, where it has already analyzed roughly 7,500 mammograms.
“The involvement of the Jikei University Hospital in a global research partnership will help take us one step closer to developing technology that could ultimately transform care for the millions of people who develop breast cancer around the world every year,” said professor Ara Darzi, director of the Cancer Research UK Imperial Center.
Dominic King, clinical lead of DeepMind Health, noted in a blog post that training DeepMind’s AI system on datasets from other countries would help mitigate algorithmic bias that might inadvertently crop up.
“Bias can occur when you train an AI system on data which doesn’t accurately reflect the people it is being designed for, and it’s a serious problem,” he said. “In the field of mammography … there can be considerable variations in breast density between ethnic groups. Bias in our AI system could therefore result in breast cancers being misidentified or even missed altogether if the technology is not set up to reflect these differences.” DeepMind is involved in several health-related AI projects, including an ongoing trial at the U.S. Department of Veterans Affairs that seeks to predict when patients’ conditions will deteriorate during a hospital stay. Previously, it partnered with the U.K.’s National Health Service to develop an algorithm that could search for early signs of blindness.
And in a paper presented at the Medical Image Computing & Computer Assisted Intervention conference last month, DeepMind researchers said they’d developed an AI system capable of segmenting CT scans with “near-human performance.” They plan to deploy the model, which they say has the potential to reduce the time to diagnosis, in a clinical environment in the next year.
Google has more broadly invested heavily in AI health care applications. This spring, the Mountain View company’s Medical Brain team said they’d created an AI system that could predict the likelihood of hospital readmission and that they had used it in June to forecast mortality rates at two hospitals with 90 percent accuracy.
In February, scientists from Google and Verily Life Sciences, its health-tech subsidiary, created a machine learning network that could accurately deduce basic information about a person, including their age and blood pressure, and whether they were at risk of suffering a major cardiac event like a heart attack.
Verily, a research division within Google parent Alphabet, is also developing automated systems to tackle sleep apnea, pharmaceutical drug discovery, blood collection, and health insurance.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
" |
16,995 | 2,018 | "How messaging, AI, and bots are rescuing customer service | VentureBeat" | "https://venturebeat.com/2018/10/25/how-messaging-ai-and-bots-are-rescuing-customer-service" | "Sponsored How messaging, AI, and bots are rescuing customer service Presented by Helpshift Consumers want messaging-based customer service channels. Companies that transition from voice- and email-based customer support see a 25 percent greater annual growth in revenue — plus an 8.6 percent increase in average profit margin per customer.
For customers, it’s the convenience of using the channel they use most in their daily lives. For agents, it’s the opportunity to work at a higher level on more complex issues while messaging takes care of routine tasks.
And it doesn’t take a team of developers and deep pockets to add messaging to your customer service channels.
It’s time to get customer service right.
Read how right here.
Sponsored posts are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. Content produced by our editorial team is never influenced by advertisers or sponsors in any way. For more information, contact [email protected].
" |
16,996 | 2,019 | "Alexa Presentation Language is now generally available | VentureBeat" | "https://venturebeat.com/2019/09/25/alexa-presentation-language-is-now-generally-available" | "Alexa Presentation Language is now generally available
Following a press event at its Seattle headquarters this morning, Amazon announced general availability of Alexa Presentation Language (APL), the toolset designed to make it easier for developers to create “visually rich” skills for Alexa devices with screens — such as Amazon’s Echo Show, Fire TV, Fire Tablet, and Echo Spot. Its rollout comes after the launch of APL 1.1 in beta in July and coincides with the preview of two new developer tools — skill personalization and the Alexa Web API for Games.
“We believe that the emergence of voice user interfaces isn’t an incremental improvement to existing technology; it marks a significant shift in human-computer interaction,” wrote Alexa Skills Kit senior product manager Arunjeet Singh. “That’s why APL is designed from the ground up for creating voice-first, multimodal Alexa skills.” Alexa Presentation Language: As a refresher, APL — a JSON-based HTML5 language — consists of five core elements: images, text, and lists; layouts, styles, and conditional expressions; speech synchronization; slideshows; and built-in intents by ordinal. App designers can specify text color, size, and weight; make text and images responsive; or use both vertical and horizontal scrollbars to show continuous lists of choices. APL ships with preconfigured headers, footers, and dialog boxes, and app layouts, voice responses, and other visuals can be tailored to the device shapes and types. Also in tow are commands that change the audio or visual presentation of on-screen content, built-in intents that enable selection by ordinal (for example, a user can say “Select the second one” when a list is on-screen), and slideshows of images and other content.
Here’s how it works: Developers create JSON files — “documents,” in APL parlance — that get invoked by Alexa skills and downloaded to target devices. Those devices import images and other data from the document and render the experience.
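To make that concrete, here is a minimal APL document and the RenderDocument directive a skill would return to a screen device. It is simplified (real documents typically add layouts, styles, and data binding, and the skill response also carries the usual speech output), but the field names follow Amazon's published APL format; the token and text are placeholders.

```python
import json

# A minimal APL document plus the directive wrapper a skill response would include.
apl_document = {
    "type": "APL",
    "version": "1.1",
    "mainTemplate": {
        "items": [
            {"type": "Text", "text": "Hello from APL", "fontSize": "50dp", "textAlign": "center"}
        ]
    },
}

directive = {
    "type": "Alexa.Presentation.APL.RenderDocument",
    "token": "helloToken",          # placeholder token used to target later commands
    "document": apl_document,
}

print(json.dumps(directive, indent=2))
```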
Skills using APL include a CNBC stock organizer, Big Sky’s weather forecast app, public transit schedule tracker NextThere, travel app Kayak, and Food Network’s recipe sorter.
Notably, Facebook’s Portal and Portal+ devices incorporate elements of APL for hands-free visual content; their weather forecasts, shopping lists, and calendar events screens were designed with Amazon’s toolkit. And LG and Sony smart TVs and Lenovo tablets support APL through the Alexa Smart Screen and TV Device SDK.
Amazon says that in the coming months it will improve APL with support for shadow effects, noise filters, and play and pause buttons on TV devices.
Personalized Alexa skill experiences: Alongside the latest version of the Alexa Presentation Language, Amazon took the wraps off skill personalization in the Alexa Skills Kit, which enables developers to create personalized skill experiences using voice profiles captured by the Alexa app. Voice apps leveraging skill personalization can address preferences, remember settings, differentiate between household members, and more.
A developer could personalize a game based on who’s playing, for example, or offer a customized exercise routine tailored to individual fitness goals. Moreover, voice-personalized Alexa skills can be combined with app-to-app account linking to help users discover skills, link accounts, and deliver flows unique to their voice.
Skill personalization is available in preview for existing and new skills. Interested developers are required to fill out and submit a brief survey , after which they’ll be notified if they’re selected.
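In practice, a recognized speaker shows up in the skill request as a person object with a personId under context.System, per Amazon's documentation; the sketch below shows how a handler might branch on it. The greeting logic, storage strategy, and IDs are invented for illustration.

```python
# Sketch: branch a skill's behavior on a recognized speaker. When Alexa matches a
# voice profile, context.System carries a "person" entry with a personId; otherwise
# the skill falls back to account-level behavior.
def greeting_for(request_envelope: dict) -> str:
    system = request_envelope.get("context", {}).get("System", {})
    person = system.get("person")
    if person and person.get("personId"):
        # Look up per-person preferences keyed by personId (storage is up to the skill).
        return f"Welcome back! Resuming the routine saved for {person['personId'][:12]}..."
    return "Welcome! Tell me your fitness goal to get started."

# Example envelope with a recognized speaker (the ID is fabricated):
print(greeting_for({"context": {"System": {"person": {"personId": "amzn1.ask.person.EXAMPLE"}}}}))
```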
Alexa Web API for Games: Today also marks the debut of the Alexa Web API for Games, which Amazon describes as a collection of tech and tools for creating visually rich and interactive voice-controlled game experiences. Using the new API, developers can build voice apps using HTML, Canvas 2D, WebAudio, WebGL, JavaScript, and CSS, starting with Echo Show devices and Fire TVs.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
" |
16,997 | 2,019 | "DoD awards TwoSense $2.42 million contract for behavioral biometric authentication | VentureBeat" | "https://venturebeat.com/2019/02/07/dod-awards-twosense-2-42-million-contract-for-behavioral-biometric-authentication" | "DoD awards TwoSense $2.42 million contract for behavioral biometric authentication
Learn more.
Artificial intelligence (AI) seems almost tailor-made for automating mundane office tasks and identifying seizure types, but it might be equally well-suited to security — including biometric security. That’s what startup TwoSense is banking on, and it’s got a government contract to show for it.
The Brooklyn, New York-based company, which was cofounded by Dawud Gordon, John Tanios, and Ulf Blanke in 2014 and has raised $850,000 to date, today announced that it was awarded a $2.42 million contract in October 2018 by the U.S. Department of Defense’s (DoD) Defense Information Systems Agency (DISA). While the particulars remain under wraps, TwoSense revealed that the project is aimed at replacing the DoD’s physical ID cards — the Common Access Card (CAC) — with both traditional and AI-driven biometric authentication, as part of DISA’s wide-ranging Assured Identity Initiative.
“Both DISA and TwoSense believe that continuous authentication is the cornerstone of securing identity,” said Gordon, who serves as CEO. “Behavior-based authentication is invisible to the user; therefore, it can be used continuously without creating any extra work.” TwoSense’s productized AI can authenticate a person from their movements, interactions, and mannerisms, as measured through both smartphones and workstation PCs. (One of the techniques in its system-as-a-service arsenal is ballistocardiography, which graphically represents the muscle movements caused by blood as it’s ejected into vessels by the heart.) TwoSense’s systems also take into account gait, in addition to things like on-body phone location, hand pressure, proximity, and typing cadence.
Machine learning algorithms running in TwoSense’s cloud learn the behavior of each user — how they walk, interact with their phones, commute to work, and (a bit creepily) where they spend their time. It’s sort of like Google’s On-Body Detection, an Android feature that prevents a phone’s lock function from activating when it’s on-person or in-hand — albeit more sophisticated.
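As a rough illustration of how continuous behavioral authentication works in principle, the sketch below builds a per-user baseline from typing-interval timings and flags sessions that drift too far from it. This is not TwoSense's model; the company has not disclosed its features or algorithms, and the statistics and threshold here are assumptions chosen only to show the shape of the enroll-then-score loop.
```python
# Toy continuous-authentication check on one behavioral signal (typing cadence).
# Real systems fuse many signals (gait, device handling, location patterns);
# this sketch only shows the enroll-then-score loop.
from statistics import mean, stdev


def enroll(keystroke_intervals: list[float]) -> tuple[float, float]:
    """Learn a user's baseline: mean and spread of inter-key intervals (seconds)."""
    return mean(keystroke_intervals), stdev(keystroke_intervals)


def score_session(baseline: tuple[float, float], session: list[float]) -> float:
    """Distance of a new session from the baseline, in standard deviations."""
    mu, sigma = baseline
    return abs(mean(session) - mu) / sigma


# Enrollment data for the legitimate user, then two later sessions.
owner_baseline = enroll([0.21, 0.19, 0.22, 0.20, 0.23, 0.18, 0.21])
same_user = [0.20, 0.22, 0.19, 0.21]
impostor = [0.41, 0.38, 0.45, 0.40]

THRESHOLD = 3.0  # assumed cutoff; a real system would tune this per user
for label, session in [("owner", same_user), ("impostor", impostor)]:
    z = score_session(owner_baseline, session)
    verdict = "authenticated" if z < THRESHOLD else "challenge required"
    print(f"{label}: {z:.1f} sigma -> {verdict}")
```
A production system would fuse many such signals and adapt the baseline over time rather than relying on a single fixed threshold.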
A few employees might object to that granularity of observation. But from employers’ perspective, says TwoSense, it’s an attractive alternative to PINs, passwords, and SMS-based forms of two-factor authentication. The company’s biometric approach is continuous, shrinking hackers’ windows of opportunity and eliminating attack vectors. And employees, for their part, are spared the inconvenience of having to fish for an ID card or check their phone for texted two-factor codes — in exchange for a bit of privacy, of course.
TwoSense has a point. Security experts have pointed to weaknesses in SMS-based 2FA, citing the risk of interception by attackers who manage to spoof phone numbers. It’s one of the reasons 28 percent of people have never used two-factor authentication on any device or service, according to a Sophos survey — which is really worrisome, considering that three out of four people use duplicate passwords and 21 percent of people use codes that are over 10 years old.
To be clear, behavioral biometric authentication isn’t a new idea. Startups like Israel-based BioCatch, which recently raised $30 million, and Simility , which was acquired by PayPal in June, leverage AI and hundreds of parameters — such as the way a user moves their cursor and holds their phone — to build a profile for what constitutes “normal” behavior, and subsequently to perform authentication and catch fraudsters in the act.
But TwoSense is betting its particular solution will hasten DISA’s move away from other authentication solutions, like Purebred, a prototypical platform that relies on DoD mobile devices to provide one-time passwords. Last year, the agency said it was piloting AI-driven mobile and desktop systems that, like TwoSense, can identify users by behavioral features such as gait, and prevent and respond to the misuse of credentials.
“Keeping our overall objective in mind, we know that CAC as a form factor doesn’t perform well in the mobile environment,” Jeremy Corey, chief of the agency’s Cyber Innovation Division, said during a presentation at the 2018 Armed Forces Communications and Electronics Association’s Defensive Cyber Operations Symposium in Baltimore last May. “We want to ensure that we retain the equivalent assurances of the secure elements that are on the card as we begin to potentially use mobile devices for authentication and access. We aim to achieve sufficient authentication assurance to facilitate a single platform for use in day-to-day operations, and potentially provide a capability to utilize one device for multiple networks.”
" |
16,998 | 2,019 | "A massive biometric breach is only a matter of time | VentureBeat" | "https://venturebeat.com/2019/03/16/a-massive-biometric-breach-is-only-a-matter-of-time" | "Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Opinion A massive biometric breach is only a matter of time Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
I signed up for CLEAR about 2 years ago while catching a flight to DC from Seattle.
CLEAR is like “TSA PRE for VIPs.” Basically, you pay $100 annually and, at many — but not all — airports, you can jump the TSA PRE line. You also don’t have to show ID, as your biometrics are scanned into the system, proving identity.
I had noticed, in the years before that, that more and more people were gaining TSA PRE. Same for Global Entry. I alone probably got 50 people to sign up for the former and maybe 20 for the latter. They are huge time savers. I’d say that CLEAR saves me 10-20 minutes on average on every other flight I take. Given how much I used to fly, it was worth it when I signed up.
Whenever I use the CLEAR line, it takes, on average, 1-2 minutes to get to the x-ray screening area.
Until Monday, March 11th.
When I got to Dulles at 7:10am for my 7:35am boarding/8am departure flight, I was confronted with a CLEAR line that was at least 45 people long.
That has NEVER happened to me.
I asked about it and, apparently, the Monday morning rush is like this. I just don’t travel on Monday mornings that often.
Still, other passengers made the same remark that volume was unusually high.
I immediately jumped to the conclusion that CLEAR was experiencing massive growth and that, eventually, we’d see an even higher priced, more exclusive service. Effectively, we will be privatizing security at airports based on price points.
Then, I caught myself and realized that it may, in fact, be just one data point. A Monday morning rush hour for business people makes a ton of sense. So, maybe this was an anomaly.
Either way, as I stood in line, I had a terrifying thought.
The reason CLEAR is able to process people so much faster is that they, like Global Entry, use biometrics (retina and fingerprint scans) to identify people.
Now, when I signed up for the service, I acknowledged that I was signing away some privacy in favor of convenience. A trade-off we are increasingly making as a society.
I did it anyway.
But when I saw the number of people who had made the same decision, I realized that the databases for both CLEAR and Global Entry were becoming a biometric Equifax.
Equifax, as you know, was breached, and the personal and financial data of over 100 million Americans was stolen.
What the impact of that theft will be over the long term remains to be seen.
As these two biometric databases become more popular, they will become a bigger target for hackers. There’s no question there’s a black market for individual biometric data.
Any security expert will tell you a hack is not a question of “if,” just a question of “when.” Ultimately, if it hasn’t happened already, both of these databases will be compromised. And, as with Equifax, the consequences will be unknown and far-reaching.
For something as critical as individual biometric identity, a centralized system, vulnerable to compromise, should be a non-starter. Ultimately, we will see decentralized blockchain systems such as Everest take hold because of the way they uniquely address the 2 biggest components of Identity: Authentication and Authorization. (Everest doesn’t have an airport offering like CLEAR, but its technology — and the tech of similar blockchain-based startups — addresses similar use cases. Other players in the space include Civic and uPort.)
Everyone understands “Authentication”: “Are you really who you say you are?” However, it is “Authorization” that is actually more important. In this scenario, the owner of the Identity is the owner of the data, and they have full control over with whom they share it as well as what they share.
With this type of control, individuals are free to start building a history of transactions and ultimately a true “credit score.” In Everest, for example, the transactions are performed via EverWallet and immutably written to the EverChain ledger, which is the source of the “credit score.” Everest is already working with several government ministries across multiple Asian countries.
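To make the authentication-versus-authorization distinction concrete, here is a generic sketch of owner-controlled disclosure with a hash-only audit trail. It illustrates the general idea rather than Everest's EverWallet or EverChain implementation; the wallet fields, the in-memory "ledger," and the flow are invented for the example.
```python
# Generic sketch of owner-controlled authorization (selective disclosure).
# The "ledger" is just an append-only list standing in for a blockchain;
# nothing here reflects any specific vendor's protocol.
import hashlib
import json

wallet = {  # raw identity data stays with the owner
    "name": "A. Traveler",
    "date_of_birth": "1984-02-29",
    "fingerprint_template": "never shared directly in this sketch",
    "frequent_flyer_id": "FF-1234",
}
ledger: list[str] = []  # immutable-by-convention audit trail


def authorize(granted_fields: list[str]) -> dict:
    """Owner releases only the fields they chose; the disclosure is logged by hash."""
    disclosure = {field: wallet[field] for field in granted_fields}
    receipt = hashlib.sha256(json.dumps(disclosure, sort_keys=True).encode()).hexdigest()
    ledger.append(receipt)  # the ledger sees a fingerprint of the exchange, not the data
    return disclosure


# The airport checkpoint is allowed to learn the traveler's name and frequent
# flyer number, but not their biometric template or birth date.
print(authorize(["name", "frequent_flyer_id"]))
print("audit entries:", ledger)
```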
As governments and citizens become increasingly aware of privacy issues, there will be increased willingness to consider alternatives. Simultaneously, blockchain technology will continue to grow in terms of security, robustness, adaptability to new threats, and scalability. When it does, it will enable solutions that are orders of magnitude superior and will more effectively balance the need for biometric data as unique identifiers with the need for maximum security of that biometric data to protect individuals.
Eventually, global personal identity systems will live only on blockchain-based systems.
It’s pretty clear to me, at least.
Jeremy Epstein is CEO of Never Stop Marketing and author of The CMO Primer for the Blockchain World.
He currently works with startups in the blockchain and decentralization space, including OpenBazaar, Zcash, ARK, Gladius, Peer Mountain and DAOstack.
" |
16,999 | 2,018 | "Japan bans Huawei and ZTE 5G networking hardware; will Canada be next? | VentureBeat" | "https://venturebeat.com/2018/12/10/japan-bans-huawei-and-zte-5g-networking-hardware-will-canada-be-next" | "Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Japan bans Huawei and ZTE 5G networking hardware; will Canada be next? Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Learn more.
If 5G equipment bans from the United States, Australia, and New Zealand weren’t enough this year, Huawei will end 2018 on an even worse note: Kyodo News reports that Japan’s government has decided to block the Chinese company and its smaller rival ZTE from network hardware procurement. Not coincidentally, Canadian authorities are publicly discussing a similar ban following last week’s arrest of Huawei’s CFO.
Japan’s decision comes less than a month after the United States reportedly lobbied several overseas allies to block Chinese cellular hardware from their wireless networks, in part due to concerns over monitoring of U.S. military base communications. According to reports, the U.S. floated the prospect of financial subsidies for compliant countries, alongside the threat of reduced assistance to non-compliant ones.
Kyodo reports that the Japanese government complied, and is coordinating with top cellular providers to remove Huawei and ZTE hardware from their networks. Three carriers have agreed to stop using Chinese 4G equipment and not introduce new 5G hardware into their networks. A soon-to-be-launched fourth carrier has also said it will not use Chinese networking gear.
“It’s extremely crucial not to procure equipment that embeds malicious functions including information theft and destruction,” said Yoshihide Suga, Japan’s Chief Cabinet Secretary, noting that the country is now studying what to do with already purchased Chinese hardware. Top carrier Softbank has indicated that it will replace Chinese 4G cellular products with U.S. and European alternatives, while rivals NTT Docomo, KDDI, and Rakuten will avoid using Huawei and ZTE networking hardware in their 5G infrastructures.
Notably, none of the Japanese carriers will stop selling consumer devices such as phones and tablets from Huawei or ZTE, as they are not believed to impact core network security. That’s unlikely to change in the immediate future, giving users the ability to keep purchasing comparatively inexpensive Chinese products — albeit with potential security risks.
Huawei has strongly denied accusations that its products constitute any form of security risk, and continues to offer its 5G networking hardware to carriers in South America, Africa, and Asia. ZTE was nearly forced to stop doing business entirely after a brief but hastily modified ban by the U.S. government, and actively turned its attention to pitching Japanese cellular companies , apparently without success.
China’s government has responded forcefully to each of the international bans, most recently defending Huawei and ZTE in a statement (via Google Translate) ahead of the Japanese government’s decision. But the protestations have generally fallen on deaf ears, and U.S. officials have continued to lobby friendly intelligence agencies across the world.
Northern neighbor Canada could be the next major U.S. ally to block Huawei from its communications networks.
U.S. lawmakers lobbied Canadian Prime Minister Justin Trudeau for a ban on Huawei 5G gear in October, but the government was largely quiet until shortly after Huawei CFO Meng Wanzhou — daughter of the company’s founder, Ren Zhengfei — was arrested in Canada last week on charges of violating U.S. sanctions against Iran.
Though Canadian authorities have described the arrest as non-political, it brought longstanding issues with Huawei to greater attention in the Canadian media. Shortly after The Globe and Mail published a scathing opinion piece on Huawei, former Prime Minister Stephen Harper called for the company to be banned , suggesting that western allies needed to hold China accountable for “rule breaking” that imperiled its trade relationships with partners. “I obviously note that the United States is encouraging western allies to essentially push Huawei out of the emerging 5G network,” Harper said, “and my personal view is that that is something western countries should be doing in terms of our own long-term security issues.” Soon thereafter, the Toronto Sun built upon Harper’s comments in an anti-Huawei editorial, saying that the company couldn’t be trusted to participate in Canada’s 5G network. And in a separate interview today, Canada’s Infrastructure Minister Francois-Philippe Champagne told the National Post that the country is relying upon input from its intelligence services in deciding whether to ban Huawei, putting national security first in the decision.
While there’s no timetable yet for Canada’s decision on Huawei, and Champagne has said that the issue is too important to be rushed, time is running out if the country hopes to deploy 5G over the next year. Huawei participated in 5G testing with Canadian carrier Telus in February, but by March, Canadian authorities began to question the wisdom of deploying Huawei 5G hardware.
The first live Canadian 5G network, dubbed ENCQOR, is scheduled to begin serving business customers in early 2019.
" |
17,000 | 2,019 | "At Qualcomm, 5G is headed everywhere and into virtually everything | VentureBeat" | "https://venturebeat.com/2019/09/13/at-qualcomm-5g-is-headed-everywhere-and-into-virtually-everything" | "Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Analysis At Qualcomm, 5G is headed everywhere and into virtually everything Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Learn more.
It was easy to call 5G inevitable. Whatever it was, some next-generation cellular technology was certain to follow 2G, 3G, and 4G into the mainstream of global life.
It was harder to predict that 5G will be omnipresent. Up until two years ago, the very idea that cellular technology would soon make its way into virtually everything — including cars, factories, and mixed reality glasses — was at best an engineers’ fantasy, months if not years away from being accepted by people as science fact rather than science fiction.
The hardest part was engineering the components and systems — tangible and intangible — that will actually make 5G ubiquitous over the next five years. It’s difficult to quantify the exact combination of vision, moxie, historical expertise, and human talent necessary to execute on such a strategy, to say nothing of doing so on time or ahead of schedule, but I can guarantee you that if there were five companies in the world capable of such a thing, one would be at the very top of the list.
That’s the reason I visited Qualcomm yesterday, spending an entire day meeting with its development teams and seeing some of their latest research projects. As the 5G era truly kicks off, I wanted to gauge the breadth of what’s coming soon for myself, so that I could be prepared to cover it all.
After seven hours of demos and discussions, I was left with two impressions. What’s about to become available is profoundly exciting: more performant than expected and far better than what you’re accustomed to, in most regards. And no one person can be prepared to cover it all in depth. The scope of Qualcomm’s 5G work alone touches so many industries and calls for such broad expertise that only a company with multiple, coordinated leaders and tens of thousands of employees could address everything.
Here’s a high-level look at what’s currently going on.
Mobile devices What a difference three years makes. In the photo above, you can see how much 5G modems have advanced through 2017, 2018, and 2019: Last year, the gigantic, heavy server rack unit on the right was miniaturized into the early Snapdragon X50 modem that fit inside a brick-like black testing unit. Then, the parts shrunk into a clear-backed 5G smartphone reference design Qualcomm showed publicly at CES early this year — complete with radio antennas shorter than matchsticks.
Just as was the case with 4G, the first round of 5G phones has primarily been targeted at early adopters — a point both Qualcomm and carriers carefully messaged from day one of the devices’ rollouts. If you were considering jumping on board with Samsung’s Galaxy S10 5G, its caveats were made clear before it was even identified by name: Early network coverage would be limited even within cities, it wouldn’t work on all of a given carrier’s towers, and there would be some teething pains. In the U.S., carriers presented it as a way for early users to be first to experience mobile 5G, warts and all, and that’s exactly what it delivered: peak speeds of over 1Gbps, otherwise falling back hard on LTE. If the gap bothers you, that’s because the difference is huge.
While some of the concerns about early devices such as the S10 5G are entirely valid, my view is that others have been sensationalized. Regardless, between software updates for existing hardware and new devices with new chips, we’re about to move into 5G round two, where high-speed performance is going to become more common.
More towers, new tower software, dynamic 4G/5G spectrum sharing of sub-6GHz frequencies, and more capable phones are all coming over the next few months. That means 5G devices are going to spend more time delivering crazy fast download speeds. At least two of the top three U.S. carriers have committed to nationwide 5G in 2020 , with the third expecting at least 50% population coverage, so the question isn’t whether you’ll have 5G in your city — it’s how soon.
The two key areas to watch for mobile 5G are advancements in indoor and outdoor performance. Twelve stories up, overlooking San Diego from the rooftop of Qualcomm’s headquarters, the company is currently testing a next-generation 5G radio system called Project Pentari, capable of using beamforming to deliver outrageously fast speeds using 3.5GHz spectrum. Rather than spreading radio signals in a flashlight-style wide arc of 120 or 130 degrees, it directs 6.5-degree beams to specific receiving devices at roughly 1-mile distances, enabling multiple individual users to hit speeds ranging from 750Mbps to 3Gbps depending on the system’s configuration.
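Some back-of-the-envelope geometry shows why narrow beams matter: a 6.5-degree beam aimed at a device roughly a mile away illuminates a strip only a couple hundred meters wide, and close to twenty such beams fit side by side in the arc that a conventional 120-degree sector floods all at once. The sketch below uses idealized geometry with no side lobes, reflections, or scheduling, so the numbers are illustrative rather than Qualcomm's measurements.
```python
# Rough geometry of sector-wide vs. beamformed coverage (idealized, no RF effects).
import math

SECTOR_DEG = 120.0      # conventional "flashlight" sector width
BEAM_DEG = 6.5          # Project Pentari-style beam width cited above
DISTANCE_M = 1609.0     # roughly one mile

# Lateral width each pattern illuminates at that distance (simple tangent spread).
sector_width = 2 * DISTANCE_M * math.tan(math.radians(SECTOR_DEG / 2))
beam_width = 2 * DISTANCE_M * math.tan(math.radians(BEAM_DEG / 2))
beams_per_sector = SECTOR_DEG / BEAM_DEG

print(f"120-degree sector footprint at 1 mile: ~{sector_width:,.0f} m wide")
print(f"6.5-degree beam footprint at 1 mile:   ~{beam_width:,.0f} m wide")
print(f"Beams that fit in one sector:          ~{beams_per_sector:.0f}")
```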
Qualcomm also continues to be bullish on even higher-performance millimeter wave 5G, which it’s now advancing with longer-range antenna capabilities — a topic I’ll address in the next two sections.
Indoor mobile 5G Indoor venues hosting masses of people — transportation hubs, public venues, and office buildings — are the other key areas where Qualcomm is conspicuously working to improve mobile 5G performance. The goal is to effectively blanket your favorite subway stations, airports, stadiums , and other gathering spaces with strong 5G coverage, and to that end, the company has found that simply placing millimeter wave cells in the same indoor locations as Wi-Fi base stations yields solid coverage.
As the image above shows, that strategy enables comprehensive 5G service across a 42,750-square-meter convention center hall in Barcelona with zero dead zones, while capitalizing on the Fira Gran Via’s existing wired backhaul infrastructure. Similar deployments in an underground Beijing subway station and an indoor office environment are yielding nearly 98% coverage while providing total network download throughput in the 15Tbps range — more than enough for dozens of employees or hundreds of people to share, each enjoying super-fast performance.
Home and office broadband Several years ago, the idea of relying on a cellular connection for home broadband was all but laughable — between the likelihood of low LTE speed and the vagaries of connections in some areas, you’d have a better chance of losing at Russian roulette than winning with a 4G connection for home or office internet access. Your odds are much, much better with 5G.
Speeds are dramatically higher, rivaling or exceeding what people get from typical wired broadband packages, and the modem hardware is getting a lot more powerful.
The reference design above illustrates how Qualcomm will be advancing home and office cellular broadband over the next year. That’s a 5G outdoor unit, designed to be mounted outside your home as the replacement for what cable companies call “last mile” wired connectivity — the painful process of actually showing up at each house and physically grafting a new cable onto the existing wired network. Instead, this wireless box just gets installed outside akin to a (small) satellite dish, and you connect your Wi-Fi, Ethernet, or other access point so all your existing devices can share the 1Gbps+ 5G connection.
Qualcomm’s biggest news on this front is that it has just dramatically boosted the range of the millimeter wave 5G antennas in these devices, enabling connectivity somewhere between a kilometer and a mile at normal elevations, depending on common obstructions; even longer distances are possible at greater, unobstructed heights.
While much of millimeter wave’s early focus has been on population-dense urban environments and gathering places, the company strongly believes that the hardware can serve rural customers , as well. People on rock-laden or otherwise challenging land could benefit from high-bandwidth point-to-point wireless connections — assuming that carriers (including smaller tier 2 names) work to bring that quality of connectivity to their customers. Qualcomm has the technology ready for companies that are ready to step up and adopt it.
VR/AR/XR The best thing I can say about the current state of Qualcomm’s mixed reality hardware is something that may sound less impressive than it actually is, but if you hear it in the voice of Apple’s late founder Steve Jobs, you’ll get my meaning: It just works.
We’ve been teased for years by the “coming soon” prospect of wireless mixed reality that is indistinguishable from wired mixed reality, and in turn, barely distinguishable from actual reality. No one knew how and when all the technology pieces would come together, but within the XR industry, there wasn’t any doubt that it would happen — incrementally but eventually.
For VR at least, the time is now. Qualcomm demonstrated a live, completely wireless XR reference headset that was streaming low-latency, high-resolution video directly over Wi-Fi from a server elsewhere in the building. You can’t look inside the lightweight goggles as I did, but the split-screen display above shows you basically what I was seeing: PC-caliber stereoscopic 3D graphics that were noticeably better than what’s possible with the just-released Oculus Quest, while similarly freed from any cables for tracking or power.
There’s significant technology behind this: There was no obvious artifacting or degradation in the video, no perceptible lag, and a tight integration between the Snapdragon-based client headset’s positional data and the server’s transmission of data. While 60GHz Wi-Fi appears to be the current wireless technology of choice for XR, the next move is to 5G.
Qualcomm’s Hiren Bhinde told me that the company’s support for OEMs interested in its reference headset is so comprehensive — from software stack to component sourcing and manufacturing partners — that a product can go from reference design to market in as little as four months, assuming the OEM hits the ground running. It’s up to the OEM to provide the content, experience, and third-party developer support for the device, but if your favorite company wanted to offer an XR headset, it could go from zero to something in less than a year.
The company has been steadily iterating on the core elements of the extended reality experience since it decided to pursue the business, and realized over time that AI — too often an industry buzzword — will actually have significant roles in every part of the XR experience. Computer vision and machine learning empower a device’s spatial awareness for scene mapping and user tracking, while AI also plays roles in audio processing for voice input and output, among other features.
Going forward, Qualcomm is expecting that AI will also power visible Alexa-like virtual assistants within apps and agents in VR games, as well as enabling the sort of real-time AR language translation and keynote presentation-caliber avatar capabilities Microsoft recently showed off in its HoloLens 2 demo. I’m personally quite excited for the next stage of Snapdragon-powered AR glasses , but that’s a topic for another time.
Automotive and C-V2X Without diving too deep into this topic, I’ll note that what Qualcomm’s working on for connected cars is equally fascinating and exciting — though as the company’s Carl Ormond told me, it’s largely a question of when the technologies I saw will make it into cars, not whether they’re coming. The automotive industry currently operates on a five-year lag, such that a solution developed today won’t start showing up in production vehicles until 2024. So even though futuristic instrument panels, vehicle-to-vehicle communication, and next-level autonomous driving were already underway a year or two ago, several more years may pass before we see them in cars.
What’s coming? Imagine unlocking and starting your car with a 3D face scan and fingerprint rather than a key. Picture 5G smartphone-style apps for communication and tablet-style maps directly in your field of view — perhaps even embedded in your steering wheel — rather than on a center console screen or in your pocket. Then understand that everything from mobile communications to your interactions with the road and other vehicles will be tied together wirelessly, using a mix of vehicle-to-vehicle, vehicle-to-infrastructure, vehicle-to-network, and vehicle-to-pedestrian protocols.
One of Qualcomm’s demos showed a car using human body recognition and machine learning to predict when people would cross a crosswalk, conveying that information both to the driver and nearby connected cars. Armed with predictive and actual information, the closest car could automatically stop before hitting a person, while the cars behind it could know to slow down or stop rather than causing a chain collision. Another demo showed a car sharing turn signal intent and pedestrian crossing information with a crosswalk, enabling the driver and nearby traffic signal to know when it was safe for people or vehicles to move.
I don’t believe that car accidents will become a thing of the past any time soon. But as advance warning technologies make their way into cars, and cars become capable of sharing those warnings automatically with other cars, roads are going to be a lot safer for drivers, passengers, and pedestrians than they’ve been in decades.
Industrial IoT Another topic I’ll only briefly touch upon is one that is overwhelmingly large: the use of 5G in industrial settings. Last year, Qualcomm said that it was working on 5G solutions for factory automation that would enable robots to operate wirelessly rather than requiring Ethernet cables. It sounds easy until you realize that the robots operate at superhuman speeds as parts of assembly lines, such that milliseconds and even individual data packet-level mistakes actually matter, so manufacturing companies aren’t willing to accept “good enough.” They demand 99.9999% reliability, ideally such that if there are packet loss errors, they never come two in a row. That level of precision means that a factory won’t need to shut down its production lines, a failure that manufacturers won’t tolerate with any frequency. To accomplish that, Qualcomm is using multiple 5G radios within industrial settings to guarantee that robots will receive orders at all times, regardless of obstructions that might be moving around inside the work areas. These radios can operate not only for industrial automation, but also to process data for security cameras, AR headsets, edge computers, handheld terminals, guided vehicles such as forklifts, and sensors operating within the space.
The chart above shows how an ultra-reliable 5G system will work. A two-base station system can achieve over 99.95% reliability, such that 37,000 data packets out of 86.4 million will be lost in a 24 hour period, while a more redundant four-base station system will be over 99.9999% reliable, losing only 50 of the 86.4 million packets in the same time. Critically, none of those 50 packets will be consecutive, so they’re not show-stoppers that require the factory to pause production.
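Those loss figures follow from simple arithmetic: at 1,000 packets per second, a day holds 86.4 million packets, and each additional "nine" of reliability cuts the expected losses by a factor of ten. The sketch below reproduces that math under the simplifying assumption that losses are independent, which real radio links are not; Qualcomm's own figures come from its system modeling.
```python
# Expected daily packet losses at different link reliabilities,
# assuming a constant 1,000 packets/second and independent losses.
PACKETS_PER_SECOND = 1_000
SECONDS_PER_DAY = 24 * 60 * 60
packets_per_day = PACKETS_PER_SECOND * SECONDS_PER_DAY  # 86,400,000

for reliability in (0.999, 0.9995, 0.99999, 0.999999):
    lost = packets_per_day * (1 - reliability)
    print(f"{reliability:.4%} reliable -> ~{lost:,.0f} packets lost per day")
```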
Industrial IoT isn’t sexy, and its importance can be hard to explain. But when a company that once took six months to reconfigure its production lines due to cabling and related physical considerations can now switch everything up in six days with portable wireless devices, that means new things you want can get made faster. And better.
Licensing No two images I could share with you illustrate the point I’m about to make better than the ones immediately above and below these words. While Qualcomm is largely known for chips that power increasingly important products, its chip development is backed by a combination of engineering R&D and patent licensing — both for products that made it to market and succeeded, as well as ones that you’ve never heard of but paved the way for future innovations.
For average users like us, 5G is a done deal — a basically solved challenge inside a glass and metal box that can be enjoyed without knowing any of the work or parts that make it possible. But Qualcomm exists specifically for the purpose of thinking about and solving technical challenges years before most people even realize that there were problems worth addressing.
As I looked at testing rigs and demo areas, I couldn’t help but think about the human labor and ingenuity involved in developing them. Consider just the big blue box above: Putting aside the magic that’s happening inside, look at the back, and you’ll see that someone needed to physically attach each of the cables to support its massive array of antennas. That’s just one box at one site, and who can count the number of such boxes, tests, failures, and small successes even a single technology needs before it’s ready to fit in your pocket? “That’s the thing about research,” Qualcomm senior VP of engineering Durga Malladi told me. “When a certain technology, even though it stands on its own merit … still doesn’t take off, the reason for that is very likely that it’s ahead of its time. And it takes a long time.” One example: Peer-to-peer device communication was created and offered to cellular carriers over a decade ago, but no one adopted it. After an extended period of reflection, Malladi said, “we were thinking about it, and we said, maybe it’s not yet ready for devices. But what if instead of devices, we call these devices ‘vehicles?’ That was the origin of V2X, vehicles communicating with each other” and more.
Flash forward to today, and “there’s lots of activity in that space, large scale trials” with Ford and so much of the automotive industry on board, Malladi said. Years after the concept seemed destined for obscurity, cars are being tested with the feature, and even the original vision is apparently generating interest.
“So you’ve just got to keep the faith. If there’s one thing that you learn in research, it’s patience.” The labor for such endeavors is measured in years and tens of thousands of people, while costs are measured in the billions of dollars. Qualcomm isn’t the only company (see: Huawei and Nokia ) that does this R&D or works to develop cellular standards, but its contributions are qualitatively significant, and its investments of personnel, money, and time have been massive. Consequently, it collects licensing fees from companies that want to use its innovations in their products, generally as a fixed percentage of the product’s sales, but capped such that it doesn’t make twice as much royalty on an $800 phone as it does on a $400 phone — instead, the royalty’s the same in each case.
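A quick worked example shows how a cap flattens royalties across device prices. The 3.25 percent rate and $400 cap below are hypothetical numbers chosen only to illustrate the mechanism described here; they are not a statement of Qualcomm's actual terms.
```python
# Illustrative capped-royalty math (hypothetical rate and cap, not actual terms).
RATE = 0.0325        # assumed royalty percentage
PRICE_CAP = 400.0    # assumed per-device price cap for royalty purposes


def royalty(selling_price: float) -> float:
    return RATE * min(selling_price, PRICE_CAP)


for price in (200, 400, 800):
    print(f"${price} phone -> ${royalty(price):.2f} royalty")
```
Under these assumed numbers, the $400 and $800 phones pay identical royalties, which is the flattening effect described above.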
Licensing has been a somewhat controversial part of the Qualcomm story because a few licensees — most notably Apple — have publicly balked at letting licensing fees cut too much into their own profit margins, even though that’s just a cost of using innovations someone else spent the labor, money, and time to develop. Yet my sense is that recouping R&D costs through licensing has enabled Qualcomm to look forward and enter new markets without the fear that its only compensation will be for chips that can become commoditized.
Based on my visit yesterday, it’s clear that 5G isn’t just going to touch a lot of markets over the next few years; it is indeed going to transform them. And while it’s certainly worth watching what other wireless and system-building companies are going to do in the 5G space, my belief is that Qualcomm is the one best positioned to lead it for the foreseeable future.
" |
17,001 | 2,019 | "NTT Docomo moves up Japan-wide 5G to June 2020, offers early access | VentureBeat" | "https://venturebeat.com/2019/09/19/ntt-docomo-moves-up-japan-wide-5g-to-june-2020-offers-early-access" | "Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages NTT Docomo moves up Japan-wide 5G to June 2020, offers early access Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Learn more.
Shortly after smaller rival Softbank said it is accelerating its Japanese 5G network deployment by two years, targeting 60% population coverage by 2023, top carrier NTT Docomo confirmed that it’s also pushing up its 5G rollout plans. It now plans to have 5G base stations in all 47 Japanese prefectures by the end of June 2020, hitting 10,000 total base stations by spring 2021, and will begin offering limited public 5G tests in several cities on September 20.
“We are marking Friday as the day we begin our full-fledged 5G services,” said NTT Docomo president Kazuhiro Yoshizawa (via Japan Times ), though calling its initial offering “full-fledged” is more than a stretch. Commercial 5G service for paying customers is still expected to commence in spring 2020, just ahead of July’s Summer Olympic Games in Tokyo.
Starting tomorrow, the carrier will make three types of 5G test devices available at four NTT Docomo stores in Tokyo, Nagoya, and Osaka, as well as Tokyo Stadium, enabling some visitors to experience the Rugby World Cup match between Japan and Russia from multiple viewing angles. Tests later this year are expected to cover live music events and remote golf lessons, the latter using artificial intelligence to analyze 5G-captured footage of users’ swings.
NTT Docomo’s early public testing coincides with the launch of the iPhone 11 family, which is expected to be Apple’s last flagship phone without 5G. iPhones have proved to be extremely popular in Japan, singlehandedly buoying rivals SoftBank and KDDI before NTT Docomo obtained rights to sell the phone. The top carrier’s move to guarantee some 5G coverage throughout every state-like prefecture next year suggests it will be ready to offer next year’s models nationwide — assuming they launch as expected in September 2020.
" |
17,002 | 2,019 | "Qualcomm shares a practical, compelling vision for 5G's first 5 years | VentureBeat" | "https://venturebeat.com/2019/09/25/qualcomm-shares-a-practical-compelling-vision-for-5gs-first-5-years" | "Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Analysis Qualcomm shares a practical, compelling vision for 5G’s first 5 years Share on Facebook Share on X Share on LinkedIn Qualcomm's John Smee spotlights many of the new technologies that will come to 5G over the next five years.
Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Learn more.
Qualcomm executives and engineering teams offered countless details at a day-long “Future of 5G” event yesterday, but there was one overarching theme: As the first year of 5G comes to an end, the second year will be all about expansion, and exciting plans are already in place for the following few years. In other words, buckle up.
Without miring you in the heaviest technical details, here are 10 of the key trends you can expect over the first five years of commercial 5G.
1. Continued improvements in smartphone speeds and reliability In the short term, Qualcomm, carriers, and device makers will be working together to improve the reliability of 5G connectivity, ironing out software kinks that have made some devices fall back to 4G networks rather than smoothly moving from 5G tower to 5G tower. The company is also aggressively working to dispel the myth that millimeter wave (mmWave) 5G only works in a line of sight from a small cell tower, showing live demos of a Samsung Galaxy S10 5G receiving data at 1.1Gbps despite being in a separate room located physically behind a mmWave antenna.
Going forward, 5G performance is only going to become more impressive. Current-generation Qualcomm chips are already promising up to 7.5Gbps speeds with optimally configured networks, but in subsequent updates to the 5G standard, the company expects industry support for denser data encoding, even larger numbers of antennas, and multi-SIM connectivity.
As mentioned in our prior coverage of the company’s Project Pentari , Qualcomm is also developing sub-6GHz cellular tower hardware with grids of beam-forming antennas. The antennas can precisely locate users and target signals at them, enabling each user to achieve super-high data rates rather than enjoying a small fragment of the tower hardware’s total throughput.
2. 8K videos, video conferencing, and low-latency games Another “sooner rather than later” development will be the use of 5G to deliver high-bandwidth video services — both for broadcasts and bi-directional communications. Thanks to the high bandwidth offered by 5G, some carriers will likely begin to transmit 8K videos at standard (30fps) frame rates over the next year, with ultra-wideband mmWave 5G thereafter enabling up to 120fps 8K videos with 10-bit color in the future.
Advanced cloud game and app streaming will also be possible in the very near future, beginning with reliably sub-16-millisecond lag for 4K streaming at 60 frames per second — enough to deliver an excellent gaming experience. The consistently smooth, high-definition video will enable developers to stream content to users without risking the security of the underlying code or assets, while delivering user experiences that will feel like natively installed apps.
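The 16-millisecond figure is not arbitrary: at 60 frames per second each frame occupies roughly 16.7 milliseconds, so a round trip that stays under that budget can deliver a freshly rendered frame every refresh. A quick check of that arithmetic, under the simplifying assumption that the entire budget is spent on the network:
```python
# Frame-time budget arithmetic for 60 fps cloud game streaming (simplified:
# encode, decode, and display time are folded into the single budget).
FPS = 60
frame_budget_ms = 1000 / FPS          # ~16.7 ms between displayed frames
claimed_lag_ms = 16                   # the sub-16 ms figure cited above

print(f"Frame interval at {FPS} fps: {frame_budget_ms:.1f} ms")
print(f"Sub-{claimed_lag_ms} ms lag fits inside one frame interval:",
      claimed_lag_ms < frame_budget_ms)
```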
3. 5G-powered wearables and sensors Don’t hold your breath for 5G wearables in the immediate future, but they’re coming, Qualcomm suggests. A future release of the 5G standard, release 17, is likely to include support for 5G NR-Light, which will take a third step forward in reducing 5G’s power consumption so that the chips can work in tiny wearables and long-running sensors.
Apart from standard chip manufacturing advances that are expected over the next two years, cutting power requirements for ever-smaller chips, the 5G standard will likely add a technique to wake a sleeping modem to receive data — a way to keep the modem from idling and consuming power. 5G NR-Light could also include other optimizations for the comparatively small data needs of IoT sensors and wrist-worn wearables, neither of which may need to stream massive quantities of data … or will they? 4. 5G-aided or -powered AR/XR headsets “Augmented reality is the next mobile platform,” Qualcomm said at the event, and it’s going to touch many industries: emergency response, the military, education, health care, manufacturing, and engineering, plus consumer retailing, marketing, and advertising. Partners are already working on 5G-optional AR and XR headsets using Qualcomm hardware: Nreal’s wide-FOV, bright-screened Light AR glasses rely on a Qualcomm smartphone for processing and data, while Microsoft’s latest HoloLens uses a Qualcomm processor and can be fed data with a 5G-to-Wi-Fi hotspot.
As a big fan of XR technologies, I can’t say that I was blown away by the specific “5G” demos that were being shown at the event, as Nreal’s affordable, cool glasses were only serving as a mirrored 2D display for a racing game on a LG V50 ThinQ 5G’s screen, and the HoloLens demo had an employee walking inside a not-quite-believable 3D boat. Even so, the “very nearly there” form factor of the Nreal glasses makes me optimistic that the future is bright for AR — and that actually compelling real-world applications are just around the corner.
5. Improved 5G location services, down to centimeter accuracy The image above doesn’t really reflect the consumer impact of what will be happening with 5G over the next five years, but the upshot is that the next 5G standard update (release 16) is pushing forward the use of radio signals to accurately determine users’ locations — not for surveillance, but to deliver superior cellular signal quality. Over the next year, the push is to determine a device’s location within 3 meters indoors or 10 meters outdoors 80% of the time, while future 5G releases 17 and beyond will aim for even higher accuracy: a centimeter.
For carriers, this will enable some of the higher-bandwidth signaling strategies mentioned above — being able to direct a 5G radio signal right at a receiving device means superior performance. Beyond better speeds and lower latency, users could also benefit from emergency services that can more quickly get to their locations, and more accurate navigation using on-device maps.
6. Car-to-car 5G communication I previously detailed some of the C-V2X (cellular vehicle-to-everything) advancements Qualcomm is working on; this event focused on both what can generally be done with C-V2X and what is coming in the future. The big message continues to be that a car can do far more to anticipate environmental and human hazards when sharing data with other cars than it can gather on its own: One car’s sensors may be able to see a handful of other vehicles and pedestrians, but C-V2X cars will be able to know about multiple vehicle and human positions at any time.
Going forward, 5G C-V2X is expected to help connected cars make better navigation decisions, saving time, fuel, and possibly lives in the process — each equipped car will serve as a mobile radio station that broadcasts information to other vehicles in the immediate area. While the groundwork for C-V2X was laid in late-release 4G standards, it will really come of age and see commercialization in the 5G era, as additional bandwidth, higher reliability, and lower latency will enable cars to share more data faster.
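To give a feel for what one car's broadcast can do for the car behind it, here is a toy sketch of a vehicle-to-vehicle safety message and a follower's reaction to it. The message fields and thresholds are invented for illustration; real C-V2X deployments use standardized message sets and far richer decision logic.
```python
# Toy V2V exchange: a lead vehicle broadcasts its state, a follower reacts.
# Fields and thresholds are illustrative, not a standards-compliant message set.
from dataclasses import dataclass


@dataclass
class SafetyMessage:
    vehicle_id: str
    position_m: float       # distance along the road, kept 1-D for simplicity
    speed_mps: float
    hard_braking: bool


def react(follower_position_m: float, follower_speed_mps: float,
          msg: SafetyMessage) -> str:
    gap = msg.position_m - follower_position_m
    closing_speed = follower_speed_mps - msg.speed_mps
    if msg.hard_braking and gap < 50:
        return "brake now"                      # lead car is stopping just ahead
    if closing_speed > 0 and gap / closing_speed < 3.0:
        return "slow down"                      # under 3 seconds to close the gap
    return "maintain speed"


lead = SafetyMessage("car-A", position_m=120.0, speed_mps=2.0, hard_braking=True)
print(react(follower_position_m=90.0, follower_speed_mps=25.0, msg=lead))  # brake now
```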
7. More 5G spectrum (sub-6 and 52.6-114.25GHz) Without getting deep in the weeds on this topic — and it’s an important one — there are a lot of different initiatives moving forward to expand the domestic and international availability of radio spectrum for cellular purposes. The list of acronyms and concepts would melt your brain, but it suffices to say that there are projects in various states of progress ranging from “just launched” to “launching soon” to “still under negotiation,” with the status of each varying on a country-to-country basis.
In a nutshell, these initiatives are going to let 5G devices access more and wider swaths of radio spectrum, akin to letting your car’s FM radio tuner expand above 107.9 to 599.9, while enabling each station to deliver a stronger, better-quality signal. Over the next two years, you can expect more use of well-established sub-6GHz frequencies, while future 5G releases will begin to take advantage of even higher-frequency millimeter wave spectrum in the 52.6GHz to 114.25GHz range.
8. 5G-backed private/enterprise networks Most people don’t care about enterprise and industrial applications of 5G, but they’re going to be very big deals behind the scenes for companies everyone relies upon. Over the next year, Qualcomm expects 5G to start popping up in PCs, enabling immediate access to shared cloud storage and computing resources, as well as tetherless mobile office functionality: From out in the field, you will be able to access a company’s full slate of resources or bring yourself into the office using virtual telepresence.
One of the elements of this will be private enterprise networks using 5G technology, either run by the company itself or a third party, based on the same or different spectrum owned by cellular carriers. Going forward, these private 5G networks will be paired with Wi-Fi to deliver multi-gigabit speeds, supplying enterprises with everything from traditional employee broadband to AR and surveillance streaming, factory automation, inventory management, and shared computing assets.
9. Ultra-reliable, low-latency 5G factories Another topic I covered in greater detail previously is the transition from wired control of factory assets to equally reliable 5G wireless alternatives. In the past, Qualcomm has offered demo videos of how this will work, but this time it had a sample factory line that was doing real-time AI-assisted identification of parts on a conveyor belt, demonstrating how quickly and reliably such a system can operate.
The upshot here is that industrial components such as monitoring cameras and robotic arms can be wirelessly coordinated using 5G small cells connected to a 5G core network and vision processing controller, all moving so rapidly that production proceeds apace despite a remote computer analyzing literally every part that passes by the camera. Without the optimized 5G system in place, wireless communications would lead to missing numerous parts.
Qualcomm also reiterated that multiple small cells will be working together to guarantee the 99.9999% level reliability demanded in factory settings — the key way to prevent temporary physical obstructions from interfering with wireless signals. Synchronized sharing of radio spectrum will be vital to ultra-reliable factory communications, the company says, using various segments of sub-7GHz spectrum that may be differently allocated from country to country.
10. Fully integrated, end-to-end 5G solutions We’re probably going to see more of a push on this point from Qualcomm in the future, and it’s a big deal: tighter integration of all of the parts, from the antenna to the 5G modem. The idea is that customers will see obvious benefits from buying the end-to-end Qualcomm 5G solution rather than building a system on their own, including superior 5G coverage, battery life, and international roaming support.
This makes a great deal of sense for Android OEMs that want to sign an agreement, get a complete 5G module that can pop right into a device, and focus on the look and feel of the device’s exterior. It could create further challenges for companies such as Apple that historically play part vendors off of each other to reduce commodity component costs. In essence, Qualcomm is hoping to de-commoditize its parts.
Even beyond Qualcomm — think Samsung, MediaTek, and Huawei — the clear trend for major companies is bringing as many chips together as possible, achieving energy, memory, and other performance efficiencies in the process. Very few companies will be able to do the full 5G antenna-to-modem integration, then the full 5G-to-system-on-chip integration, but Qualcomm is positioning itself well for that future.
Eventually, real 6G planning “Here we have a slide with the number 6G on it,” said Qualcomm vice president of engineering John Smee, “and we debated even internally, should we have a slide with 6G on it? Well, the point is, what we want you to understand is that there’s a whole decade of evolution in 5G … There’s a huge roadmap ahead to that continued evolution.” The roadmap includes the “first half of 5G” — 3GPP standard releases 15, 16, and 17, which should take us through 2023 or 2024 — followed by releases 18, 19, and 20 as the “second half of 5G” that will take place over the subsequent five years. Realistically, 6G isn’t coming anytime soon. As Smee put it: “6G is more of a buzzword right now in terms of, ‘oh, is 6G going to come next?’ Sure, obviously, that’s the next number in the G cycle. But the point is that there’s a huge amount of evolution in 5G itself. There’s tons of more gas in the tank, and at Qualcomm, we’re creating that gas every day. … What’s going to come in release 18, 19, and 20? Well, that’s what we’re already researching.”
So there’s a lot to look forward to over the next decade of 5G, and even the next two years will bring some major developments in speed, reliability, and the types of devices with 5G connectivity. It’s refreshing to see a technology company sharing such a long-term development roadmap, as well as a quantifiable track record of prior accomplishments — they collectively make the company’s ambitious vision feel a lot more tangible and achievable.
" |
17,003 | 2,017 | "3 years after Indiegogo launch, social robot Jibo is available for preorder | VentureBeat" | "https://venturebeat.com/2017/10/25/3-years-after-indiegogo-launch-social-robot-jibo-is-available-for-preorder" | "3 years after Indiegogo launch, social robot Jibo is available for preorder. Jibo, dubbed "the world's first social robot."
More than three years after the start of Jibo’s crowdfunding campaign, the social robot that raised more than $3.5 million on Indiegogo is coming to market within the next several weeks. Today, the company published a press release stating that Jibo is now available for public preorder at $899, with a ship date of November 7.
“Today’s announcement marks a major milestone for our company and the history of robotics, and we’re so excited for consumers to welcome this charming, helpful, and humble little guy into their homes,” CEO Steve Chambers stated in a press release. The announcement also states that Jibo’s software developer kit will be available in 2018 and will allow developers to “create applications that go beyond the traditional flat screen into a unique 3D environment, leveraging Jibo’s body movement, screen animations, and voice.” Jibo, dubbed “the world’s first social robot for the home,” uses speech and facial recognition to remember both your face and your voice. And many of the skills advertised on Jibo’s website — the ability to tell jokes and stories, remember your likes and dislikes, and ask you questions — reinforce the idea that Jibo is meant to be a companion, not just a robot assistant.
Founded by Cynthia Breazeal, director of the Personal Robots Group at the MIT Media Laboratory, Jibo had a projected ship date for backers of fall 2015. In a 2016 Indiegogo update to backers, Chambers listed a number of issues the team was still working to address, such as latency (the fact that Jibo would crash after being configured with some routers) and difficulty some users had expressed in communicating with Jibo, particularly when it came to understanding what skills Jibo was enabled with. A Jibo spokesperson told VentureBeat that “this program is complex, and building and hardening the platform of Jibo … simply took more time to meet a standard we’d set for consumers.” According to Indiegogo comments, Jibo started shipping units to early campaign backers in September, and the company tells VentureBeat that more than 2,500 backers have already received their units.
Correction, 6:16 a.m.: Updated with the correct year Jibo’s SDK will be available in — 2018.
" |
17,004 | 2,017 | "Amazon launches in-home delivery service called Amazon Key, powered by a camera and smart lock | VentureBeat" | "https://venturebeat.com/2017/10/25/amazon-launches-in-home-delivery-service-called-amazon-key" | "Amazon launches in-home delivery service called Amazon Key, powered by a camera and smart lock.
Amazon has announced a major new development in the way it delivers goods to customers’ houses. With Amazon Key , the ecommerce giant will allow Prime subscribers to request in-home deliveries, in addition to giving remote access to guests and visitors without anyone having to leave a key under the doormat.
The service works in conjunction with a new product from Amazon called Amazon Cloud Cam, which is essentially a $120 home security camera. But to enable full home access for third parties, such as couriers, homeowners will have to purchase the $250 Amazon Key In-Home Kit, which includes a smart lock from companies like Yale or Kwikset.
Amazon Key is launching in 37 cities across the U.S. from November 8, though the company says more markets will be arriving “over time.” “Amazon Key gives customers peace of mind knowing their orders have been safely delivered to their homes and are waiting for them when they walk through their doors,” noted Peter Larsen, vice president of delivery technology at Amazon. “Now, Prime members can select in-home delivery and conveniently see their packages being delivered right from their mobile phones.” Though you can install the lock and camera yourself, Amazon said it can do so on your behalf for free.
Unlocking new markets While this is one more perk for Amazon Prime subscribers — in addition to the free deliveries, video- and music-streaming, cloud storage, and more included in the $99 annual fee — it also points to Amazon’s growing focus on providing ultimate convenience. No longer do you need to collect parcels from neighboring properties or wait around for delivery — you can now allow the driver to leave the package directly in your home.
Shoppers can monitor everything through a dedicated Amazon Key app, view real-time notifications, and watch the delivery live or view it afterward — to ensure a courier hasn’t helped themselves to a banana on the way out the door.
Amazon said that it uses an “encrypted authentication process” to ensure that the correct driver has access to a customer’s home at the specified time. The Amazon Cloud Cam kicks into action ahead of the drop-off, and the lock opens automatically without the need for any interaction from the driver.
Homeowners can also set up access for friends and family and specify a length of time they’re allowed access to the property. In the future, Amazon Key will integrate with thousands of other third-party providers, such as house cleaning platforms like Merry Maids.
Amazon has been rumored to be launching an in-home delivery service for a while, and now that we have full details, it’s clear the company is taking on incumbents in the smart security realm.
Indeed, home security has emerged as big business with the advent of the internet of things (IoT). San Francisco-based August, which built a home-access system that uses smartphones to lock and unlock doors, was acquired last week by Swedish lock giant Assa Abloy. Then there’s the heavily funded Ring, which has slowly evolved beyond its video doorbell roots to become a fully fledged home security company.
It’s clear there is great demand for home security systems that harness smartphones and general connectivity, and this is the sphere Amazon is now entering. But the ecommerce giant is going a step further — it’s leveraging its position as one of the world’s preeminent retail and logistics companies to cross-sell other services and make deliveries as convenient as possible. And by making in-home delivery exclusive to Prime members, it’s giving you one more reason to jump inside its ecommerce vortex.
" |
17,005 | 2,018 | "5 big features Amazon’s home robot will likely have | VentureBeat" | "https://venturebeat.com/2018/04/23/5-big-features-amazons-home-robot-will-likely-have" | "5 big features Amazon’s home robot will likely have. Amazon's Lab126.
Under codename Project Vesta, Amazon’s Lab126 is reportedly making a home robot. Tests in employees’ homes could begin in the coming months, and sales of an Amazon robot may begin as early as 2019.
Anonymous sources speaking with Bloomberg today provided no details about what the domestic robot will look like or how it will function, but Amazon has made a series of big investments in AI in recent years, and if that same toolbox is being used for Project Vesta, there are certain features and machine intelligence we can expect from an Amazon robot.
Emotional intelligence In late 2017, Alexa chief scientist Rohit Prasad spoke with VentureBeat and other news outlets about the evolution of Alexa and the company’s future plans.
At that time, Prasad talked about how Amazon is experimenting with the detection of a user’s emotional state based on analysis of voice recordings collected each time you have an exchange with an Alexa-enabled device. Amazon will begin with identification of frustration so Alexa knows whether or not she succeeded in completing a task, but will later branch out into detection of other emotions.
Today, you can tell Alexa “I feel sad,” and you will get an automated response back. In the future, Alexa will sense a deviation from the baseline of what your voice typically sounds like, then use this to inform your experience with Alexa-enabled devices.
In a robot, this knowledge could come to express itself in its tone of voice or the gestures, movement, or expression used.
For example, Alexa in your car or bedroom could pick up hints of stress or agitation in your voice, and in response, your Amazon robot could have a more welcoming or comforting expression on its face when it looks at you the rest of the day. The same signal could be used to recommend certain types of music or other activity based on your actions the last time you were stressed and agitated.
This intelligence is important not just because it can inform advertisers or lead to personalized experiences, but because understanding how an exchange with humans went will help transform interactions from simple commands communicated to Alexa to interaction that feels more natural, potentially with the back-and-forth volley humans call conversation.
As Affectiva CEO Rana el Kaliouby says, robots and assistants like Alexa can tell you a joke today, but they don’t know if you found it funny. Emotional intelligence that uses voice or face analysis to verify your reaction will allow Alexa to understand if you laughed at her joke , then respond by telling you another joke.
Facial recognition Any robot Amazon brings to market will most likely have the ability to recognize users’ faces and deploy the kinds of services found in Amazon Cloud Cam.
Released last fall, Cloud Cam uses motion detection and facial recognition to send users alerts on their smartphones. When combined with two-way audio capabilities, the camera — and likely soon an Amazon robot — can send you an alert so you can yell at your dog to get off the couch or greet your kid when they return home from school. AWS also released real-time facial recognition last fall at AWS Re:invent.
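As a rough sketch of how this kind of recognize-and-alert logic could be wired up against a cloud vision service, the snippet below uses the AWS Rekognition CompareFaces API via boto3; the image file names and the notification step are hypothetical stand-ins, not anything Amazon has documented for Cloud Cam.

```python
import boto3  # AWS SDK for Python; credentials assumed to be configured

rekognition = boto3.client("rekognition")

def is_known_person(reference_jpg: str, camera_frame_jpg: str) -> bool:
    """Compare a stored reference face against a fresh camera frame."""
    with open(reference_jpg, "rb") as ref, open(camera_frame_jpg, "rb") as frame:
        response = rekognition.compare_faces(
            SourceImage={"Bytes": ref.read()},
            TargetImage={"Bytes": frame.read()},
            SimilarityThreshold=90,  # only count confident matches
        )
    return len(response["FaceMatches"]) > 0

# Hypothetical alerting flow: only bother the homeowner for strangers.
if not is_known_person("family_member.jpg", "front_door_frame.jpg"):
    print("Unrecognized face at the door -- send a smartphone alert")
```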
Facial recognition can also be used to record photos and videos around the house, à la Google Clips or Mayfield Robotics’ Kuri.
Facial recognition can be used to include or exclude certain members of the house from being recorded.
As deployments by companies like Face++ and Baidu in China make plain, your face could also be used to verify purchases or make payments.
Fashion tips and object detection Beyond the use of cameras to scan your face, Amazon’s robot could also deploy computer vision to help you with shopping or style.
Amazon released the Echo Look to make Alexa a fashionista by invite only last spring. It uses computer vision to recognize clothes and recommend outfits.
In tandem with Amazon’s visual search or DeepLens , your robot may be able to do a lot more than identify the members of your household — picking up on clothing brands, works of art, books, and millions of products for sale.
Delivery assistance Fire TV and Echo Dots may be among the most popular items Amazon currently sells, but Amazon is still just a service that wants to sell you absolutely everything.
To that end, Amazon’s new robot could work with Amazon’s many delivery services, whether a drone or Amazon Key, the in-home delivery service launched last fall.
You might feel more comfortable about letting the plumber into your home or having a large item like a couch delivered if you can see it all go down, from the moment the person enters your home to the moment they leave. The combination of a smart lock and Amazon Cloud Cam are central to this service today, but a mobile Amazon robot could make this a lot easier.
More trust on this front could help Amazon expand into the sale of more home furniture or appliances, other items that require installation, or pieces that cannot be simply dropped off at your doorstep.
Companionship features If investments Amazon has made are any indication, the robot could attempt to incorporate lessons learned from Embodied Robotics, a company currently running in stealth mode that received Alexa Fund backing in 2016.
Embodied Robotics was cofounded by University of Southern California professor Maja Mataric. As in her work as director of USC’s Robotics and Autonomous Systems Center, at Embodied Robotics she will focus on making machines that can socialize with humans and act as assistants.
In the past, Mataric has made robots, for example, to help rehabilitate a person who had a stroke and to interact with children on the autism spectrum. A robot named Maki was created to soothe children before they receive an injection at a hospital.
A robot made with a knowledge of how to effectively interact with, and augment, humans could, as Mataric has said in the past , become “a companion that fills the gap of what’s missing.” Pair this with the personality Alexa attempts to demonstrate today — she’s a Seahawks fan, a feminist , and routinely does things like predict Oscar winners — and Amazon’s robot could be endowed with intelligence that makes it not just effective at getting things done, but also well versed in the challenges a machine can encounter when dealing with humans.
Years from now an Amazon robot might be able to do the range of things Mataric has focused on, but in the short term that probably just means being able to carry on with chit-chat, perhaps to combat a loneliness epidemic.
What would be really cool is if, once these robots are made available, users could pick their robot’s personality. Depending on your home, chipper Alexa may be fine, but a little sass like Alice from The Jetsons or morbidity like Marvin from The Hitchhiker’s Guide to the Galaxy may also be in order.
" |
17,006 | 2,018 | "Moxi is a hospital robot with social intelligence | VentureBeat" | "https://venturebeat.com/2018/09/18/moxi-is-a-hospital-robot-with-social-intelligence" | "Moxi is a hospital robot with social intelligence. Moxi the robot.
Diligent Robotics’ socially intelligent robot Moxi made its debut today in pilot programs at hospitals in Texas. Moxi has a face, head, and arm and gets around on four wheels. The robot can use its hand to do things like grab and store medical supplies and deliver them to nurses or doctors.
Trials begin this week at a number of hospitals, including Texas Health Dallas, University of Texas Medical Branch, and Houston Methodist Hospital. By reducing trips to supply rooms, Diligent believes Moxi can fight fatigue in hospital settings and reduce staff turnover.
“As a friendly, sensitive, and intuitive robot, Moxi not only alleviates clinical staff of routine tasks but does so in a non-threatening and supportive way that encourages positive relationships between humans and robots, further enhancing clinical staffs’ ability to and interest in leveraging AI in the health care industry,” CEO Dr. Andrea Thomaz said in a blog post today.
Thomaz is the former director of the Georgia Tech Socially Intelligent Machines Lab and prior to that led the experimental RUBI project to teach Finnish children with a robot. She is also known for training robots like Curi and Simon, and she is currently a member of the faculty in the robotics department at the University of Texas at Austin.
When addressed by a nurse, Moxi shifts the gaze of its LED eyes and turns its head in that person’s direction, and it can even display heart emoji eyes or rainbow eyes when it wants to celebrate.
A combination of Velodyne lidar and cameras in the head and base of the robot is used to guide its movement and help it avoid hitting people.
Moxi says “Hello there” to people it passes in the hall, but it shouldn’t be confused with a conversational bot. Moxi doesn’t play whale sounds or answer random questions like Alexa and Google Assistant do. In fact, Moxi doesn’t do much talking at all, even when it wants to signal that it has entered a room or is changing direction.
“There’s lots of more beeps and whistles that are used for particular context than natural language, and that’s really to communicate that right now, Moxi’s not a chatbot,” Thomaz told VentureBeat in a phone interview.
In trials at hospitals in Texas, Diligent is experimenting with both touchscreen and voice input options for nurses.
“Part of what we’re learning in our pilot deployments over the next several months is how exactly a support task robot like Moxi would best fit into an existing workflow, because every hospital you go to, nurses have a particular way that they do things,” she said.
Moxi is an upgrade from Diligent’s first robot, a prototype named Poli whose initial trials were paid for with funding from the National Science Foundation Small Business Innovation Research program.
Moxi was developed thanks to a $2.1 million funding round in January led by True Ventures. In the redesign, Diligent’s hospital robot was made smaller, its arm was repositioned, and it was given a face.
“There’s this kind of immediate connection that people have with something that has a face, with eyes, and that is the kind of connection that I envision people having with robot teammates. You want this to be a trusted member of the team, so that was our main reason for going more explicitly social,” Thomaz said.
Diligent Robotics was created in 2016 by Thomaz and cofounder Vivian Chu.
Beyond robots that make deliveries in hospitals — like Moxi and those from Savioke — robots are entering hospitals in a variety of ways, like helping comfort kids with cancer, training medical professionals, and assisting in surgeries.
" |
17,007 | 2,018 | "12 ways Alexa is getting smarter | VentureBeat" | "https://venturebeat.com/2018/09/20/12-ways-alexa-is-getting-smarter" | "12 ways Alexa is getting smarter.
In a presentation atop its biodome in Seattle today, Amazon debuted nearly a dozen new products, ranging from next generation versions of popular speakers like the Echo Show and Echo Dot to a microwave, smart plug, and Echo subwoofer, to Echo Auto for cars.
Alongside all that, a whole lot of new features were introduced — both for consumers and developers creating experiences for Echo-enabled devices.
Here’s a quick rundown of features you might have missed.
New music release notifications You will soon be able to ask Alexa to follow an artist to get notifications when they drop a new album or single.
Among all the device news it’s understandable if you missed this new feature, but this may very well be my favorite software update of the day. As simple as it is, telling people when the new album drops is exciting, and since playing music continues to be the most popular way to use a voice-enabled smart speaker, it’s an example of the kind of proactive notification people won’t mind receiving.
Routines for kids In April, Amazon introduced Echo Dot for Kids and the FreeTime service for parents to configure a speaker for their children. Today routines — customizable commands to carry out multiple actions with a single utterance — were also introduced for kids, so parents can select music, timers, personalized messages, and other actions for bedtime or routines before or after school.
Routines for kids follows the introduction last month of new skills for kids and the Alexa Gadgets Toolkit for companies to incorporate Alexa into their toys.
Alexa Guard Alexa Guard harnesses the listening power of Echo speakers to alert people when it hears the sound of breaking glass or a smoke or carbon monoxide alarm. Alexa Guard is also able to connect with existing home security systems from Ring and ADT in order to send alerts to them as well.
Alexa Skills Kit Music Skill Up until now Alexa has worked with a limited number of the most popular music streaming services, but the smart speaker in large part responsible for more people listening to music will open its doors to any music streaming service with the ASK Music Skill.
The ASK Music Skill launches first with Jay-Z’s music streaming service Tidal.
Enhanced video control Today Amazon introduced partnerships with companies like Hulu and NBC as well as the Fire TV Recast, its first dedicated digital video recording device that plays television through antennas around the home.
The new services mean you can now say “Alexa, tune to ESPN on Hulu” or “Alexa, show me my recordings” to see the items you have recorded on your Fire TV Recast.
Recast can also go mobile with apps for Fire tablets or iOS or Android smartphones.
Both services are a big improvement on the first-generation Echo Show’s video abilities, which were limited to things like Twitch and Food Network videos, undercut still more when, a month after launch, Google rescinded support for YouTube.
Video support was also made available to offer additional hands-free help with cooking dinner.
New video options will be necessary for Amazon to capably compete with Google, which has outsold Amazon in global smart speaker sales worldwide for the past two quarters, according to Canalys numbers.
Alexa Presentation Language The Alexa Presentation Language (APL) will allow developers to build skills that connect with televisions using Fire TV, Fire tablets, Echo Show, and Echo Spot in the coming months, as will the Alexa Smart Screen and TV Device SDK. Sony smart TVs and Lenovo tablets have also announced support for visual experiences with Alexa.
APL is a JSON-based language that includes support for text, audio, video, and HTML5. APL is available today in preview.
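For a sense of what a JSON-based APL payload looks like, here is a minimal, illustrative document (a text headline plus an image inside a container); treat the exact fields as an approximation of the format described in Amazon's APL documentation rather than a verified template.

```python
import json

# Illustrative shape of an APL document a skill might return alongside
# its spoken response; component names and fields are approximate.
apl_document = {
    "type": "APL",
    "version": "1.0",
    "mainTemplate": {
        "parameters": ["payload"],
        "items": [
            {
                "type": "Container",
                "items": [
                    {"type": "Text", "text": "Tonight's forecast", "fontSize": "40dp"},
                    {"type": "Image", "source": "https://example.com/forecast.png"},
                ],
            }
        ],
    },
}

print(json.dumps(apl_document, indent=2))
```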
Doorbell API You will soon be able to answer your doorbell and have two-way communications with people at your front door using the Alexa Doorbell API.
The Doorbell API will be part of the Smart Home Camera API and will be made available for manufacturers later this year.
Multiroom music The new Hub, Link and Link Plus were introduced today to enhance people’s listening experience with an Echo speaker, but with an upgrade to multiroom music support, now speakers with Alexa inside, such as those from third-party manufacturers like the Polk Command Bar and Harman Kardon Alert, can also respond when you ask Alexa to play music in a specific room or as part of a group.
Stereo sound between multiple speakers was also introduced today.
Alexa is becoming a better conversationalist Multistep requests are coming soon so Alexa will understand when you say “Alexa, add nacho cheese, tortilla chips, and chili to my shopping list” or “Alexa, play Pandora at volume 8.” Routines will also be able to do more, with location-based triggers so as you pull into your driveway, the lights come on and your podcast starts to play.
Smarter routines were introduced just a week after Siri Shortcuts made their debut for iOS 12.
Alexa will also soon be able to tell you about activity in your email, so you can say “Alexa, do I have emails from Karen?” Hunches This is also a pretty cool way Alexa is getting smarter. With Hunches, Alexa will begin to tell people information based on what it knows from connected sensors or devices in your home. For example, if you say “Alexa, good night,” in response Alexa might say “By the way, your living room light is on. Do you want me to turn it off?” Sometime earlier this year, Alexa started to roll out Briefs, a way to hear a simple affirmative sound instead of more talking. Alexa is getting better about being conversational with features like follow-up, but being smart about when to speak up is key to a better experience. Hunches seems to be going down the right path.
Whisper mode Coming soon: Alexa will be able to tell when you’re whispering, and will in turn whisper back. That’s awfully helpful in those moments when you’re trying to speak to your intelligent assistant but your kid or spouse is asleep nearby.
Amazon and Alexa skills developers have been able to make the AI assistant whisper for some time now with SSML tags, but Amazon VP of device and services David Limp said the new feature is powered by deep neural nets to understand when a person is whispering.
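For context, the pre-existing SSML route wraps a response in Alexa's whisper effect tag; a minimal helper for building such a response might look like the sketch below, with the surrounding skill plumbing omitted.

```python
def whispered_ssml(text: str) -> str:
    """Wrap response text in the Alexa SSML whisper effect."""
    return (
        "<speak>"
        f'<amazon:effect name="whispered">{text}</amazon:effect>'
        "</speak>"
    )

# A skill would return this string as SSML-typed output speech.
print(whispered_ssml("Good night. I'll keep my voice down."))
```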
Skype video calls One major takeaway from today’s event is that the new 10-inch Echo Show will be able to play a lot more video, and will be able to make phone or video calls with Skype.
This builds upon the partnership between Microsoft and Amazon, which brought Alexa to Windows 10 PCs and Cortana to Echo speakers last month. Skype has been downloaded more than a billion times, and last year Microsoft reported the service had more than 300 million monthly active users. That’s a big improvement on the DropIn feature in the Alexa app that started from zero in the first-generation Echo Show. Pretty simple math: When more people have the software for video calls, it just makes it that much more likely that people will actually utilize the service.
" |
17,008 | 2,018 | "Temi raises $21 million for a home robot that can follow you around | VentureBeat" | "https://venturebeat.com/2018/12/19/roboteam-raises-21-million-for-temi-a-home-robot-that-can-follow-you-around" | "Temi raises $21 million for a home robot that can follow you around. Roboteam's Temi home robot.
Temi, a robotics startup headquartered in New York, is betting big on its eponymous Temi telepresence product — and it’s taking new investors along for the ride.
The company — which also has a research and development lab in Tel Aviv and a manufacturing location in Shenzhen, China — today revealed that it has secured $21 million in series B financing led by John Wu, Alibaba’s former chief technology officer and a longtime existing investor, with participation by Generali Investments and Hong Kong internet of things and wellness giant Ogawa. Ogawa has agreed to partner with Temi to establish a marketing and distribution plan, with the ambitious goal of making the startup’s products available in over 180,000 points of sale around the world.
The upcoming Consumer Electronics Show in January will kick off Temi’s official launch, following limited availability at 15 locations in the U.S. and online. Temi’s series B comes after its parent company, Roboteam, raised $50 million from FengHe Investment Group and Wu three years ago and a $12 million round in 2015, and brings Temi and Roboteam’s combined total raised to $83 million.
“This investment aligns with our company’s strategy to broaden our global business with worldwide partnerships and to transition seamlessly and quickly from R&D phase to sales mode,” Yossi Wolf, Temi’s CEO and founder, said. “Temi incorporates unparalleled technology at a reachable price, making it the ultimate way for people to communicate … It will dramatically transform the way we conduct business and connect with loved ones from afar.” Temi is something of a pivot for Roboteam, Temi’s parent company, which for the better part of eight years developed all-terrain robotic systems for military deployments. It’s a lucrative business — Temi’s defense contracts have generated over $100 million in revenue — but the way Wolf tells it, a consumer robot was always the endgame.
Above: Temi, along with its companion app.
“I’m an ’80s kid. I grew up dreaming of robots, like in the Jetsons ,” Wolf said. “It’s always been a desire of mine to make robots for the consumer and home space.” Temi made its world premiere at a press event during IFA 2018 in Berlin, which was attended by VentureBeat. The 3-foot-tall, 4-wheeled robot sports a 10.1-inch QHD (2,560 x 1,600 pixels) IPS LCD touchscreen connected to a motorized bracket; a hexa-core ARM processor that powers an Android-based operating system; Wi-Fi, Bluetooth, and cellular connectivity; a battery that lasts eight hours on a charge; and myriad sensors including four omnidirectional microphones, two infrared depth cameras, two RGB cameras (one 13MP sensor and one 5MP wide-angle sensor), five proximity sensors, six time-of-flight linear sensors, and an inertial measurement unit.
Leveraging the aforementioned sensors in addition to an in-house ARM-based PCB and lidar array, Temi can autonomously explore its surroundings and generate a 3D map, and in time start to “remember” particular locations. It’s even robust enough to follow a person through a room (by tracking their legs) while avoiding obstacles such as furniture, pets, and laundry.
On the software side of things, Temi is controlled in one of two ways: with an app or through voice recognition. Using the app, users dialing into its built-in teleconferencing software can speak through Temi’s color display and 20-watt Harman Kardon speakers. The voice recognition — powered by a homegrown assistant that currently supports English and Chinese — lets users within earshot instruct it to play music, pull up YouTube videos, search for restaurants, or get a weather forecast for the week ahead.
Temi supports virtually any Android app via a software development kit (SDK). And eventually, the robot will integrate more closely with Temi’s companion smartphone app — the team is working on an “activity history” timeline that shows past video sessions and voice search results, as well as a feature that lets video chat participants tap on a person to have Temi follow them around.
Temi will start at $1,500 — less than a typical MacBook, Wolf points out. Temi plans to ramp production up to as many as 30,000 units a month by the end of December, when preorders are expected to begin shipping in earnest.
" |
17,009 | 2,019 | "Sweeping changes: How iRobot evolved from military robots to autonomous vacuums | VentureBeat" | "https://venturebeat.com/2019/06/18/sweeping-changes-how-irobot-evolved-from-military-robots-to-autonomous-vacuums" | "Sweeping changes: How iRobot evolved from military robots to autonomous vacuums.
Perhaps no company is as synonymous with robot vacuums as iRobot. To date, the Bedford, Massachusetts-based firm has sold more than 25 million units to customers around the world, and it has an estimated 88% share of the robot vacuum market. But beyond the iconic Roomba, iRobot counts in its sprawling portfolio Mira, a pool-cleaning robot; RP-VITA, a medical robot that can plug into diagnostic devices like otoscopes and ultrasound machines; and Seaglider, a long-range dual-role autonomous underwater vehicle. In truth, the Roomba didn’t emerge publicly until over a decade after iRobot’s founding — and years after the company had devoted substantial resources to robotics contracts with the U.S. military.
For a glimpse into iRobot’s fascinating history and the Roomba’s origin story, we spoke with iRobot cofounder and CEO Colin Angle and chief technology officer Chris Jones. The pair shed light on the company’s current product lineup and offered insights into the barriers standing in the way of versatile, affordable, and highly capable home robots.
Baby steps iRobot has its roots at MIT, where Angle studied as an undergraduate. In 1990, he founded iRobot with two partners: Australian roboticist and MIT professor Rodney Brooks, who’s credited with advancing the idea that robots’ cognitive abilities must be based on actions within the environment, and NASA Jet Propulsion Laboratory researcher Helen Greiner.
The small team architected Genghis and Ariel, robots designed for space exploration and mine disarmament, respectively. Genghis, a six-legged robot with an 8-bit microprocessor and 256 bytes of RAM, was inspired by insects; Angle noted in his undergraduate thesis that flies could navigate environments with incredibly basic neural pathways. This crystalized what Angle and colleagues dubbed “subsumption architecture,” where processes subsume one another in sequences of hundreds, together demonstrating emergent behavior like climbing over terrain and progressing toward an endpoint.
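To make the subsumption idea concrete, here is a toy sketch of a priority-layered controller in which a higher-priority behavior suppresses the ones below it; the behaviors and sensor flags are invented for illustration and are far simpler than anything iRobot actually shipped.

```python
from typing import Callable, Optional

# Each behavior looks at the sensor state and either claims control by
# returning an action or defers by returning None.
Behavior = Callable[[dict], Optional[str]]

def avoid_cliff(sensors: dict) -> Optional[str]:
    return "back_up" if sensors.get("cliff_detected") else None

def escape_bump(sensors: dict) -> Optional[str]:
    return "turn_away" if sensors.get("bumper_pressed") else None

def wander(sensors: dict) -> Optional[str]:
    return "drive_forward"  # lowest layer always has something to do

# Higher layers subsume (override) lower ones whenever they fire.
LAYERS = [avoid_cliff, escape_bump, wander]

def arbitrate(sensors: dict) -> str:
    for behavior in LAYERS:
        action = behavior(sensors)
        if action is not None:
            return action
    return "stop"

print(arbitrate({"cliff_detected": True}))   # -> back_up
print(arbitrate({"bumper_pressed": True}))   # -> turn_away
print(arbitrate({}))                         # -> drive_forward
```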
The pioneering work led to the creation of NASA’s Mars Sojourner rover, and soon after iRobot’s services were in demand. The company worked with Johnson Wax to build a product that autonomously cleaned floors in supermarkets and shopping centers, and iRobot collaborated with Hasbro on toys like My Real Baby, which employed animatronic facial expressions meant to mimic those of a human infant. In 1998, it nabbed one of its first research contracts from DARPA, the branch of the U.S. Department of Defense that spearheads R&D of emerging technologies for military applications.
“What we were really trying to say with iRobot was ‘Forget all these preconceptions of what things are supposed to be — how can we actually solve [hard] problems [in robotics]?'” said Angle. In DARPA’s case, that problem was stairs. The agency wanted a machine that could climb steps on its own, without human guidance.
Above: iRobot’s PackBot 510.
“We won a contract for $120,000 to write a proposal on how to build a stair-climbing robot, and we were [competing] against JPL, Northrop Grumman, Boeing, Lockheed — a bunch of huge defense companies against little iRobot,” Angle said. “We got to go present, and it was like this big conference room and all these [people] lined up in evaluation panels.” Their prototype would later become PackBot, a sensor-packed robot that’s been deployed in Iraq and Afghanistan. It was used to search the debris of the World Trade Center after 9/11, and it helped scientists assess the damaged Fukushima nuclear power plant in the aftermath of the 2011 Tōhoku earthquake and tsunami. Its large caterpillar track treads, which human pilots control with video game-style joysticks and buttons, enable it to traverse mud, rocks, stairs, and other obstacles (including shallow pools of water) and climb an up to 60-degree incline. It can do all of this while carrying payloads weighing more than 40 pounds.
The selection committee wasn’t impressed by the team’s description. “They told us, ‘We gave you a chance with this robot, and there’s no way this design would actually work,'” recalls Angle. Anticipating their skepticism, iRobot’s engineers brought in a proof of concept — Urbie — constructed from lightweight, machined aluminum, with compartments for LEDs and cameras and a handheld remote control with an LCD screen.
“Everybody wanted to have a robot that would climb up the stairs like a human, but that costs 1,000 times more and is 10 times slower than what we did with treads. We showed [the panel how it] drove up the stairs, and that was the moment,” said Angle. “It was a $4 million [follow-on] contract, and this was a turning point for the company.” In some respects, PackBot — now the domain of iRobot’s defense division, which the company sold to Arlington Capital Partners for $45 million in February 2016 — became a symbol of iRobot’s commitment to practical robots engineered with bankable applications in mind. “Being practical at the end of the day and focusing on a task you’re trying to accomplish is going to give you the best overall result,” said CTO Jones, who was involved in robotics research and development at the Artificial Intelligence Lab at the University of Zurich prior to joining iRobot in 2005. “This is as opposed to trying to build something that does many things.” Sweeping carpets with AI Enter Roomba. “When we started with Roomba, people didn’t even think it was a robot. It was an automatic vacuum,” explained Angle. “The mental image of how robots are going to vacuum was a humanoid pushing a manual upright vacuum, and that’s so profoundly wrong on many levels. It’s just about the most complicated, expensive way of creating a robot vacuum you can possibly imagine.” Roomba’s form factor might not resemble Rosie from The Jetsons , but Angle asserts it’s ideally suited for ducking around tables and chairs because of its small size. The two-wheeled, disc-shaped autonomous vacuum can detect the presence of obstacles and sense steep drops (with cliff sensors) to keep it from falling down stairs or off tall balconies. Most models have a pair of brushes rotating in opposite directions and a horizontally mounted side-spinning brush that sweeps against walls, followed by a vacuum that directs airflow through a narrow slit.
Second- and third-generation Roomba models (the 400 series and 500 series) have a self-charging, bin-emptying home base that they seek out at the end of each cleaning session via embedded infrared beacons. First-generation Roomba units lacked it, which, according to Angle, severely inhibited their autonomy. “Prior to the home base, the idea that the robot had to go out [and] do this job successfully every single time was nice, but not quite what we’d pictured in our heads,” he said. “Turning off the robot or letting it die is killing it — you have to go to it, pick it up, and move it back to its place. When you add the home base, you change the game.” Early Roomba models were also relatively static in their approach to sweeping. They relied on iRobot’s in-house iAdapt Responsive Cleaning Technology, which stemmed from Brooks’ research: a set of heuristics and single algorithms like spiral cleaning, room crossing, wall-following, and random-walk angle-changing triggered by collisions with walls and furniture. As a result, primitive Roomba units covered some areas more frequently than others and took several times longer to clean rooms than a human would.
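A heavily simplified sketch of that reactive, collision-triggered style of coverage is shown below; the switching rules and turn angles are invented for illustration and are not iRobot's actual parameters.

```python
import random

def next_heading(heading_deg: float, bumped: bool) -> float:
    """Random-bounce heuristic: hold course until a collision, then turn
    away by a randomly chosen angle, as early Roombas did."""
    if not bumped:
        return heading_deg
    return (heading_deg + random.uniform(90.0, 270.0)) % 360.0

def choose_behavior(minutes_elapsed: float, bumped: bool) -> str:
    """Toy scheduler cycling through the coverage behaviors named above."""
    if bumped:
        return "random_bounce"
    if minutes_elapsed < 2.0:
        return "spiral"       # clean outward from the starting point
    if minutes_elapsed < 5.0:
        return "wall_follow"  # hug edges and sweep along walls
    return "room_cross"       # drive straight lines across open floor
```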
Subsequent Roombas (the 600 series, 700 series, and 800 series) expedited dust busting with a forward-looking, obstacle-detecting infrared sensor. A new imaging sensor used odometry to infer distance traveled from wheel turns, and internal sensors identified particularly dirty spots on floors.
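The wheel-turn bookkeeping behind that distance estimate is standard differential-drive odometry; a minimal version of the pose update, using made-up encoder and wheel dimensions, looks like this.

```python
import math

TICKS_PER_REV = 500.0      # encoder ticks per wheel revolution (illustrative)
WHEEL_DIAMETER_M = 0.07    # wheel diameter in meters (illustrative)
WHEEL_BASE_M = 0.23        # distance between the two wheels (illustrative)

def odometry_update(x, y, theta, left_ticks, right_ticks):
    """Advance the (x, y, theta) pose estimate from wheel encoder ticks."""
    meters_per_tick = math.pi * WHEEL_DIAMETER_M / TICKS_PER_REV
    d_left = left_ticks * meters_per_tick
    d_right = right_ticks * meters_per_tick
    d_center = (d_left + d_right) / 2.0          # distance the center moved
    d_theta = (d_right - d_left) / WHEEL_BASE_M  # heading change in radians
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    return x, y, (theta + d_theta) % (2.0 * math.pi)
```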
The most substantial hardware upgrade to date arrived in the Roomba 900 series , which added a visual simultaneous localization and mapping (vSLAM) system that generated pixel maps of unknown rooms and tracked the robot’s location within those rooms. iRobot wasn’t the first to market with a vSLAM-capable vacuum, but it laid the groundwork for improvements to come.
Smart vacuums The advent of modern AI techniques has accelerated the pace of robotics innovation, particularly in computer vision, according to Jones. It’s the subfield that deals with how computers gain high-level understanding from images or videos, and it seeks to replicate digitally what the human visual cortex can do naturally.
“There’s been a step change of improvement … with deep learning,” said Jones, referring to the neural networks at the heart of most AI systems today. Deep neural networks consist of “neurons,” or mathematical functions loosely modeled after biological neurons, that are arranged in layers and connected by “synapses” that transmit signals to other neurons. Those signals — the product of input data — travel from layer to layer and slowly “tune” the network by adjusting the synaptic strength (weights) of each connection. Over time, the network extracts features from the data set and identifies cross-sample relationships, eventually learning to make predictions.
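As a bare-bones illustration of the "tuning" the description refers to, the toy example below pushes a batch of made-up inputs through a single weighted layer and nudges the weights to reduce a squared error; production perception models are vastly larger and deeper, so this only shows the mechanics.

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(4, 1))          # one layer of "synapses"
inputs = rng.normal(size=(8, 4))           # a tiny batch of made-up features
targets = rng.integers(0, 2, size=(8, 1))  # made-up binary labels

def forward(x: np.ndarray) -> np.ndarray:
    """Neuron activations for one sigmoid layer."""
    return 1.0 / (1.0 + np.exp(-(x @ weights)))

for step in range(200):
    predictions = forward(inputs)
    error = predictions - targets                       # how wrong the network is
    gradient = inputs.T @ (error * predictions * (1.0 - predictions))
    weights -= 0.5 * gradient                           # adjust synaptic strengths
```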
“The ability of the robots to visually perceive the world really forms the basis of how they’re able to kind of carry out the tasks you’re seeing [them complete] today,” continued Jones. “You can have a robot clean your kitchen autonomously or have a team of robots mop and vacuum your entire house. [Deep learning is] key to the innovations we’re showing in our latest products.” To that end, iRobot’s Roomba i7, which debuted in late 2018, features an upgraded imaging sensor that retains a memory of the room maps it generates and an understanding of the spatial relationships among subroom visual markers. Owners can command the i7 via iRobot’s mobile app or an intelligent voice assistant (like Amazon’s Alexa or Google Assistant) to clean any room, and it’s able to localize itself even if you pick it up and move it to another place.
Getting the i7 to reliably figure out where it is in space was deceptively difficult, said Angle, because of home environments’ dynamic nature. For instance, the Roomba engineering team had to deal with changing light conditions, which still throw some Roomba units for a loop — according to iRobot, the 900 series’ vision-based system needs at least some light for autonomous navigation, and their range is severely limited in very low light.
“It [takes] a lot of AI to get the robot to figure out where [it] is in space, but it’s so important,” explained Angle. “Success isn’t cleaning a home twice in a row or even three times in a row. Success is doing it 20, 50, or 100 times in a row.” With positional awareness more or less solved, the Roomba team moved to address another longstanding problem plaguing the Roomba lineup: a lack of interconnectivity with complementary products. The Roomba s9+ and the Braava jet m6 , which were announced in May, tap iRobot’s Imprint Link Technology to communicate with each other wirelessly over the internet. First, the s9+ (or i7+, which is also supported) vacuums the floor in selected areas, and when it’s finished and docked, the Braava jet m6 mops those same floors with an electrostatic pad.
The s9+’s other innovations are a 3D sensor that helps it find its way around large spaces, along edges, and deep into corners, and an anti-allergen system that traps and locks pollen and mold allergens to keep them from escaping the robot or its dock. As for the Braava jet m6, it features a wet mopping mode designed to tackle sticky messes, grime, and kitchen grease.
Beyond robots that mop and sweep up dust from tiled floors, iRobot is dipping its toes into the autonomous lawnmower market with Terra , which features “state-of-the-art” mapping techniques borrowed from the company’s Roomba line. Using Imprint Smart Mapping, it’s able to “remember” its location in any yard it’s seen before and cut the grass in parallel back-and-forth lines, like a human would. And unlike most robot lawnmowers on the market today, which require boundary wires, Terra is able to suss out its location from beacons placed strategically around the yard.
Like its vacuuming counterparts, Terra automatically returns to a home base to recharge before completing mowing assignments if the battery dips below a certain threshold. It sports a ruggedized exterior designed to protect against inclement weather, and you can adjust things like precision, grass height, and time-of-day preferences from the iRobot Home App.
As previously announced, Angle notes that Terra will be available in Germany and as part of a beta program in the U.S. sometime later this year.
Home robot stasis What lies beyond lawnmowers, mops, and vacuums for iRobot in the future? Angle declined to share specifics but said that dollars and cents will largely dictate the company’s product roadmap, as has been the case historically. “Ultimately, what the robotics industry needs most isn’t more roboticists, but better business models,” he said. “In addition to trying to solve hard problems … companies [like ours] need a consumer central to what [they’re] trying to do.” Speaking to VentureBeat in an interview last year, Misty Robotics CEO Tim Enwall predicted that every home and office will have a robot within 20 years. But realists like Ken Goldberg, a professor at the University of California, Berkeley, anticipate that it’ll be 5-10 years before we see a mass-produced home robot that can pick up after kids, tidy furniture, prep meals, and carry out other domestic chores.
The consumer robotics market appears to be in a kind of stasis, in any case, exemplified by Bosch’s decision to dissolve home robot startup Mayfield Robotics; Honda’s cancellation of its Asimo program; and the shuttering of Anki Robotics, which had raised nearly $200 million in venture capital from big-name backers like J.P. Morgan and Andreessen Horowitz. In more bad news, Jibo , a startup developing a robot with a custom built-in assistant, shut down earlier this year, and seven months ago industrial robotics company Rethink Robotics closed its doors after trying unsuccessfully to find a buyer.
Robotics isn’t an easy pursuit at commercial scale, said Jones, who described it as an “art.” Despite the emergence of powerful robotics software development platforms like Microsoft’s Robotics Developer Studio and AWS RoboMaker, he says formidable supply chain challenges stand in the way of companies’ success.
“You have electrical, mechanical, software … and all that has to come together in a practical package that actually does something valuable, and getting those to work together efficiently and effectively is a challenge,” he said. “Every home is different — people interact with robots differently. It’s a tall order, and that’s why staying focused on practicality really matters.” iRobot didn’t arrive at this philosophy overnight. Angle points to the Ava 500 , a roving telepresence robot the company created in partnership with Cisco, as one example of an early misfire. Ava didn’t want for features — it sported a 21.5-inch HD screen, an iOS companion app that enabled operators to direct it to rooms or employees on a map, and integration with Cisco’s video collaboration platform — but according to Angle, its marketing failed to properly convey its capabilities.
“We went through a long list of all the things that it could do, which we quickly learned is a terrible way to sell a product,” he said. “Even if it did two out of the 10 things or even five out of the 10, people wouldn’t buy it because it wouldn’t do 10 out of the 10 things. There’s a whole separate, non-technologically driven side to help people understand what robots are meant to do.” A ‘family’ of robots That’s why both Angle and Jones believe a single home robot capable of doing it all — the sort that has dominated science fiction for decades — won’t be feasible in the foreseeable future. They instead predict that a family of machines will work together to perform individual chores like folding clothes, washing the dishes, and assisting older or disabled family members.
“The home can handle several different types of robot. You’re going to be able to buy them incrementally, each specialized to do a purpose really well, and there’s going to be some things where combining functionality into one robot makes sense,” explained Angle.
Paired with spatial knowledge gleaned from mapping data, these innovations could usher in robots whose behaviors take into account the presence or state of furniture, fixtures, electronics, carpets, and even people, according to Angle. “I don’t want the robot to come in when I’m watching TV — the robot needs to understand where the TV is, whether it’s on or off,” he said. “The robot should be able to discover that.” Angle, perhaps anticipating questions about data privacy, asserted that this sharing paradigm would comply with legislation such as General Data Protection Regulation (GDPR), the legal framework that requires businesses to protect the personal data and privacy of European Union citizens for transactions that occur within EU member states. Two years ago, Reuters erroneously reported that iRobot planned to sell maps to third parties like Amazon, Google, and Apple , which Angle says still isn’t in the cards. Instead, he surmises that iRobot will eventually reach a deal to share its maps for free with customers’ consent, extracting value by making its devices more useful in the home.
“Robots have this very unique and valuable position within [the home] … and it’s [with] that understanding of the home that we’re working with partners to develop insight that can improve the value of other products,” said Angle. “The [connected devices] industry is asymptotically hitting a ceiling, because so-called smart devices in the home don’t really understand anything about the home — they’re very narrow. Robots really have an opportunity to make smart homes smarter.” He believes that in order to overcome this hump, voice assistants — which are fast becoming one of the most popular ways Roomba customers interact with their vacuums — must improve in their ability to parse intent rather than continue to rely on “magical phrases” like “Roomba, clean my living room.” Angle’s not alone in this: Amazon recently began piloting AI systems that guess which apps to launch from vague commands and attempt to resolve ambiguous requests.
“‘Roomba, vacuum the kitchen’ is pretty straightforward. It’s an utterance that people can remember, and that kind of works,” said Angle. “But when you’re dealing with more than one machine, it changes on an order of magnitude the complexity of the utterances that you may need to remember. [Voice assistants] should be able to understand a phrase like, ‘Well, can you vacuum the kitchen and then mop the den after I leave to go to the party?'” He added: “[Amazon’s and Google’s] problem is that they’re trying to be everything to everyone, and what I’d like to do instead is to find a way to more richly interact with a more curated set of devices.”
" |
17,010 | 2,019 | "Amazon's Alexa may soon know if you're happy or sad | VentureBeat" | "https://venturebeat.com/2019/07/08/amazons-alexa-may-soon-know-if-youre-happy-or-sad" | "Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Amazon’s Alexa may soon know if you’re happy or sad Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
More U.S. soldiers died during the mid-2000s troop surge in Iraq than in any other combat mission since Vietnam. As soldiers came home in flag-draped caskets, a growing number of veterans who made it back alive committed suicide. According to the VA National Suicide Data Report , veteran suicides surpassed 6,000 a year at one point.
Veterans committing suicide after returning home from Iraq — and Afghanistan — wasn’t just a hard-to-explain tragedy, but a public health crisis.
Today Rohit Prasad is best known as chief scientist for Amazon’s Alexa AI unit, a team experimenting with AI to detect emotions like happiness, sadness, and anger. Back in 2008, to help protect veterans with PTSD, Prasad led a Defense Advanced Research Projects Agency (DARPA) effort to use artificial intelligence to understand veterans’ mental health from the sound of their voice.
Among other things, the program focused on detecting distress in the voices of veterans suffering from PTSD, depression, or suicide risk by looking at informal communications or other patterns.
“We were looking at speech, language, brain signals, and sensors to make sure that our soldiers when they’re coming back home — we can pick [up] these signals much earlier to save them,” Prasad said during an interview last month at Amazon’s re:Mars conference. “Because it was really hard. This is an area where there’s stigma, there’s a lot of [reliance on] self-reporting, which is noisy, and no easy way to diagnose that you’re going through these episodes.” For the project, DARPA teamed up with companies like Cogito, which worked with the U.S. Department of Veteran Affairs to alert doctors when it believed a veteran was in trouble from the sound of their voice.
“This was a team of psychologists who were super passionate [about] this problem, and this was a DARPA effort, which means we had funds,” he said, adding that the mission was to save as many lives as possible. “It’s one of the programs that I didn’t like leaving before I came here.” Amazon’s Alexa AI team is currently experimenting with ways to detect emotions like happiness and sadness, work that was published in research earlier this year. Amazon is working on a wearable device for emotion detection that people can use to understand the feelings of those around them, according to Bloomberg.
Alexa’s emotional intelligence project has been in the works for years now.
Prasad told VentureBeat in 2017 that Amazon was beginning to explore emotion recognition AI but only to sense frustration in users’ voices. He was equally tight-lipped at the recent re:Mars conference.
“It’s too early to talk about how it will get applied. Frustration is where we’ve explored offline on how to use it for data selection, but [I have] nothing to share at this point in terms of how it will be applied,” he said.
How it works Amazon’s emotion detection ambitions are visible in two papers it published in recent months.
Both projects trained models using a University of Southern California (USC) data set of approximately 12 hours of dialogue read by actors and actresses. The data set of 10,000 sentences was then annotated to reflect emotion.
“Multimodal and Multi-view Models for Emotion Recognition” detects what Amazon Alexa senior applied science manager Chao Wang calls the big six: anger, disgust, fear, happiness, sadness, and surprise.
“Emotion can be described directly by numerical values along three dimensions: valence, which is talking about the positivity [or negativity] of the emotion, activation, which is the energy of the emotion, and then the dominance, which is the controlling impact of the emotion,” Wang said.
Above: A graph of how valence, dominance, and activation combine to predict human emotion The work’s multimodal approach analyzes both acoustic and lexical signals from audio to detect emotions. Acoustic looks at sonic and voice properties of speech and lexical looks at the word sequence, explained Amazon Alexa senior applied scientist Viktor Rozgic.
“The acoustic features are describing more or less the style [of] how you said something, and the lexical features are describing the content. As seen in the examples, they are both important for emotional connection. So after the features are extracted, they are fed into a model — in our case, this will be different neural network architectures, and then we finally make a prediction, in this case anger, sadness, and a neutral emotional state,” he said.
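For a concrete picture of the fusion Rozgic outlines, here is a minimal PyTorch sketch of one plausible architecture: an acoustic branch and a lexical branch whose outputs are concatenated and scored against anger, sadness, and neutral. The layer sizes, the concatenation-based fusion, and the feature dimensions are illustrative assumptions, not Amazon's actual model.

import torch
import torch.nn as nn

class MultimodalEmotionClassifier(nn.Module):
    # One branch encodes acoustic features (how something was said), the other
    # encodes lexical features (what was said); the fused vector is classified.
    def __init__(self, acoustic_dim=40, lexical_dim=300, hidden=64):
        super().__init__()
        self.acoustic_net = nn.Sequential(nn.Linear(acoustic_dim, hidden), nn.ReLU())
        self.lexical_net = nn.Sequential(nn.Linear(lexical_dim, hidden), nn.ReLU())
        self.classifier = nn.Linear(2 * hidden, 3)  # anger, sadness, neutral

    def forward(self, acoustic, lexical):
        fused = torch.cat([self.acoustic_net(acoustic), self.lexical_net(lexical)], dim=-1)
        return self.classifier(fused)  # unnormalized class scores (logits)

model = MultimodalEmotionClassifier()
logits = model(torch.randn(8, 40), torch.randn(8, 300))  # a batch of 8 utterances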
“Multimodal and Multi-view Models for Emotion Recognition” was accepted for publication by the 2019 Association for Computational Linguistics (ACL).
Above: Model performance metrics for “Multimodal and Multi-view Models for Emotion Recognition” The other paper Amazon recently shared — “Improving Emotion Classification through Variational Inference of Latent Variables” — explains an approach to achieving slight improvements in valence to predict emotion.
To extract emotion from audio recordings, the human speech in each recording is mapped to a sequence of spectral vectors, which is fed to a recurrent neural network that acts as a classifier to predict anger, happiness, sadness, and neutral states.
“We’re feeding the acoustic features to the encoder, and the encoder is transforming these features to a lower dimensional representation from which [the] decoder reconstructs the original audio features and also predicts the emotional state,” Rozgic said. “In this case, it’s valence with three levels: negative, neutral, and positive, and the role of adversarial learning is to regularize the learning process in a specific way and make the representation we learn better.” “Improving Emotion Classification through Variational Inference of Latent Variables” was presented this spring at the 2019 International Conference on Acoustics, Speech, and Signal Processing.
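The encode-reconstruct-classify pattern Rozgic describes can be sketched in a few lines. The example below is a simplified stand-in: the paper's variational and adversarial terms are omitted, and the dimensions and layer choices are assumptions for illustration only.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ValenceAutoencoder(nn.Module):
    # The encoder compresses acoustic features into a small latent vector, the
    # decoder reconstructs the original features from it, and a small head
    # predicts valence as negative, neutral, or positive.
    def __init__(self, feat_dim=40, latent_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, feat_dim))
        self.valence_head = nn.Linear(latent_dim, 3)

    def forward(self, features):
        latent = self.encoder(features)
        return self.decoder(latent), self.valence_head(latent)

model = ValenceAutoencoder()
features = torch.randn(8, 40)
reconstruction, valence_logits = model(features)
# Joint objective: reconstruct the input features and classify valence.
loss = F.mse_loss(reconstruction, features) + F.cross_entropy(valence_logits, torch.randint(0, 3, (8,)))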
Research by Rozgic, Prasad, and others published by the 2012 International Speech Communication Association conference Interspeech also relies on acoustic and lexical features.
The evolution of emotion and machine intelligence In addition to offering details about Amazon’s emotion detection ambitions, a session at re:Mars explored the history of emotion recognition and emotion representation theory, what Wang called the foundation for emotion recognition research led by schools like USC’s Signal Analysis and Interpretation Lab and MIT’s Media Lab. Advances in machine learning, signal processing, and classifiers like support vector machines have also moved the work forward.
Applications of the tech range from gauging reactions to video game designs and marketing material like commercials, to powering car safety systems that watch for road rage or fatigue, to helping students who use computer-aided learning, Wang said. The tech can also be used to help people better understand others’ emotions, as in a project reportedly being developed by Amazon.
Though advances have been made, Wang said emotion detection is a work in progress.
“There’s a lot of ambiguity in this space — the data and the interpretation — and this makes machine learning algorithms that achieve high accuracy really challenging,” Wang said.
" |
17,011 | 2,019 | "Alexa Guard now detects human activity inside your home | VentureBeat" | "https://venturebeat.com/2019/09/25/alexa-guard-now-detects-human-activity-inside-your-home" | "Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Alexa Guard now detects human activity inside your home Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Alexa’s getting better at detecting potential home or apartment break-ins when you’re not around. During an event today in Seattle, Amazon introduced a new and improved Alexa Guard , the Alexa feature that sends users notifications when Echo smart speakers hear the sound of breaking glass or a smoke or carbon monoxide alarm. Now, Alexa Guard can detect human activity when it’s in Away Mode, and it can be added to Routines.
“We’ve been working on the science training the local model to understand what sounds are correlated with different activities, so … Alexa can notify you when she detected the likelihood of activity in your home,” wrote Amazon in a press release, adding that Guard integrates with Ring and ADT so that alerts can be simultaneously sent to a security provider.
Amazon previously said it trained machine learning models on hundreds of sound samples of glass breaking, which it compiled from contractors. It’s unclear what sort of data set was used for the human activity detection, but we’ll update this post once we learn more.
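As a rough illustration of how a sound-event detector of this kind can be built, the sketch below turns short audio clips into averaged log-mel features and fits a binary "glass break vs. background" classifier. The synthetic clips, feature choices, and classifier are stand-in assumptions; Amazon has not described its model in this detail.

import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

SR = 16000  # sample rate for the one-second toy clips

def clip_features(clip):
    # Average a log-mel spectrogram over time to get one vector per clip.
    mel = librosa.feature.melspectrogram(y=clip, sr=SR, n_mels=32)
    return librosa.power_to_db(mel).mean(axis=1)

rng = np.random.default_rng(0)
background = [rng.normal(scale=0.1, size=SR) for _ in range(20)]
glassy = [rng.normal(scale=0.1, size=SR) + np.sin(2 * np.pi * 6000 * np.arange(SR) / SR)
          for _ in range(20)]  # crude stand-in for a high-pitched breaking sound

X = np.stack([clip_features(c) for c in background + glassy])
y = np.array([0] * 20 + [1] * 20)  # 0 = background, 1 = glass-like event
classifier = LogisticRegression(max_iter=1000).fit(X, y)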
As before, saying a command like “Alexa, I’m leaving” immediately transitions your Echo devices into Guard Mode with a few smart alerts that, if detected, send a notification to your phone. With the new Routines integration, Guard will also lock any doors with compatible smart locks, and it can randomize compatible lights when you’re away to make it appear as if you’re there.
It’s as easy as ever to set up Alexa Guard, which began rolling out to Echo devices late last year. Here’s how: Open the Alexa app on your phone.
Tap the menu button on the top left.
Select Settings.
Select Guard.
Tap Set up Guard.
Tap Add to detect smart smoke alarms and carbon monoxide detectors, activate smart lighting, and enable smart alerts for broken glass and human activity.
Enter your zip code, so smart lighting knows when to turn on.
Tap Confirm.
" |
17,012 | 2,019 | "Sony's PlayStation business gets new president and CEO, Jim Ryan | VentureBeat" | "https://venturebeat.com/2019/02/11/jim-ryan-hmmmmm" | "Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sony’s PlayStation business gets new president and CEO, Jim Ryan Share on Facebook Share on X Share on LinkedIn Why would anybody want to play that? Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Learn more.
Sony’s gaming business has a new leader stepping in to take the top job. Jim Ryan will take over as SIE president and CEO beginning April 1. He is swapping places with current president and CEO John Kodera, who will take over Ryan’s role as SIE deputy president.
Sony is noting that Kodera will continue his work toward building up the company’s gaming and content services. Ryan, meanwhile, will begin overseeing the entirety of the PlayStation business.
Ryan built a reputation after leading the PlayStation brand to dominance in Europe. He oversaw Sony Computer Entertainment Europe for years. He continued as president of Sony Interactive Entertainment Europe before getting a promotion in 2018 to deputy president.
But PlayStation fans may know him for occasionally sticking his foot in his mouth. Like when he spoke about backward compatibility.
“I was at a Gran Turismo event recently where they had PS1, PS2, PS3, and PS4 games,” Ryan explained to Time Magazine in June 2017.
“And the PS1 and PS2 games, they looked ancient. Like, why would anybody play this?” You can purchase a PlayStation Classic with 20 PlayStation 1 games at a retailer near you right now for $100.
Why is Sony making this change? Sony is nearing the end of one of its most successful consoles ever. The PlayStation 4 has surpassed 91.6 million systems sold as of January, and it is likely going to eclipse 100 million this year.
So why is Sony making a change? Sony chief executive officer Kenichiro Yoshida says he wants someone to focus on growing PlayStation as an overarching platform.
“Our Game & Network Services business has grown into the Sony Group’s largest business in terms of both sales and operating income,” Yoshida said in a statement. “Furthermore, our business in this domain holds significant importance as our growth driver going forward. At the same time, this industry is relentlessly fast-moving, and to remain the market leader, we must constantly evolve ourselves with a sense of urgency.” Yoshida said he decided to mix up the leadership team after talking with Kodera. Kenichiro said that while he wants to keep Kodera in charge of expanding network services, Ryan has the experience to ensure stability within PlayStation.
“Jim Ryan has been long committed to the growth of the PlayStation business for the last 25 years,” said Yoshida. “I believe that this new structure – where Jim will manage SIE’s overall organization and operations, and which will allow John to focus on the key mission to further develop PlayStation Network that has now grown into an immensely large platform with over 90 million Monthly Active Users worldwide – will enable SIE to accelerate its innovation and evolution even further.”
" |
17,013 | 2,019 | "PlayVS raises $50 million more for high school esports platform | VentureBeat" | "https://venturebeat.com/2019/09/18/playvs-raises-50-million-more-for-high-school-esports-platform" | "Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages PlayVS raises $50 million more for high school esports platform Share on Facebook Share on X Share on LinkedIn Delane Parnell is CEO of PlayVS.
Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Learn more.
PlayVS has raised $50 million for its platform for high school esports.
The round comes just 10 months after the company raised a previous round of $30.5 million and 15 months after PlayVS raised $15 million. That’s $96 million in only 15 months (Update 9/18/19 7:54 a.m. Pacific time: 13 months, considering that the latest funding closed two months ago).
That’s a whirlwind fundraising record, and Delane Parnell, CEO of the company, said in an interview with GamesBeat that the reason the money keeps coming is that the Los Angeles company has executed on its plans to build a competitive gaming platform for tournaments for high school esports players. For instance, the company will expand to all 50 states with its Seasons events and platform by this fall.
Repeat investor NEA led the round, with participation from Battery Ventures , Dick Costolo and Adam Bain of 01 Advisors, Sapphire Sport, Michael Zeisser, Dennis Phelps of IVP and Michael Ovitz, co-founder of CAA.
Getting traction Above: Now you can play esports in high school with PlayVS.
So what has been so impressive about this 20-month-old company for investors? “There’s a lot to be excited about when you think about just the last year,” Parnell said. “We started a company in January of 2018, we started that partnership. We built the product and the team. We launched our Seasons events. Since then, we completely executed this against all of the things that we said we were going to do. We successfully completed our seasons with three games. We grew dramatically from the first season to the second season.” “We’ve had a lot of success by being focused,” Parnell said. “Other esports companies have not been focused on building really good products first. We spend our time thinking about PlayVS and our community — coaches, administrators, players, parents, and teachers. We are on the front lines having conversations with the stakeholders.” PlayVS has previously announced new game partnerships with Psyonix and Hi-Rez Studios, publishers of Rocket League and Smite respectively. It also partnered with Riot Games’ League of Legends. PlayVS has not announced any games for upcoming seasons yet.
PlayVS continues to focus on enhancing students’ experiences by adding popular game titles and more state associations in order to grow high school esports’ audience and attract players interested in other game genres. The company has partnered with nine new state associations for statewide championships since its previous announcement.
“We’ve believed in Delane’s vision for PlayVS since its founding and we’re honored to continue partnering with the company through this next phase of growth,” said Rick Yang, partner at NEA, in a statement. “It’s been tremendously exciting to witness PlayVS catalyze the growth of high school esports — and this is just the beginning! The platform has the promise and potential to shape the future of esports at scale.” The company has also grown from 16 employees at the end of 2018 to a current total of 41, with plans to double that before the end of the year.
“Battery Ventures always strives to work with companies that are seeking to define the future of an industry, and that’s exactly what PlayVS is doing with its platform,” said Roger Lee, general partner at Battery Ventures, in a statement. “By providing access to leading game titles and creating a pipeline for esports athletes, PlayVS is filling a huge gap in the market and we’re eager to see what’s next.“ The software startup’s first product, Seasons, was released in five states in October 2018 and expanded to eight states in the spring of 2019. Of the thousands of players who participated last year, 51% of students were new to the games that they played. On average, schools had 15 students participate in esports, which is more than half of the average ice hockey participation (27 students) and exactly half of the average baseball and basketball participation (30 students).
Over 13,000 schools, which represent 68% of the country, are on the waitlist to build an esports program through PlayVS. For comparison, 14,247 high schools in the US have a football program. And beyond high school, there are more avenues for the company to expand, Parnell said.
“We’re onboarding as many schools as we can, and we’ve empowered the coaches to build their teams,” Parnell said. “At the end of the spring season, we created a wait list. A lot of our investors are excited about those numbers.” Through these partnerships, students will have access to these games as part of their PlayVS league participation fee. For example, with free-to-play games, students will receive in-game perks like Champion Unlocked for League of Legends; for paid games, publishers will provide copies of their game to every school competing in their league on the PlayVS platform. This gives students a level playing field.
PlayVS said all of these games require critical thinking and teamwork, which are valuable skills students can gain through participation in esports. Unlike traditional sports, joining a PlayVS team does not involve tryouts, cuts, or any experience – just the desire to play. Students can sign up on the PlayVS website for the inaugural season now.
Unlike traditional sports, PlayVS teams can be comprised of any students, without tryouts and regardless of experience, gender or age. There will be no limit to how many unique teams each school can have, which creates a “no-cut” environment and allows all students the chance to compete in esports at the varsity level.
Building the team Above: Rocket League in action.
PlayVS also said it has added three new executives: chief financial officer Gabi Loeb, a longtime finance executive at startups and corporations including Zefr (backed by IVP) and News Corp.; chief technology officer Neel Palrecha, Y Combinator alum with experience from Snapdocs and Headspace; and vice president of growth Robert Lamvik, previously at Headspace and Spotify, where he led the performance marketing team through the transition from paid-only to a freemium business model.
“We’ve done a good job hiring against our goals,” Parnell said. “We built our team at the intersection between esports and tech and education. And that’s allowed us to have great success just given our relentless focus on sort of executing against our vision.” The road ahead Above: League of Legends is turning 10.
The fall season starts on October 21st and runs through January 2020. Schools have until October 11th to sign up for the fall season or can opt to join in the spring season, which starts February 2020.
The participation fee is $64 per player, paid for by either the parent/guardian or the school. This cost provides access to in-game content, valued at more than $700, or the game itself, which ranges from $20 to $60.
For the first time, PlayVS will service all 50 states (including Washington, D.C.). And 15 states will compete for a championship, in partnership with their state association. These states include Alabama, Alaska, Arizona (AIA, CAA), Arkansas, California, Colorado, Connecticut, Georgia (GHSA, GISA), Hawaii, Kentucky, Massachusetts, Mississippi, Rhode Island, Virginia, and Washington D.C.
States not endorsed by their state association will compete regionally for a PlayVS Championship. PlayVS’s rivals include the High School eSports League and the All-Star eSports League.
Parents and esports Above: Nubar “Maxlore” Sarafian reps Misfits’ League of Legends EU team.
I asked Parnell if parents are the enemy, as they likely would rather have their kids read books in high school instead of play video games.
He replied, “Parents want their kids to just succeed. Every parent believes their child is the next Michael Jordan and whatever talent they have. Maybe it is playing tennis or soccer. Every parent thinks their kid is the best. And they certainly want to support their kid in pursuit of that ambition. And so what we’re able to do is allow kids who are really good at playing video games, and more importantly, really passionate about video games and community, and we allow them to be recognized for their talents, to be validated for your talents. I think parents are really excited about that aspect.” Parnell said that a woman from Massachusetts said her son had never played any sport at his high school. But League of Legends changed his life. He met friends, improved his grades, and had a lot of benefits from having such a success in his life. His mother was so impressed that she flew herself and the boy out to Los Angeles to meet with PlayVS. When Parnell came out to meet her, she broke down in tears.
“We’ve had thousands of those stories sort of come in from students. But what we didn’t expect was how it touched the parent,” Parnell said. “She was overjoyed at the impact we had on her child and flew out on her own dime just to meet our team. It was one of the most surreal experiences that I personally ever had while building this company. Because during that moment, she really went into intimate detail around the impact that we’ve had on her kid’s life, and we’re super grateful for those stories and want to have more of that impact.” I mentioned that apps such as TeamSnap have done wonders for parents and kids in organizing sports such as soccer. As soccer parents, we live by it. Parnell said that PlayVS goes a step further, as it shows the matches, the practices, any other events, the standings, the rankings for players, profiles, match histories, and individual match statistics. Many coaches have never run teams before, so PlayVS sets up content workshops to help with that. These things are all part of making the community feel comfortable with esports.
Esports hype Above: Esports can fill stadiums for big events — but is that enough? There is a lot of hype, and some leagues won’t work out, Parnell said.
“But there are a lot of different sports and activities,” he said. “Not every sport needs to be like football. There’s room for baseball, basketball, tennis, lacrosse, and ice hockey. Not everything is going to reach the scale of League of Legends or Fortnite.” “There are 2.4 billion gamers in the world, and I’m pretty sure some large number wants to compete in esports,” he said. “But in North America, only 1,000 players actually experienced esports in its purest form. And those players play across 18 professional leagues. And the average age of those leagues is less than three years old. The space is nascent.”
" |
17,014 | 2,022 | "Esports News | VentureBeat" | "https://venturebeat.com/category/esports" | "Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Esports Joystick raises $8M to turn gamers into owners and content creators Team Liquid and Alienware create esports training facility, the Pro Lab How the metaverse could unlock sustainable revenue models for esports Community Technology is saving the live sports experience Drake Star: Q1 game deals hit $98.7B in value, more than all of 2022 Newzoo: Esports will generate $1.38B in revenue by the end of the year Legendary Play raises $4M for esports-themed mobile games for fans FaZe Clan bringing new original programming and formats to Twitch Wisdom Gaming and Riot Games announce WNS Championship location The Evo 2022 lineup has been announced — where’s Marvel? Sponsored How developers, creators, and brands can tap into the passion of esports fans Bitstamp teams up with Immortals esports team in crypto deal Blockchain.com announces partnership with Cloud9 Azarus raises $4M for League of Legends trivia game overlay Midnite raises $16M for Gen Z-focused esports betting apps Ader, NRG Esports, and Levi’s are teaming up for the NRG Winter Games PlayVS adds Hearthstone to high school esports platform VB Event What’s it going to take to get cryptocurrency widely accepted? VB Event Why play-to-earn games are the key to the marketing and monetization challenge Lessons game companies can learn from Unilever’s brand marketing strategies FaZe Clan and MoonPay offer huge prize in new “FaZe1” challenge VB Event Are you ready and registered for this year’s GamesBeat & Facebook Gaming Summit? United Esports and Dfinity Foundation create $10M blockchain game dev contest Hating and never creating | GB Decides 231 How many times is E3 gonna die? | GB Decides 230 Twitter reports gaming delivered 2.4B tweets in 2021 Xfinity and Mission Control launch City Series for gamers in northeast US VB Event GamesBeat and Facebook Gaming Summit: Keynote speakers announced The DeanBeat: Predictions for gaming in 2022 Gen.G promotes Arnold Hur to CEO of esports firm FaZe Clan partners with MoonPay on crypto and NFTs for esports 100 Thieves secures funding to the tune of $60M Complexity Stars blends celebrities and esports athletes to engage gamers Enthusiast Gaming pays $45M for League of Legends fan community U.GG Riot Games will bring League of Legends World Championship to SF Riot Games’ John Needham talks about the future of esports Nintendo partners with Panda Global for Super Smash Bros. 
Melee and Ultimate events Riot Games promotes John Needham to president of esports FaZe Clan cuts a sports betting partnership with DraftKings Wearable tech transforms data collection and analysis for athletes
" |
17,015 | 2,019 | "Lyft was valued at $24.3 billion in its IPO, and raised more than it planned | VentureBeat" | "https://venturebeat.com/2019/03/29/lyft-was-valued-at-24-3-billion-in-its-ipo-and-raised-more-than-it-planned" | "Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Lyft was valued at $24.3 billion in its IPO, and raised more than it planned Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Learn more.
( Reuters ) — Lyft was valued at $24.3 billion in the first initial public offering (IPO) of a ride-hailing startup on Thursday, raising more than it had set off to do as investors overlooked uncertainty over its path to becoming a profitable company.
Lyft’s IPO sets the stage for the stock market debut of larger rival Uber , which Reuters has reported will kick off in April.
Uber has been told by its investment bankers that it could be valued at as much as $120 billion.
The IPO succeeded despite Lyft’s steep losses, criticism of its dual-class share structure, and some concerns over its strategy for autonomous driving, as investors feared missing out on the company’s strong revenue growth.
“In a good market, people look beyond things. They don’t see the problems as much,” said Brian Hamilton, co-founder of data firm Sageworks, speaking before the pricing.
The ride-hailing industry is expected to grow rapidly in the coming years, as young millennials in big cities choose not to buy their own car. Yet the sector is fraught with questions about the future of automated driving, regulatory pushback and legal challenges over drivers’ pay and benefits.
Lyft’s valuation makes it the biggest company to go public since Alibaba Group in 2014. It paves the way for other Silicon Valley companies seeking to float in the stock market this year, including Pinterest, Slack Technologies and Postmates.
Lyft raised $2.34 billion in its IPO. It said it priced 32.5 million shares, slightly more than it was originally offering, at $72, the top of its already elevated $70-$72 per share target range. Lyft started its IPO investor road show earlier this month with a target range of $62-$68 per share.
The stock is set to begin trading on the Nasdaq on Friday under the symbol “LYFT”.
The IPO market had a slow start in 2019 due to volatile markets at the end of last year and the government shutdown in January blocking U.S. regulators from processing new IPO applicants.
With start-ups like Lyft staying private for longer, there is a backlog of demand to allocate more money to stocks which are considered high-growth in order to diversify away from Wall Street’s FAANG trade which is made up of Facebook, Amazon.com, Apple, Netflix and Google parent Alphabet.
Nevertheless, there are concerns among some investors that these IPOs may be coming at the peak of the market, when the benchmark S&P 500 Index has risen more than 200 percent since 2008.
“They’re buying at the top of a bull market that’s lasted for nine years,” said Roberts.
Profitability questions Lyft, which was valued at $15 billion in its final private fundraising round in 2018, kicked off its 10-day IPO roadshow on March 18. The company’s executives made stops in cities such as New York, Baltimore, Kansas and Los Angeles. Reuters reported the IPO was oversubscribed after just two days.
It now holds nearly 40 percent of the U.S. ride-sharing market, according to its regulatory filing.
Lyft’s revenue was $2.16 billion for 2018, double the previous year’s and far higher than $343 million in 2016. It posted a loss of $911 million in 2018 versus $688 million in 2017.
Lyft was launched in 2012 and is led by its founders, Logan Green and John Zimmer. The company has not laid out a timeline for when it will turn a profit, but stock investors have shown patience in the past if they feel the growth will pay off, with companies like Amazon staying in the red until years after its IPO.
Lyft is smaller than rival Uber and so far has focused on the U.S. and Canadian markets. Uber, a global logistics and transportation company most recently valued at $76 billion in the private market, is seeking a valuation as high as $120 billion, although some analysts have pegged its value closer to $100 billion based on selected financial figures it has disclosed.
Unlike Uber, which has developed its own self-driving division, Lyft has chosen to strike partnerships to expand in the sector, including with car parts suppliers Magna International Inc and Aptiv Plc. General Motors Co is an investor in Lyft.
( Reporting by Carl O’Donnell and Joshua Franklin in New York; editing by Grant McCool and Lisa Shumaker )
" |
17,016 | 2,019 | "Uber's $82 billion valuation underwhelms in most-anticipated IPO since Facebook | VentureBeat" | "https://venturebeat.com/2019/05/10/ubers-82-billion-valuation-underwhelms-in-most-anticipated-ipo-since-facebook" | "Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Uber’s $82 billion valuation underwhelms in most-anticipated IPO since Facebook Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Learn more.
( Reuters ) — Uber priced its initial public offering on Thursday at the low end of its targeted range for a valuation of $82.4 billion, hoping its conservative approach will spare it the trading plunge suffered by rival Lyft.
It is an underwhelming result for the most anticipated IPO since Facebook’s market debut seven years ago.
Uber raised $8.1 billion, pricing its IPO at $45 per share, close to the bottom of the targeted $44-$50 range.
However, the IPO still represents a watershed moment for Uber, which has grown into the world’s largest ride-hailing company since its start 10 years ago.
The year’s biggest IPO comes against a backdrop of turbulent financial markets, fueled by the trade dispute between the United States and China, as well as the plunging share price of Lyft, which is down 23 percent from its IPO price in late March.
Uber’s valuation in the IPO is almost a third less than its investment bankers predicted last year but still above its most recent valuation of $76 billion in the private fundraising market.
The IPO was oversubscribed, but Uber settled for a lower price to avoid a repeat of Lyft’s IPO in late March, which priced strongly then plunged in trade. Uber also wanted to accommodate big mutual funds, which unlike hedge funds put in orders for a lower price.
Like Lyft, Uber will face questions going forward over how and when it expects to become profitable after losing $3 billion from operations in 2018.
“Ultimately, the success of Lyft’s and Uber’s IPO offerings will be judged based on post-IPO performance and how these companies can sustain their growth, while moving toward profitability and lowering their cash burn,” said Alex Castelli, managing partner at advisory firm CohnReznick.
Despite Uber moderating its IPO expectations, some still consider the stock overpriced.
“Uber is basically Lyft 2.0. Good model, growing sales. But, yet again, here comes California math once more. It is still losing a ton of money,” said Brian Hamilton, a tech entrepreneur and founder of data firm Sageworks. “If you buy, you are buying a bull market, not a company,” he added.
In meetings with potential investors the past two weeks, Uber’s chief executive Dara Khosrowshahi argued that Uber’s future was not as a ride-hailing company, but as a wide technology platform shaping logistics and transportation.
The IPO pricing was a balancing act for Uber’s team of underwriting banks, led by Morgan Stanley, Goldman Sachs & Co and Bank of America Merrill Lynch, to negotiate a good price while leaving some upside to ensure the stock trades up on its market debut.
Uber is due to begin trading on the New York Stock Exchange on Friday under the symbol “UBER.” ( Reporting by Joshua Franklin in New York; Editing by Leslie Adler, Matthew Lewis and Lisa Shumaker )
" |
17,017 | 2,019 | "WeWork IPO filing hypes transformative workplace potential to rationalize massive losses | VentureBeat" | "https://venturebeat.com/2019/08/14/wework-ipo-filing-hypes-transformative-workplace-potential-to-rationalize-massive-losses" | "Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages WeWork IPO filing hypes transformative workplace potential to rationalize massive losses Share on Facebook Share on X Share on LinkedIn July 11, 2017. WeWork Offices Herzliya, Israel.
Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Now that coworking startup WeWork has made its IPO filing public, the company faces the burden of convincing investors to look past its gaping losses to see its revolutionary workplace potential.
“Our space-as-a-service offering significantly reduces the complexity of leasing real estate to a simplified membership model, while delivering a premium experience to our members at a lower price relative to traditional alternatives and moving fixed lease costs to variable costs for our members,” the company says in its S-1.
“Our membership model is transforming the way individuals and organizations consume commercial real estate.” The company had secretly filed its prospectus with the U.S. Securities and Exchange Commission months ago. But today it made the filing public in anticipation of what is expected to be a public offering in September.
The company did not reveal how many shares it plans to sell, though it included a placeholder figure saying it would raise $1 billion in the offering. The actual number is expected to be higher, and WeWork indicated it plans to separately raise $6 billion in debt financing.
But first it will have to overcome the skepticism it likely faces after reporting a $904 million net loss for the first six months of 2019, with revenue of $1.5 billion.
Given a growing backlash over the massive losses posted by recent IPO giants like Uber, WeWork is making its public debut at a tricky time. But in the filing the company insists its losses are nothing to worry about because they represent investments in infrastructure that will pay big returns over the long term.
As long as it properly manages its debt, WeWork says, its business has a robust future, in part because its real estate management technology, its membership model, and its ability to create community enable it to lease space to members at costs up to 66% lower than if they leased their own office space directly. That technology has included a rash of acquisitions to extend the services WeWork offers clients.
Whether investors swallow this story or not, the company has clearly had a huge impact on the conversation about workspaces. Founded in 2010, WeWork has raised $8 billion in venture capital, led by SoftBank Ventures and Benchmark. The filing says WeWork now has 528 locations in 111 cities across 29 countries with 527,000 memberships.
The filing also speaks of cofounder and CEO Adam Neumann in very reverent tones: “From the day he cofounded WeWork, Adam has set the company’s vision, strategic direction, and execution priorities. Adam is a unique leader who has proven he can simultaneously wear the hats of visionary, operator, and innovator, while thriving as a community and culture creator.” But the filing also sheds light on the complex nature of Neumann’s relationship with the company, noting that he:
- controls a majority of the company’s voting power, principally as a result of his beneficial ownership of the company’s high-vote stock, which gives him 20 votes per share.
- had personally purchased several buildings in WeWork’s early days that he then leased back to the company at a time when landlords were skeptical about the business model. (The company says it is working on a plan to sell those buildings.)
- has a line of credit of up to $500 million with UBS AG, Stamford Branch; JPMorgan Chase Bank; and Credit Suisse AG, New York Branch, of which approximately $380 million principal amount was outstanding as of July 31, 2019. Neumann apparently pledged an unknown amount of his stock as collateral.
- has also received a total of $97.5 million in loans and credit from JPMorgan Chase Bank, a figure that includes mortgages secured by personal property.
This intertwining of Neumann’s and WeWork’s finances will likely raise additional eyebrows among investors.
" |
17,018 | 2,019 | "WeWork is the latest sign tech IPO valuations are nonsense | VentureBeat" | "https://venturebeat.com/2019/09/06/wework-is-the-latest-sign-tech-ipo-valuations-are-nonsense" | "Guest WeWork is the latest sign tech IPO valuations are nonsense Opening bell at the NYSE.
With WeWork getting ready to start its pre-IPO road show, I’ve been thinking a lot about initial public offerings. The company, which provides shared workspace, has been valued at up to $47 billion despite the fact that it lost $1.9 billion last year. To last, every system needs order. Yet, in today’s IPO market, there is no consistent, defensible basis for valuing companies.
Any asset is worth what one person will sell it for and what another person will pay for it. This is called a market. If there are 10 people in a community who eat apples, the price of those apples is just a function of what those 10 people will pay on average, whether it’s $100 per apple or 10 cents. It is not up to me or anyone to assign a relative value to an apple.
However, there are important differences between apples and companies. There are imperfect but reasonable valuation metrics that give one confidence in valuing a company. The premise is that investments in companies have a goal: to generate a return.
Let’s say the cash flow a company generates is the most reasonable proxy for its value and that a given company generates $100 per year in cash profit. Let’s also say an investor in that company wants a 10% return. In this example, the investor might be willing to pay very roughly $1,000 for the company ($100 divided by 10%). If the company has 1,000 shares outstanding, the value of each share is $1 ($1,000 valuation for the company divided by the number of company shares). This is a crude valuation barometer, but it makes sense.
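To make that arithmetic concrete, here is a minimal Python sketch of the same back-of-the-envelope calculation, using the illustrative figures above. The helper names and structure are mine; this is a crude barometer rather than a real valuation model:

```python
def value_from_cash_flow(annual_cash_flow, required_return):
    """Capitalize a steady annual cash flow at the investor's required rate of return."""
    return annual_cash_flow / required_return

def value_per_share(company_value, shares_outstanding):
    """Spread the company value evenly across all outstanding shares."""
    return company_value / shares_outstanding

company_value = value_from_cash_flow(annual_cash_flow=100, required_return=0.10)  # $1,000
share_value = value_per_share(company_value, shares_outstanding=1_000)            # $1.00 per share
print(company_value, share_value)
```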
Now, if the above valuation methodology is not the one to use, what other one would be reasonable? The further we move away from cash flow analysis, the more we move towards arbitrary methodologies. But, given a market where almost every company going public is losing money, let’s go to a secondary, less valuable metric: measuring the ratio between the sales of a company and its valuation. Let’s call this the valuation to sales ratio. (I am not thrilled with this one because sales don’t necessarily tie to cash flow.) If a company is valued at $1 billion and its sales are $100 million, its valuation to sales ratio is 10 — for every $1 of sales the company generates, the market is crediting it with $10 of value.
Let’s now talk about some companies that have gone public recently. The first company we’ll look at is Beyond Meat. Since the company does not have profits, we have to move to valuing it by looking at sales. The year prior to going public, its sales were $88 million, and its net losses were $30 million. It went public at a valuation of $1.47 billion. Its valuation to sales ratio was therefore 17 (valuation/revenue). Today, since its IPO, its valuation to sales ratio is 107.
WeWork’s most recent valuation in the private markets, $47 billion, is roughly 26 times revenue, even though it is losing more money than it is making in annual revenue. In the good old days of five years ago, even if you were losing money, it was nice if your losses at the bottom were not greater than your total revenue on the top. News broke yesterday that the company is considering significantly dropping its IPO valuation to around $20 billion, which would make its valuation to sales ratio 11.
Another company to go public this year is Slack. In 2018, Slack’s net losses were $140 million and it went public at a valuation of $16 billion. Sales were $401 million. Its valuation to sales ratio was therefore around 40.
What does all this mean? As a back-check on cash flow and valuation to sales ratios, let’s turn to historical markers. When Google went public, not only was it profitable, but its valuation to sales ratio was 24. When Facebook went public, it was also profitable, and its sales multiple was 28. So, the valuation to sales ratios were high, but both companies were actually making money, not a small footnote. I realize it seems like ancient history, but when Microsoft went public in 1986, it was profitable, and its sales multiple was 6. So Slack started trading at roughly 6 times the relative value of Microsoft.
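For readers who want to run the same back-check, the short sketch below recomputes the valuation to sales ratios cited in this piece. The Beyond Meat and Slack figures come straight from the text; the WeWork revenue figure is backed out of the "roughly 26 times revenue" claim, so treat it as approximate:

```python
def valuation_to_sales(valuation_millions, sales_millions):
    """Dollars of claimed value per dollar of annual revenue."""
    return valuation_millions / sales_millions

# Figures cited in this piece, in millions of dollars.
snapshots = {
    "Beyond Meat (at IPO)": (1_470, 88),        # ~17x
    "Slack (at IPO)": (16_000, 401),            # ~40x
    "WeWork (private, $47B)": (47_000, 1_800),  # revenue backed out of the "roughly 26x" figure
}

for name, (valuation, sales) in snapshots.items():
    print(f"{name}: {valuation_to_sales(valuation, sales):.0f}x sales")
```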
I’ve never heard a convincing argument that puts the value of a company beyond future expected cash flows. It’s not just that the companies are losing money and trading at very high sales multiples. It’s that there is no accepted, reliable, and consistent way to value tech IPOs. I did recently hear a counterargument from Bill Gurley of Benchmark Capital on a podcast. Smart dude. He basically made an indirect argument for high prices by saying these companies are vying to be the leader in the market, the value of which is immensely high and possibly more than we can currently calculate. My argument to this is that it will hold about as long as a bull market.
There are two possibilities. The first is that I’m wrong or am missing some new way to value technology companies. The second possibility is that I’m right, and that the technology companies going public today are vastly overpriced. The fact that WeWork is now talking about slashing its valuation in half when it IPOs indicates I’m right. Although half of $47 billion is probably still too high.
[VentureBeat regularly publishes guest posts from experts who can provide unique and useful perspectives to our readers on news, trends, emerging technologies, and other areas of interest related to tech innovation.] Brian Hamilton is founder of HamiltonIPO.com and the Brian Hamilton Foundation.
You can follow him @brianhamiltonnc.
" |
17,019 | 2,019 | "WeWork CEO Adam Neumann steps down ahead of IPO | VentureBeat" | "https://venturebeat.com/2019/09/24/wework-ceo-adam-neumann-steps-down-ahead-of-ipo" | "WeWork CEO Adam Neumann steps down ahead of IPO Adam Neumann speaks onstage on January 9, 2019 in Los Angeles, California.
WeWork cofounder Adam Neumann agreed today to resign as CEO and give up majority voting control, after SoftBank Group and other shareholders turned on him over a plunge in the U.S. office-sharing startup’s estimated valuation.
The decision came after WeWork parent We Company postponed its initial public offering last week following pushback from prospective investors, not just over its widening losses, but also over Neumann’s unusually firm grip on the company.
This was a blow for SoftBank, which was hoping for We Company’s IPO to bolster its fortunes as it seeks to woo investors for its second $108 billion Vision Fund.
SoftBank invested in We Company at a $47 billion valuation in January. But investor skepticism led to the startup considering a potential IPO valuation earlier this month of as low as $10 billion, Reuters reported.
We Company had vowed to press ahead with an IPO by the end of the year. But there was little sign that IPO investor sentiment would change, threatening the value of the stakes, not just of outside investors in the company, but of Neumann as well.
What was the venture capital world’s biggest upset then morphed into one of corporate America’s most high-profile boardroom dramas. SoftBank managed to muster enough opposition to Neumann in a meeting of We Company’s seven-member board on Tuesday to convince him to step down. Reuters had reported on Monday that Neumann had engaged in talks about changes to his role.
“In recent weeks, the scrutiny directed towards me has become a significant distraction, and I have decided that it is in the best interest of the company to step down as chief executive,” Neumann said in a statement.
Artie Minson, currently chief financial officer of WeWork parent We Company, and Sebastian Gunningham, a vice chairman for the New York-based startup, will become co-chief executives, the company said. Neumann will stay on the board as non-executive chairman, the company added.
Minson will oversee We Company’s finance, legal, human resources, real estate, and public communications, while Gunningham will take responsibility for product, design, development, sales, marketing, technology, and regional teams.
Neumann also agreed to reduce the power of his voting shares, losing majority voting control, according to the sources. Each of his shares will now have the same voting rights as three We Company common shares, not the 10 common shares they had previously, the sources said.
Neumann’s shares used to have the same voting power as 20 We Company common shares, before he agreed to reduce his grip slightly earlier this month in an unsuccessful attempt to make the IPO more attractive to investors.
We Company said on Tuesday it was now evaluating the “optimal timing” for an IPO.
Firm grip Neumann, whose net worth is pegged by Forbes at $2.2 billion, developed a cult following among many We Company employees, vowing to “elevate the world’s consciousness” as he sought to establish WeWork as a brand that transcended office sharing.
While his investors were willing to entertain his eccentricities over the decade that he has led WeWork since its founding in 2010, his free-wheeling ways and party lifestyle came into focus once he failed to get the company’s IPO underway.
During the company’s attempts to woo IPO investors this month, Neumann was criticized by corporate governance experts for arrangements that went beyond the typical practice of having majority voting control through special categories of shares.
These included giving his estate a major say in his replacement as CEO, and tying the voting power of shares to how much he donates to charitable causes.
Neumann also entered several transactions with We Company over the years, making the company a tenant in some of his properties and charging it rent. He has also secured a $500 million credit line from banks using company stock as collateral.
Late concessions Following criticism by potential investors, Neumann agreed to some concessions without relinquishing majority control. He agreed to give We Company any profit he receives from real estate deals he has reached with the New York-based startup.
These changes did little to address concerns about the business model for We Company, which rents out workspace to clients under short-term contracts, even though it pays rent under long-term leases. This mix of long-term liabilities and short-term revenue raised questions among investors about how the company would weather an economic downturn.
Neumann, 40, is not the first founder of a major startup to be forced to step down in the past few years. Uber cofounder Travis Kalanick resigned as CEO of the ride-hailing startup in 2017 after facing a rebellion from his board over a string of scandals, including allegations of enabling a chauvinistic and toxic work culture.
Uber replaced Kalanick with an outsider, former Expedia Group CEO Dara Khosrowshahi, and completed its IPO in May.
" |
17,020 | 2,017 | "Is this the end of venture capital as we know it? | VentureBeat" | "https://venturebeat.com/2017/04/22/is-this-the-end-of-venture-capital-as-we-know-it" | "Guest Is this the end of venture capital as we know it?
We are in a perverse moment in the global venture capital industry: VCs are fast coming to resemble private hedge funds, and the more money they are able to raise, the worse off startups are becoming.
Capital is flowing into funds of all types, yet the rate of investment is shrinking rapidly. This could mark the decline of true venture capital by many funds as they are forced to evolve into private hedge funds or momentum investors, investing far larger amounts in much later stage pre-IPO companies, and drifting further away from taking higher risk, long-term investment in innovation. Perversely, this risk is greatest for the most valuable, value-adding investors with the best track records, since they can raise the most money.
Activity is down U.S. venture capital activity was essentially flat in Q1 after a sharp drop over the past two years, PwC and CB Insights have reported. While value is up 15 percent, that is mainly due to larger rounds for more established companies, not true venture capital. However, this does nothing to change the macro picture, which is trending down. Record amounts of capital flowing to VC funds At the same time, the last five quarters have been nearly the best ever for raising venture funds. Almost $50 billion in new venture funds has been raised since the start of 2016 ($40 billion in 2016, another $9 billion in Q1 2017), and the tide shows no sign of turning. This is by far the strongest fundraising period since the heady days of the 2000 tech bubble.
So the obvious question is, where will VCs invest their hard-won war chests, especially since the overall trend in number of deals is down? VCs pushed to become private hedge funds My view is that the VC industry is evolving into a private hedge fund industry, raising larger funds and investing larger amounts in late stage companies that are well past the innovation stage. While very few of the best funds have the discipline to keep their fund sizes artificially down (at different times, both Sequoia and Kleiner Perkins Caufield & Byers took less money than investors offered), the vast majority of funds raise as much as they can.
One example of this trend is Index Ventures, the U.S. and European reference VC that has been a key investor in several high-value companies, from Dropbox to Skype to Just Eat. Its fund sizes have trended steadily upward. Realistically, a “venture” fund nearing $600 million in size is already beyond the limit of being able to make real venture stage investments. Why? Very few funds (Sequoia, Andreessen Horowitz, Kleiner, and a few others) get to invest early in the highest value companies. Among CNBC’s 2015 Disruptor 50 list, Kleiner alone was an investor in 15. On CB Insights’ list of unicorns, only three venture firms feature in the most active group (the rest are late stage investors). Any venture firm NOT in the top group doesn’t have the opportunity to invest early enough in high-value companies to deploy a large $500-700 million venture fund successfully.
Companies generally need less money to scale now compared to a decade ago: AWS, cloud software, app stores, and development in low-cost markets have cut the cost of scaling a business by 50 percent or more vs. 10 years ago. For example, Facebook only had 3,200 staff when it IPO’d for $100 billion. Less capital needed equals less venture money required.
Finally, there is now a ready market for large late-stage rounds. Many companies are staying private much longer and raising “mega rounds” before IPOing (witness Airbnb’s $1 billion round in Q1 2017). An unfortunate side effect of these mega rounds is that, as funds become driven by larger momentum investments, less interest and attention is paid to “ordinary” venture investments going to the potential unicorns of the future.
This trend won’t end funding for startups overnight. If only a quarter or a third of a fund is effectively a private hedge fund, there will be room to accommodate a large number of more capital-efficient startups. But if the majority of a fund goes to late stage “momentum rounds” that perform reasonably well, VCs will care a lot less about venture in future.
The bottom line: The more money VCs are able to raise, the worse it is for startups.
Victor Basta is founder of Magister Advisors , a specialist bank focused on M&A exits and larger financing rounds.
" |
17,021 | 2,018 | "Early-stage health tech investment is getting squeezed | VentureBeat" | "https://venturebeat.com/2018/06/25/early-stage-health-tech-investment-is-getting-squeezed" | "Guest Early-stage health tech investment is getting squeezed
At the end of Q3-2017, Crunchbase famously reported that projected near-term global venture “deal and dollar volume” would revisit “post-Dot Com highs.” The numbers matched up. Q3-2017 was the fourth consecutive quarter of growth for both deal and dollar volume, surpassing previous highs reached in 2016.
Through Q1-2018, public markets showed some volatility, but venture deals continued to close at a historically accelerated pace. Global seed and early stage deal volume also appeared to achieve another banner quarter in Q1-2018 — presenting a picture of strong demand for seed and early-stage deals globally.
The U.S. early stage venture market, however, isn’t meeting that global metric. In my team’s recent analysis of Pitchbook data on U.S. early-stage ventures — deals valued at $10 million or less — we saw a sharp contradiction between the state of early stage funding in the U.S. and the trend Crunchbase reported.
By our analysis, early-stage companies in the U.S. are finding it harder to close financing. In fact, Q1 2018 was the lowest quarterly deal count since Q4 2013, with both deal and dollar volume contracting. Within one year, deal volumes between Q1 of last year and Q1 of 2018 dropped a whopping 31 percent.
And, as we detail in our report , the health tech sector is looking particularly rough. From Q1 of 2017 to Q1 of this year, deal volumes in early-stage health technology dropped by 39 percent — a drop we haven’t seen in the last eight years.
U.S. early-stage health tech investing is typically seasonal. However, if you look specifically at Q3-2017 and Q1-2018, the once reliable bounce-backs since 2013 did not occur. This is the first time since at least 2010 that we have seen four consecutive quarters of contraction in early stage investing in the sector.
Why the funding squeeze? Of course, there are many potential causes that may be driving a flight of early-stage capital away from the sector. Increased uncertainty around policy may be impacting risk-sensitive early-stage investors. The emergence of bright and shiny sectors, such as biopharma, may not present capital efficient opportunities for traditional seed or angel investment. Or, given the run-up to the sector’s peak in 2016, smaller investors might just need a breather.
But, whatever the reason, it seems clear that early-stage health technology investment is getting squeezed — that is, health tech investors seem to be favoring larger and later rounds, and seed investors seem to be sitting on the sidelines.
The bigger picture Reviewing the recent contraction in the context of fundamentals fueling historical growth of health tech spending provides insight on this trend’s impact.
First, the demand for health care continues to increase. Driven by an aging population and obesity, U.S. spending on healthcare will continue to increase at a predicable rate for the foreseeable future. Healthcare spending will approach $3.6 trillion or approximately 18 percent of the U.S. economy in 2018.
Exacerbating the impact of this demand on health systems, the margins with which they need to operate continue to contract. The way health systems are compensated for the care they deliver is evolving, forcing health systems to demonstrate greater efficiency — to do more with less — or to go out of business.
Given the continued rising demand for care, it’s not likely that health service providers will be able to maintain a high standard of care with less or less-capable human capital. Rather, health service providers will turn to technology — to innovation — to increase efficiency and margins, while also improving care.
Additionally, in spite of the lack of early-stage funding available for health technology startups, investment in U.S. research and development in health technology continues to remain strong. For example, NIH spending in research and development in 2018 is expected to exceed $200 billion — consistent with an average CAGR of 8.4 percent since 2014.
The migration of venture funding away from early stage is of course leaving a gap in the precious capital required to drive the innovation that this industry needs.
Finally, from the last four quarters of deal volume and value, we know that demand for what we would consider traditional early-stage deals has dropped. Economics 101 tells us that, with supply rising and demand falling, the outcome is predictable.
The expectation is that prices for what we would have historically considered early-stage deals will drop to increasingly attractive levels over the next several quarters. By taking advantage of this relative lack of early-stage capital in the near term, early-stage health tech investors will be able to generate outsized returns.
Rick Gordon is Director at Inova Personalized Health Accelerator.
" |
17,022 | 2,019 | "How to boost the health of your sales org by hyping up your sales team (VB Live) | VentureBeat" | "https://venturebeat.com/2019/09/12/how-to-boost-the-health-of-your-sales-org-by-hyping-up-your-sales-team-vb-live" | "VB Live How to boost the health of your sales org by hyping up your sales team (VB Live) Presented by Dell for Entrepreneurs The quality of your sales force has a direct, accumulating impact on your business as you grow. Check out this VB Live event for key strategies and best practices small- and mid-sized companies need to build a world-class sales organization from the start.
Access on demand for free now.
What makes a world class sales organization? Of course it starts with your sales team, says Burt Powers, senior director for North America small business sales at Dell.
“There’s a lot of choices out there, especially in the small business market,” Powers says. “You want customers to choose you because of that actively fantastic experience you provide. It creates an ongoing relationship to help them grow and help your organization grow as well.” So your company’s ability to grow directly depends on the strength, quality, and abilities of each of your teams across the organization. And success there relies on how you build your sales culture from the bottom up, right from your first hire.
Finding the right candidates The myopic mistake a lot of sales managers make out of the gate is looking for direct experience when they’re hunting down a team.
“We’re not necessarily pulling from folks that have it on their resume already,” Powers says. “What we do is identify those characteristics that translate the best.” Some traits that stand out are the ability to carry a conversation comfortably, engaging on both sides. Are you competitive, and do you like the idea of having goals and targets? Do you see those as an opportunity to achieve and overachieve against a goal? Are you constantly looking to learn? Do you position yourself as a product or subject matter expert? And maybe most important, Powers adds, is whether candidates think about the customer first, and the quality of the experience they’re providing. That means it’s important to find reps that are honest with their customers, because the foundational block for all interactions is trust, and acting in the best interest of the people you work with.
“Product knowledge is something we can teach,” he says. “Relationship building is what makes a good sales rep great.” Transformative training strategies From the start, your organization has to go all-in on mentoring and motivating reps. Training is never a one-and-done situation, Powers says.
“For us, this is a constant review,” he explains. “Do we have it right? Where do we need to evolve and update?” His Dell sales reps go through three to four weeks of intensive classroom training, which includes opportunities to practice and to mentor, go out onto the sales floor, sit with agents that do the job daily, and observe it in action and ask questions.
After their initial training period, reps still get recurring training every week over a variety of topics — refreshers, new content, audits of sales skills and product knowledge, new tools, and strategies. These aren’t all PowerPoints and quizzes — they involve hands-on demonstrations from external vendors, team Q&A sessions, and more.
The key to motivation “Compensation is at the heart of sales makers,” Powers says. “I’m a sales maker even though I’m leading the organization, so I have a vested interest in this topic.” When it comes to keeping their sales reps motivated and excited, getting the compensation piece right is at the top of the list, he explains. They stay on top of their agents’ concerns and questions, and rely on the stream of feedback they elicit from roundtables and surveys.
Powers prefers the quota system. When framing up a decision around setting up a quota, you have three key deciding factors: the employee, the customer, and the company.
The employee ideally sees it as attainable. The company sees it as responsible and proper given the expectations. And it’s sending reps after quotas in a way that doesn’t end up diminishing the customer experience. In other words, setting it so it’s not a target that’s encouraging ugly behavior, such as lying to push sales through faster.
When building a quota or working with a finance team to identify it, the first thing to do is look at your organization’s sales history for a baseline, and then the most current six- to 13-week period in order to identify how you’re trending. What’s the rate of improvement or challenge? What’s the most recent performance that we should replicate? Note if there are any significant events to account for, which won’t repeat themselves. For example, in retail, big seasonal buying is only a Q4 phenomenon.
Add market conditions and ongoing initiatives, programs, or investments that are expected to produce new results as the final important variables, have operations crunch the numbers for you, and set your quota and quota cycle from there.
Brand-new agents should have ramp quotas that build steadily from month one and rest on specific, history-based expectations, so you can tell whether a new agent is on track and adjust accordingly if they’re wobbling.
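As a rough illustration of that workflow, the sketch below turns a sales history into a weekly baseline quota and a ramp schedule for a brand-new agent. Every number, weighting, and ramp percentage here is a hypothetical placeholder, not Dell's actual formula; it simply mirrors the steps described above (historical baseline, recent trend, seasonal adjustment, ramp):

```python
def baseline_quota(trailing_year_sales, recent_weeks_sales, recent_weeks=13, seasonal_adjustment=0.0):
    """Blend a full-year weekly average with the most recent trend, then strip out one-off seasonal lift."""
    yearly_weekly_avg = trailing_year_sales / 52
    recent_weekly_avg = recent_weeks_sales / recent_weeks
    blended = 0.5 * yearly_weekly_avg + 0.5 * recent_weekly_avg  # arbitrary 50/50 weighting
    return blended * (1 - seasonal_adjustment)

def ramp_quotas(full_quota, ramp=(0.25, 0.50, 0.75, 1.00)):
    """Month-by-month targets for a new agent, as fractions of the full quota."""
    return [round(full_quota * pct, 2) for pct in ramp]

weekly_quota = baseline_quota(trailing_year_sales=2_600_000,
                              recent_weeks_sales=780_000,
                              seasonal_adjustment=0.05)
print(weekly_quota)              # blended weekly target after the seasonal haircut
print(ramp_quotas(weekly_quota)) # ramp schedule for months one through four
```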
Setting up exciting incentives Incentives and rewards are a huge component of successful motivation, but it’s difficult to calibrate the size and type and timeline of rewards that work best for your team.
“When the value of the reward gets higher and higher, you can sometimes unfortunately create environments where some people feel like they have to do something dishonest to achieve it,” Powers explains. “Our coaching and manager engagement with our sales agents focuses on teaching them the right way to do the job, in the best interests of their customers and their targets, and still hit the rewards they’re looking for.” Commissions based on sales quotas have been the most successful arrangement across the board, because they keep reps’ attention on meeting quotas, which is the key to growing and scaling the group.
But contests with individual monetary rewards and product giveaways add a spirit of fun and competition, and team contests help encourage and reinforce bonds between salespeople when they’re motivated to hit a common goal. Burst contests with small temporary incentives to sell a new product or service can spark excitement.
But always stay sensitive to the size of the reward, so reps aren’t tempted to put their employment at risk to try to win it.
Your team, and how you reward and recognize them, makes all the difference to your company’s success. To learn more about the tools and strategies that keep reps motivated at every level, how to encourage company-wide investment in your sales organization’s success and growth, and how to keep identifying what’s working and what’s not, don’t miss this VB Live event! Access on demand for free right here.
Watch this webinar and learn:
- How to effectively hire and train a top sales force
- What the smartest sales quotas look like, and how to set them
- How to create and implement business management systems
- What motivates sales teams, and how to take advantage
- What an effective sales culture looks like and how to build one
Speakers:
- Burt Powers, Senior Director of Dell Small Business Sales
- Stewart Rogers, Analyst At Large, VentureBeat
" |
17,023 | 2,018 | "How SpaceX plans to land on Mars in 2018 using the most powerful rocket in the world | VentureBeat" | "https://venturebeat.com/2016/04/30/how-spacex-plans-to-land-on-mars-in-2018-using-the-most-powerful-rocket-in-the-world" | "How SpaceX plans to land on Mars in 2018 using the most powerful rocket in the world
On Wednesday, Elon Musk’s private company SpaceX announced on Twitter that it plans to send a spacecraft to Mars as soon as 2018.
The mission will involve sending a spacecraft called the Red Dragon to Mars to retrieve samples collected by NASA’s Mars rover and then return them to Earth.
Here’s SpaceX’s announcement: Planning to send Dragon to Mars as soon as 2018. Red Dragons will inform overall Mars architecture, details to come pic.twitter.com/u4nbVUNCpA — SpaceX (@SpaceX) April 27, 2016 SpaceX has had big plans to usher in a new era of reusable rockets that could send the first humans to Mars and return them home for a while. In 2011, SpaceX released a video showing how they were going to re-land a rocket booster after launching it to space — something that had never been done before.
On December 21, 2015, SpaceX successfully landed its first reusable rocket , a Falcon 9, on a launch pad. They followed that up on April 8, 2016 by successfully landing another Falcon 9 on a barge floating in the ocean. Musk has announced plans to relaunch this Falcon 9 as early as May.
The Red Dragon will be launched into space with the Falcon Heavy rocket, which is kind of like the Falcon 9 on steroids. SpaceX has announced plans to launch the rocket into space as soon as November 2016.
SpaceX has boasted that the Falcon Heavy is the world’s most powerful rocket, capable of carrying twice the payload of the Space Shuttle. Only the Saturn V, the rocket used to launch astronauts to the moon in the Apollo program, was capable of delivering more payload to orbit.
The Falcon Heavy is a multistage rocket, which means it contains separate rockets, or stages, stacked on top of each other. Each stage contains its own engine and propellant. When a stage runs out of propellant, it is ejected from the spacecraft to decrease the remaining mass of the rocket.
Musk confirmed on Twitter on Friday, April 29 that SpaceX will be attempting to land all three booster stages during the Falcon Heavy launch: @elonmusk For 1st launch of Falcon Hvy will there be effort to simultaneously land all 3 booster stages? #FalconHeavy — Danny S. Parker (@dannysparker) April 30, 2016 So what’s cooler than landing a rocket on Earth? Landing on Mars, of course.
Judging from the illustrations on their Flickr account, SpaceX plans to land on Mars using a simple approach that’s never been tried before.
This is SpaceX’s Dragon spacecraft, which is not designed to carry humans, sitting on the Red Planet: [Image: SpaceX Photos] This unmanned Dragon capsule has been making trips to the International Space Station since 2010. But to get to Mars, which is 560,000 times farther, the Dragon will need to ride a more powerful rocket than the Falcon 9, which it takes to the ISS.
That rocket is SpaceX’s Falcon Heavy, illustrated below, which is scheduled to launch out of Kennedy Space Center for the first time next year.
[Image: SpaceX on Flickr] However, this monster rocket will only take Dragon so far. Getting to Mars is easy compared to landing on it because the Martian atmosphere is a tricky beast to control.
The Martian atmosphere is roughly 100 times thinner than Earth’s, so simple parachutes won’t slow a vehicle down enough to land safely.
But that atmosphere is still thick enough to generate a great deal of heat from friction against a spacecraft. Therefore, to land on Mars you have to have a spacecraft with a heat shield that can withstand temperatures of 1,600 degrees Fahrenheit.
Luckily, Dragon’s heat shield can protect it against temperatures of over 3,000 degrees Fahrenheit, so plummeting toward Mars, illustrated below, shouldn’t be a problem heat-wise.
[Image: SpaceX Photos on Flickr] But there’s still the problem of slowing down. Although gravity on Mars is about 1/3 of what it is on Earth, the vehicle is still plummeting toward the ground at over 1,000 miles per hour after entering Mars’s atmosphere. If it were to hit the ground at those speeds, you’d have a disaster.
The way that SpaceX aims to deal with this tricky problem is to use the thrusters on board the Dragon spacecraft to first redirect its momentum from downward to sideways, as illustrated below, thus reducing its speed: [Image: SpaceX Photos on Flickr] And then, as the spacecraft continues to plunge toward the surface, it will fire its thrusters one final time for a soft, vertical touchdown: [Image: SpaceX Photo on Flickr] This sort of landing is unlike anything that anyone has ever tried before, but you have to admit that Dragon looks pretty great on Mars if it ever manages to get there: [Image: SpaceX Photo on Flickr] The last major Mars landing was NASA’s Curiosity rover in 2012. That landing was a huge success but extremely complicated, involving half a dozen steps that, if not completed perfectly, would end in disaster. NASA dubbed the landing process “7 minutes of terror” because that’s how long it took to enter the atmosphere and land.
But the technology isn’t ready for human passengers just yet. Musk tweeted that the Dragon might not be the most comfortable environment for space explorers.
This mission marks an important milestone in the partnership between NASA and SpaceX, bringing them one step closer to achieving their goal of sending humans to Mars by the 2030s.
This post originally appeared on Business Insider.
" |
17,024 | 2,018 | "Watch Elon Musk’s SpaceX launch the Falcon Heavy rocket to throw a Tesla into orbit around Mars | VentureBeat" | "https://venturebeat.com/2018/02/06/how-to-watch-spacex-launch-the-falcon-heavy-that-will-throw-a-tesla-car-into-orbit-around-mars" | "Watch Elon Musk’s SpaceX launch the Falcon Heavy rocket to throw a Tesla into orbit around Mars
Never one to let us be bored for too long, Elon Musk is back today with his latest audacious bet on the future of transportation.
At 1:30 p.m. Eastern ( Update : 3:45 p.m. Eastern), Musk’s SpaceX is scheduled to test launch its Falcon Heavy rocket. It’s the first time for this new generation, which boasts 27 Merlin engines compared to the mere 9 packed into SpaceX’s Falcon 9.
The goal with Falcon Heavy is to develop a rocket that can lift massive amounts of cargo into space. Its capacity of 119,000 pounds is twice as large as the capacity of most rockets that carry payloads.
So what’s it carrying? Naturally, Musk has placed a Tesla inside. Because, why not? The vehicle itself will contain a human dummy dressed in a SpaceX suit. The rocket will hurl the Tesla into space, where it is supposed to eventually drift into orbit around Mars.
You can watch the livestream of the liftoff above and follow SpaceX’s Twitter feed for launch updates. It’s not unusual for there to be minor delays as adjustments are made: The first test flight of Falcon Heavy is targeted for Tuesday, Feb. 6th at 1:30 PM ET from Launch Complex 39A at Kennedy Space Center in Florida. When Falcon Heavy lifts off, it will be the most powerful operational rocket in the world by a factor of two.
https://t.co/jzv975xKB0 pic.twitter.com/yAVGdXJjEs — SpaceX (@SpaceX) February 6, 2018 To get an overall scope of what Musk has in mind, check out the conceptual video above.
" |
17,025 | 2,019 | "Google Cloud Text-to-Speech now has 187 voices and 95 WaveNet voices | VentureBeat" | "https://venturebeat.com/2019/08/27/google-cloud-text-to-speech-now-has-187-voices-and-95-wavenet-voices" | "Google Cloud Text-to-Speech now has 187 voices and 95 WaveNet voices
Back in February, Google announced a series of updates to its Google Cloud Platform (GCP) AI text-to-speech and speech-to-text services that introduced multichannel recognition, device profiles , and additional languages synthesized by an AI system — WaveNet — pioneered by Google parent company Alphabet’s DeepMind. Building on those enhancements, the Mountain View company today expanded the number of new variants and voices in Cloud Text-to-Speech by nearly 70%, boosting the total number of languages and variants covered to 33.
Now, thanks to the addition of 76 new voices and 38 new WaveNet-powered voices, Cloud Text-to-Speech boasts 187 total voices (up from 106 at the beginning of this year) and 95 total WaveNet voices (up from 57 in February and 6 a year and a half ago). Among the newly supported languages and variants are Czech, English (India), Filipino, Finnish, Greek, Hindi, Hungarian, Indonesian, Mandarin Chinese (China), Modern Standard Arabic, Norwegian (Nynorsk), and Vietnamese, all of which have at least one AI-generated voice.
“With these updates, Cloud Text-to-Speech developers can now reach millions more people across numerous countries with their applications — with many more languages to come,” wrote product manager Dan Aharon. “This enables a broad range of use cases, including call center IVR, interacting with IoT devices in cars and the home, and audio-enablement of books and other text-based content.” For the uninitiated, WaveNet mimics things like stress and intonation, referred to in linguistics as prosody, by identifying tonal patterns in speech. It produces much more convincing voice snippets than previous speech generation models — Google says it has already closed the quality gap with human speech by 70% based on mean opinion score — and it’s also more efficient. Running on Google’s tensor processing units (TPUs), custom chips packed with circuits optimized for AI model training, a one-second voice sample takes just 50 milliseconds to create.
Aharon notes that Cloud Text-to-Speech handily leapfrogs rivals like Microsoft’s Azure Speech Services and Amazon Polly by the number of AI voices on offer: 11 of Polly’s 58 voices are generated by an AI model, while only 5 of Azure Speech Services’ voices are AI-synthesized. Moreover, Polly and Azure Speech Services feature only 2 and 4 total languages/variants with AI-powered voices, respectively.
“When customers call into contact centers, use verbal commands with connected devices in cars or in their homes, or listen to audio conversions of text-based media, they increasingly expect a voice that sounds natural and human,” wrote Aharon. “Businesses that offer human-sounding voices offer the best experiences for their customers, and if that experience can also be provided in numerous languages and countries, that advantage becomes global.” Cloud Text-to-Speech is free to use up to the first million characters processed by the API.
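For developers who want to try one of the WaveNet voices, the snippet below is a minimal sketch using the google-cloud-texttospeech Python client. The voice name and output filename are placeholder choices, and it assumes the environment is already authenticated against a Google Cloud project with the Text-to-Speech API enabled:

```python
from google.cloud import texttospeech

# Client picks up credentials from the environment (e.g., GOOGLE_APPLICATION_CREDENTIALS).
client = texttospeech.TextToSpeechClient()

synthesis_input = texttospeech.SynthesisInput(text="Hello from Cloud Text-to-Speech.")

# "en-US-Wavenet-D" is one of the WaveNet voices; any supported language/voice can be swapped in.
voice = texttospeech.VoiceSelectionParams(language_code="en-US", name="en-US-Wavenet-D")
audio_config = texttospeech.AudioConfig(audio_encoding=texttospeech.AudioEncoding.MP3)

response = client.synthesize_speech(
    input=synthesis_input, voice=voice, audio_config=audio_config
)

# The API returns the synthesized audio as bytes; write them to disk.
with open("output.mp3", "wb") as out:
    out.write(response.audio_content)
```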
" |
17,026 | 2,017 | "PayPal partners with China’s Baidu to target Chinese tourists | VentureBeat" | "https://venturebeat.com/2017/07/27/paypal-partners-with-chinas-baidu-to-target-chinese-tourists" | "PayPal partners with China’s Baidu to target Chinese tourists Baidu Silicon Valley AI Lab in Sunnyvale, California.
Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Learn more.
( Reuters ) — Baidu said on Thursday that it has entered a strategic agreement with PayPal to tap overseas merchants as Chinese tech companies ramp up the fight for overseas payment partnerships.
Under the agreement, Baidu’s payment platform, Baidu Wallet, will be accepted by roughly 17 million PayPal merchants globally, the Chinese firm said in a statement on Thursday.
“Partnering with PayPal on technology and product innovation will provide Baidu users with the ultimate cross-border consumer experience,” said Baidu senior vice president Guang Zhu.
The deal comes as China’s top payment firms, Alibaba Group Holding Ltd affiliate Ant Financial and Tencent Holdings Ltd, are making aggressive moves to expand their payment networks overseas.
This year Ant Financial made a $1.2 billion bid for U.S. remittance firm MoneyGram International Inc, a deal that has attracted scrutiny from U.S. lawmakers who say it could threaten national security.
Ant has also purchased stakes in half a dozen payment firms across Asia as part of a plan to expand its finance business outside of China.
Tencent’s WeChat Pay has also inked a series of partnerships with global payment firms and earlier this month said it has applied directly for a Malaysian payments license.
PayPal, which reported better-than-expected results on Thursday, entered into new partnerships with Apple Inc and Bank of America this month.
It has also targeted Southeast Asian markets recently through local partnerships, but earlier attempts to enter the Chinese market directly have been unsuccessful due to tight regulation and strong local players.
PayPal will work with Baidu’s financial services group under the new agreement to target cross-border payments between Chinese consumers and online businesses based outside of China.
Baidu Wallet has roughly 100 million users but still trails behind Tencent's WeChat and Ant Financial's Alipay in terms of users and global reach.
( Reporting by Cate Cadell; Editing by Kim Coghill )
" |
17,027 | 2,018 | "U.S. blocks MoneyGram sale to China's Ant Financial over national security concerns | VentureBeat" | "https://venturebeat.com/2018/01/03/u-s-blocks-moneygram-sale-to-chinas-ant-financial-over-national-security-concerns" | "U.S. blocks MoneyGram sale to China's Ant Financial over national security concerns A Moneygram logo is seen outside a bank in Vienna, Austria, June 28, 2016.
( Reuters ) — A U.S. government panel rejected Ant Financial’s acquisition of U.S. money transfer company MoneyGram International over national security concerns, the companies said on Tuesday, the most high-profile Chinese deal to be torpedoed under the administration of U.S. President Donald Trump.
The $1.2 billion deal’s collapse represents a blow for Jack Ma, the executive chairman of Chinese internet conglomerate Alibaba Group Holding, who owns Ant Financial together with Alibaba executives. He was looking to expand Ant Financial’s footprint amid fierce domestic competition from Chinese rival Tencent Holdings’ WeChat payment platform.
Ma, a Chinese citizen who appears frequently with leaders from the highest echelons of the Communist Party, had promised Trump in a meeting a year ago that he would create 1 million U.S. jobs.
MoneyGram shares were down 8.5 percent at $12.06 in after-market trading.
The companies decided to terminate their deal after the Committee on Foreign Investment in the United States (CFIUS) rejected their proposals to mitigate concerns over the safety of data that can be used to identify U.S. citizens, according to sources familiar with the confidential discussions.
“Despite our best efforts to work cooperatively with the U.S. government, it has now become clear that CFIUS will not approve this merger,” MoneyGram Chief Executive Alex Holmes said in a statement.
A standard CFIUS review lasts up to 75 days, and the companies had gone through the process three times in order to address concerns. Additional security measures and protocols that the companies suggested failed to reassure CFIUS, the sources said.
The U.S. Treasury said it is prohibited by statute from disclosing information filed with CFIUS and declined to comment on the MoneyGram deal.
The U.S. government has toughened its stance on the sale of companies to Chinese entities, at a time when Trump is trying to put pressure on China to help tackle North Korea’s nuclear ambitions and be more accommodative on trade and foreign exchange issues.
The MoneyGram deal is the latest in a string of Chinese acquisitions of U.S. companies that have failed to clear CFIUS. They include China-backed buyout fund Canyon Bridge Capital Partners LLC’s $1.3 billion acquisition of U.S. chip maker Lattice Semiconductor, China Oceanwide Holdings Group’s $2.7 billion acquisition of U.S. life insurer Genworth Financial Inc and Chinese buyout firm Orient Hontai Capital’s $1.4 billion acquisition of U.S. mobile marketing firm AppLovin.
Financial services deals
The MoneyGram deal's demise is also the latest example of how CFIUS' focus on cyber security and the integrity of personal data is prompting it to block deals in sectors not traditionally associated with national security, such as financial services.
Other U.S. financial services deals by Chinese firms are waiting for approval from CFIUS, including HNA Group Co’s acquisition of hedge fund-of-funds firm SkyBridge Capital LLC from Anthony Scaramucci, the Trump administration’s former communications director.
SkyBridge and HNA did not immediately respond to requests for comment.
Dallas-based MoneyGram has approximately 350,000 remittance locations in more than 200 countries. Ant Financial was looking to take over MoneyGram not so much for its U.S. presence but to expand in growing markets outside of China.
Ant Financial and MoneyGram said they will now explore and develop initiatives to work together in remittance and digital payments in China, India, the Philippines and other Asian markets, as well as in the United States. This cooperation will take the form of commercial agreements, one of the sources said.
Any arrangements reached by Ant Financial and MoneyGram that do not involve a transaction would not be subject to review by CFIUS.
Some U.S. lawmakers, including Republican Senators Pat Roberts and Jerry Moran, had written to Treasury Secretary Steven Mnuchin, who also serves as chairman of CFIUS, to express concern that Ant Financial’s acquisition of MoneyGram could pose national security threats, arguing that the information of U.S. citizens, including military personnel, could be compromised.
Ant Financial had argued that MoneyGram’s data infrastructure would remain in the United States, with personal information encrypted or held in secure facilities on U.S. soil. It had also pointed to existing U.S. regulations that call for such protections.
CFIUS had approved a previous deal by Ant Financial, its acquisition in 2016 of Kansas City-based EyeVerify, which designs a mobile eye verification technology.
“What is more likely to happen at this point is that MoneyGram will sell to another company, and one company that has shown interest in the past is Euronet,” said Gil Luria, an equity analyst at D.A. Davidson & Co.
Ant Financial clinched an $18 per share all-cash deal to acquire MoneyGram in April, seeing off competition from U.S.-based Euronet Worldwide, which had made an unsolicited offer for MoneyGram and openly lobbied U.S. lawmakers, saying Ant's proposal created a national security risk.
“Euronet continues to believe there is compelling commercial logic to a combination between Euronet and MoneyGram. However, significant developments have been disclosed by MoneyGram since Euronet’s offer, and Euronet has not conducted any evaluation of the business in that time. While we continue to view a transaction with MoneyGram as logical, there is no guarantee any offer will be made or any transaction will ultimately occur,” Euronet said in a statement.
Ant Financial said it paid MoneyGram a $30 million termination fee for the deal’s collapse.
( Reporting by Greg Roumeliotis in New York; Additional reporting by Nikhil Subba and Vibhuti Sharma in Bengaluru; Editing by Lisa Shumaker and Cynthia Osterman )
" |
17,028 | 2,019 | "My.Games will launch global game store in Q4 | VentureBeat" | "https://venturebeat.com/2019/08/16/my-games-will-launch-global-game-store-in-q4" | "My.Games will launch global game store in Q4 My.Games Store
My.Games is a Russian game publisher that is going global, and today the company announced that its My.Games Store will debut in the fourth quarter.
Developers and publishers will be able to distribute free-to-play and premium games through the My.Games Store while gamers will benefit from a wide range of PC titles and unique gaming services.
“I’m pleased to announce the launch of My.Games Store, our global gaming platform,” said My.Games CEO Vasiliy Maguryan, in a statement. “Although our background is delivering free-to-play and premium titles to an Eastern European audience, the My.Games Store is the next step in bringing our expertise to a global market. We believe we can offer a wealth of support and knowledge to studios and publishers looking to reach both a Russian and a global audience.” My.Games Store will build on the success of My.Games’ Russian-speaking platform, Games.Mail.Ru, which currently has 13 million monthly active users. At launch, players will gain access to popular My.Games titles like Warface and Conqueror’s Blade, as well as a whole library of games including third-party titles. Future developers and publishers will be offered a 70/30 revenue split to distribute titles through the My.Games Store.
Above: My.Games' Conqueror's Blade.
The platform will also feature integration with two unique gaming services. Lootdog allows gamers to trade in-game items for real money securely, while Donation Alerts is a tool that empowers streamers to monetize their content.
“Our Games.Mail.ru platform has been tailor-made for the Russian-speaking audience. Over the years, we’ve created unique technology, monetization systems, and the expertise needed to succeed in this market. We are now ready for the next step — taking the platform international,” said Rodion Kotelnikov, head of My.Games Store, in a statement. “For our international partners, this will open the door to our platform’s multi-million audience, while the players will gain access to a broader range of games.” My.Games Store is currently in beta testing, with plans for a full-scale launch by the end of this year.
My.Games launched in May 2019, combining the gaming assets of Mail.ru Group and My.com. Revenues rose in Q2 by 33.7%, owing to the strong performance of existing titles including the Warface franchise, Conqueror's Blade, and mobile title Hustle Castle. In July, My.Games announced a new strategy mobile game, American Dad! Apocalypse Soon. Developed in partnership with FOX Next, the game is set to release on iOS and Android devices this fall.
" |
17,029 | 2,019 | "Warface scores 13 million users on PS4 and Xbox One in a year | Page 2 of 2 | VentureBeat" | "https://venturebeat.com/business/warface-scores-13-million-users-on-ps4-and-xbox-one-in-a-year/2" | "Exclusive Warface scores 13 million users on PS4 and Xbox One in a year My.Games' Warface title.
Why Warface has lasted
GamesBeat: What has helped Warface stay popular after quite a long time in the market? What do you think is still driving the game forward?
Pabiarzhyn: It's an interesting question. Starting from 2013, we've had a rule across the studio. We want to provide users with new content every month. Over all the years we've been doing so, it's helped us a lot. If you look at the different content inside Warface now, we have around 10 different PvP modes, 10 special operations, a lot of PvP maps. In each mode we have six or seven unique maps, and we have more than 100 unique, real, existing weapons in the game. Every month, players can find something new to them and try it out and keep playing.
We also have a very deep achievement system inside the game. There are more than 1,000 different achievements across all kinds of content. Every time players join the game, they can set their own long-term or short-term goals and keep playing to pursue them.
Starting in 2017, we introduced the battle pass system. This was very positively received by our players, so we’re trying to provide new seasons for battle pass and combine that with new seasons of ranking matches, to provide people with different kinds of new content every season. We want to reward their participation, so every time we create new rewards — new skins, new equipment, new badges, and so on. Every time a player joins the game, they know what to do, and they have goals to work toward.
Above: Warface's most popular playermodes
GamesBeat: Do you still identify some countries as places where Warface is the strongest, or do you feel like you have a worldwide audience now?
Pabiarzhyn: Talking about consoles, as I say, the United States is our main priority right now. A lot of people from the U.S. come into Warface each day, and we have a separate community department to talk to U.S. players and collect their feedback. We're seeing a big impact from the western audience in general on the franchise. We're making the western market a priority to develop the franchise worldwide.
Above: Warface in 2013 GamesBeat: How are things going on the esports front? Have you been able to measure how that impacts your user base? Pabiarzhyn: We have a very strong esports team at the Moscow office, which helps us organize that aspect of the game. We have regular competitions in the system with several leagues — common, casual, and pro master leagues and so on. Right now we’re implementing support for the ESL IP for PlayStation. In October or November we plan to have a big tournament on PS4 with the help of the PlayStation team.
Future platforms and competition
GamesBeat: How are you thinking about future platforms? Everyone sees the new PlayStation and Xbox coming next year.
Pabiarzhyn: We’ve already started communicating with the platforms about new hardware. We’ll support both of them as we can. We’d like to keep the game on the previous versions as well, though, and make it so people from the different platforms will be able to keep playing together.
GamesBeat: What do you view as the most direct competition for you? I saw the new Wargaming project at Gamescom, which reminded me a bit of Warface.
Pabiarzhyn: We’re in a unique situation as far as competition. Warface has a very long story in the market. We have a lot of content. That’s our strongest point. Sometimes we need to release content that will let people play Warface in different ways. For example, this autumn for the PC we’re introducing a new playable [class of characters], a robot with a heavy machinegun, and in the beginning of next year we’ll have that update available on consoles as well. We’re always trying to keep updating and providing players with something new. There aren’t many titles on the market, especially free-to-play titles, that have such a strong roadmap of new content.
Above: Elena Grigoryan is marketing director for My.Games.
Elena Grigoryan: Talking about competition and different platforms as far as Warface and shooters as a genre, we believe that the shooter genre is one of the key competencies at the My.Games team. We’re very happy with the success of Warface as our key franchise in the shooter genre, but in addition to Warface we have several shooters in our lineup, for PC and console, and for mobile as well. [My.Games recently added Panzer Dogs as a studio, which is making the shooter Tacticool on mobile]. The tactical category is one of the most successful ones on mobile. We’ve been developing in the genre for many years now, and we have a lot of people who are very experienced in the genre. We’re trying to develop our competence further.
One recent step, some months ago we announced a new initiative, My.Studio, an opportunity where we’ve invited independent studios and developers to submit their ideas for developing triple-A console shooters with us. As a company, we’ve committed to finance the development of the winner of this competition and help the studio with the legal aspects, the organization aspects, and other elements of the business side, in order to let them focus on creating the game. We’re going to announce the winner late this year or in the beginning of next year.
We want to provide an opportunity for a studio that would otherwise have no way of developing their idea the way they want to. We’ll try to provide all our expertise to help them bring it to market. For now, our team is working hard to choose the best one out of all the applications we have at the current stage. We’re very pleased with the huge interest in this initiative around the world, and we’ll be happy to announce the winner very soon.
Pabiarzhyn: At My.Games we have many different studios with different kinds of experience on different platforms. That’s something we can bring to the table and share with a newcomer, to make sure that their future games will be as technologically advanced as they can be.
Ongoing development
Above: Warface on the consoles is most popular in the U.S.
GamesBeat: Where is Warface made? Is it the responsibility of a single studio in a single city, or do you have multiple studios working on it? How many people are involved, and how does it compare to some of the other shooter projects that you have going? Pabiarzhyn: Overall we have more than 300 people working on Warface. Our main studio developing Warface, making the main content for the game, is based in Kiev, Ukraine. That studio is between 170 and 200 people right now. We have another studio based here in Moscow that focuses on the console versions. They created the console versions of the game and continue to support them. Here in Moscow there are around 30 or 40 people working on the console versions. We also have some amount of people working on content creation at our studio in Minsk, Belarus. All three teams are working as one larger team.
Starting last year and into the beginning of this year, we've dedicated a number of people to work on mobile as well. We have Warface mobile in development as well, and I believe the project should go live on iOS and Android next year.
GamesBeat: Do you still use the CryEngine for development?
Pabiarzhyn: For Warface on PC and consoles, yes, we're still using CryEngine. The game's engine can't be changed over very easily. As far as our new titles, though, we'll be using Unreal Engine 4.0. The Unreal Engine is very popular all around the globe, and there are a lot of specialists who have very good development skills in the engine. It's especially popular here in the countries we're based in, so we have good opportunities to hire quality engineers and developers who can help us create amazing products in the future.
The future
GamesBeat: How do you think about whether or not to replace Warface with something brand-new, versus keeping it going as it is?
Pabiarzhyn: Right now the Warface franchise is still growing on console and PC, even though the game is already eight years old. We're growing our audience and growing our revenue. It's still a good opportunity for us, so we want to keep the project going. We're putting additional resources into the development of the project. We also have different plans about new projects based on the Warface franchise, though, and we're researching that question right now.
" |
17,030 | 2,019 | "Warface scores 13 million users on PS4 and Xbox One in a year | VentureBeat" | "https://venturebeat.com/business/warface-scores-13-million-users-on-ps4-and-xbox-one-in-a-year/view-all" | "Exclusive Warface scores 13 million users on PS4 and Xbox One in a year My.Games' Warface title.
My.Games' first-person shooter online game Warface is celebrating its first anniversary on the PlayStation 4 and Xbox One today, and the Russian company said it has 13 million players on the consoles so far.
That’s a solid addition to the Warface audience, which now numbers more than 80 million registered players. Made by Crytek Kiev, Warface first debuted as a free-to-play military combat game in 2013. Earlier this year, a team of developers from Crytek Kiev left to form Blackwood Games, which is now in charge of development for the Warface franchise.
I spoke with Ivan Pabiarzhyn, Warface franchise lead at My.Games; and Elena Grigoryan, marketing director of My.Games, about the latest on Warface. In the future, don’t be surprised to see a mobile version of Warface on iOS and Android.
Here’s an edited transcript of our interview.
Above: Ivan Pabiarzhyn, Warface franchise lead
Ivan Pabiarzhyn: It's been a year since Warface launched on PS4 and Xbox One. We're amazed by the success of the project in this new market for our team. In the first three months, we acquired around 5 million [players], and we're still growing. At present it's about 13 million people. We've experienced a lot of interesting things about the console audience, about what they want from a first-person shooter. It's been a very valuable experience for us.
GamesBeat: How has that fit into the larger overall userbase? Pabiarzhyn: The players coming to console were almost all new to us. They’d never played Warface on PC before. That’s been a big benefit to the franchise by itself. Talking about the total players, our player base for the franchise as a whole is around 80 million people overall, all over the globe. Console has become a way to acquire a new audience for our products. It speaks well for the community overall.
GamesBeat: How do people play? Is it on their own platforms, or do you have crossplay between console and PC? Pabiarzhyn: Currently we don’t have crossplay, but we’ve already had some communication with the teams at PlayStation and Xbox. They’ve provided us with documentation and relationships around cross play. Next year we plan to support cross play across both consoles, across Xbox and PlayStation.
How the consoles differ
GamesBeat: As far as the different characteristics of the players on the platforms, what do you notice about the differences between PS4 and Xbox One and PC?
Pabiarzhyn: There are some differences. But our game was released in 2013 on PC, originally. We have very well-optimized system performance in the game client. It runs very well on both PlayStation and Xbox One, on the basic versions and the pro versions as well. We don't think it has affected the experience of the player base. The gameplay is very smooth.
Talking about the console markets, the behavior between the two platforms is the same. If you compare console to PC, there are some small differences in user behavior. For example, on PC, more players — around 60 percent — prefer to play [player-vs.-player] matches. On console it’s the other way around. The players tend to prefer to play [player-vs.-environment]. The percentage is about 60 percent PvE and 40 percent PvP.
Above: Warface stats on the consoles GamesBeat: Geographically, is there a considerable spread? I remember that Warface on PC was very strong in eastern Europe.
Pabiarzhyn: The console market has brought a lot of people from the United States. Around 40 percent of our console players are from the U.S., and we’re very pleased by that metric.
GamesBeat: What’s the most interesting data you’ve discovered from the first year of the console game? Pabiarzhyn: Many players are focusing on the classic modes in PvP — team deathmatch, free-for-all, and storm, the objective-based mode, which is a very popular mode around the globe by itself. What’s interesting for us, though, is that many players on console are preferring to play the storyline mode, special operations. That’s been a big part of attracting a new user base over the years, and it’s also helped bring back players who’ve left the game for one reason or another. Players who’ve been gone from the game for 30 days or more, when we provide new content month by month in that mode, it’s been a good way of making the game more interesting and bringing them back each month.
GamesBeat: Do you notice, as far as monetization goes, whether any one of the platforms has done particularly well for you? Pabiarzhyn: We can’t distinguish differences between platforms around the monetization. Everything seems very similar.
Why Warface has lasted
GamesBeat: What has helped Warface stay popular after quite a long time in the market? What do you think is still driving the game forward?
Pabiarzhyn: It's an interesting question. Starting from 2013, we've had a rule across the studio. We want to provide users with new content every month. Over all the years we've been doing so, it's helped us a lot. If you look at the different content inside Warface now, we have around 10 different PvP modes, 10 special operations, a lot of PvP maps. In each mode we have six or seven unique maps, and we have more than 100 unique, real, existing weapons in the game. Every month, players can find something new to them and try it out and keep playing.
We also have a very deep achievement system inside the game. There are more than 1,000 different achievements across all kinds of content. Every time players join the game, they can set their own long-term or short-term goals and keep playing to pursue them.
Starting in 2017, we introduced the battle pass system. This was very positively received by our players, so we’re trying to provide new seasons for battle pass and combine that with new seasons of ranking matches, to provide people with different kinds of new content every season. We want to reward their participation, so every time we create new rewards — new skins, new equipment, new badges, and so on. Every time a player joins the game, they know what to do, and they have goals to work toward.
Above: Warface’s most popular playermodes GamesBeat: Do you still identify some countries as places where Warface is the strongest, or do you feel like you have a worldwide audience now? Pabiarzhyn: Talking about consoles, as I say, the United States is our main priority right now. A lot of people from the U.S. come into Warface each day, and we have a separate community department to talk to U.S. players and collect their feedback. We’re seeing a big impact from the western audience in general on the franchise. We’re making the western market a priority to develop the franchise worldwide.
Above: Warface in 2013 GamesBeat: How are things going on the esports front? Have you been able to measure how that impacts your user base? Pabiarzhyn: We have a very strong esports team at the Moscow office, which helps us organize that aspect of the game. We have regular competitions in the system with several leagues — common, casual, and pro master leagues and so on. Right now we’re implementing support for the ESL IP for PlayStation. In October or November we plan to have a big tournament on PS4 with the help of the PlayStation team.
Future platforms and competition
GamesBeat: How are you thinking about future platforms? Everyone sees the new PlayStation and Xbox coming next year.
Pabiarzhyn: We’ve already started communicating with the platforms about new hardware. We’ll support both of them as we can. We’d like to keep the game on the previous versions as well, though, and make it so people from the different platforms will be able to keep playing together.
GamesBeat: What do you view as the most direct competition for you? I saw the new Wargaming project at Gamescom, which reminded me a bit of Warface.
Pabiarzhyn: We’re in a unique situation as far as competition. Warface has a very long story in the market. We have a lot of content. That’s our strongest point. Sometimes we need to release content that will let people play Warface in different ways. For example, this autumn for the PC we’re introducing a new playable [class of characters], a robot with a heavy machinegun, and in the beginning of next year we’ll have that update available on consoles as well. We’re always trying to keep updating and providing players with something new. There aren’t many titles on the market, especially free-to-play titles, that have such a strong roadmap of new content.
Above: Elena Grigoryan is marketing director for My.Games.
Elena Grigoryan: Talking about competition and different platforms as far as Warface and shooters as a genre, we believe that the shooter genre is one of the key competencies at the My.Games team. We’re very happy with the success of Warface as our key franchise in the shooter genre, but in addition to Warface we have several shooters in our lineup, for PC and console, and for mobile as well. [My.Games recently added Panzer Dogs as a studio, which is making the shooter Tacticool on mobile]. The tactical category is one of the most successful ones on mobile. We’ve been developing in the genre for many years now, and we have a lot of people who are very experienced in the genre. We’re trying to develop our competence further.
One recent step, some months ago we announced a new initiative, My.Studio, an opportunity where we’ve invited independent studios and developers to submit their ideas for developing triple-A console shooters with us. As a company, we’ve committed to finance the development of the winner of this competition and help the studio with the legal aspects, the organization aspects, and other elements of the business side, in order to let them focus on creating the game. We’re going to announce the winner late this year or in the beginning of next year.
We want to provide an opportunity for a studio that would otherwise have no way of developing their idea the way they want to. We’ll try to provide all our expertise to help them bring it to market. For now, our team is working hard to choose the best one out of all the applications we have at the current stage. We’re very pleased with the huge interest in this initiative around the world, and we’ll be happy to announce the winner very soon.
Pabiarzhyn: At My.Games we have many different studios with different kinds of experience on different platforms. That’s something we can bring to the table and share with a newcomer, to make sure that their future games will be as technologically advanced as they can be.
Ongoing development
Above: Warface on the consoles is most popular in the U.S.
GamesBeat: Where is Warface made? Is it the responsibility of a single studio in a single city, or do you have multiple studios working on it? How many people are involved, and how does it compare to some of the other shooter projects that you have going? Pabiarzhyn: Overall we have more than 300 people working on Warface. Our main studio developing Warface, making the main content for the game, is based in Kiev, Ukraine. That studio is between 170 and 200 people right now. We have another studio based here in Moscow that focuses on the console versions. They created the console versions of the game and continue to support them. Here in Moscow there are around 30 or 40 people working on the console versions. We also have some amount of people working on content creation at our studio in Minsk, Belarus. All three teams are working as one larger team.
Starting last year and into the beginning of this year, we've dedicated a number of people to work on mobile as well. We have Warface mobile in development as well, and I believe the project should go live on iOS and Android next year.
GamesBeat: Do you still use the CryEngine for development?
Pabiarzhyn: For Warface on PC and consoles, yes, we're still using CryEngine. The game's engine can't be changed over very easily. As far as our new titles, though, we'll be using Unreal Engine 4.0. The Unreal Engine is very popular all around the globe, and there are a lot of specialists who have very good development skills in the engine. It's especially popular here in the countries we're based in, so we have good opportunities to hire quality engineers and developers who can help us create amazing products in the future.
The future
GamesBeat: How do you think about whether or not to replace Warface with something brand-new, versus keeping it going as it is?
Pabiarzhyn: Right now the Warface franchise is still growing on console and PC, even though the game is already eight years old. We're growing our audience and growing our revenue. It's still a good opportunity for us, so we want to keep the project going. We're putting additional resources into the development of the project. We also have different plans about new projects based on the Warface franchise, though, and we're researching that question right now.
" |
17,031 | 2,018 | "Jaunt shows off new augmented reality 360-degree full body selfies | VentureBeat" | "https://venturebeat.com/2018/08/30/jaunt-shows-off-new-augmented-reality-360-degree-full-body-selfies" | "Jaunt shows off new augmented reality 360-degree full body selfies
Jaunt made its name as a creator of 360-degree videos and cinematic VR experiences.
But under new management, the company has shifted its focus to supplying mixed reality technology to other companies. Today, it's showing off some of the cool research it has been doing in volumetric capture, such as creating 360-degree digital avatars for augmented reality applications.
I tried it out at Jaunt’s headquarters in San Mateo, California, and it was quick and seamless. Jaunt demonstrated how it can capture a 360-degree image of a person with six Intel RealSense depth cameras. Jaunt stitches the images from the cameras into a single 3D avatar, and then inserts the avatars into augmented reality streams that can be sent to mobile devices in real time. I saw how they could capture me instantly and turn the data into a shareable image in no time.
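Jaunt has not published the internals of its rig, but the general shape of a multi-camera depth capture loop is straightforward to sketch. The code below assumes the pyrealsense2 SDK and an already-calibrated multi-camera setup; it grabs aligned depth and color frames from each RealSense device and back-projects them into per-camera point clouds, which a production system would then fuse into a single textured avatar.

```python
# Illustrative sketch of multi-camera RealSense capture with pyrealsense2.
# Camera count, resolutions, and the fusion step are assumptions, not
# Jaunt's actual pipeline.
import numpy as np
import pyrealsense2 as rs

ctx = rs.context()
serials = [dev.get_info(rs.camera_info.serial_number) for dev in ctx.devices]

pipelines = []
for serial in serials:
    pipe = rs.pipeline(ctx)
    cfg = rs.config()
    cfg.enable_device(serial)  # bind this pipeline to one physical camera
    cfg.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
    cfg.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
    pipe.start(cfg)
    pipelines.append(pipe)

pc = rs.pointcloud()
align = rs.align(rs.stream.color)  # align depth pixels to the color frame

def capture_volumetric_frame():
    """Return one point cloud per camera for a single moment in time."""
    clouds = []
    for pipe in pipelines:
        frames = align.process(pipe.wait_for_frames())
        depth = frames.get_depth_frame()
        color = frames.get_color_frame()
        pc.map_to(color)              # attach color texture coordinates
        points = pc.calculate(depth)  # back-project depth into 3D vertices
        xyz = np.asanyarray(points.get_vertices()).view(np.float32).reshape(-1, 3)
        clouds.append(xyz)
    # A real rig would apply each camera's calibrated extrinsics here and
    # merge the clouds into one mesh before streaming it to the viewer.
    return clouds
```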
The new AR volumetric capture selfies will eventually be part of a growing XR platform, a business-to-business software solution that Jaunt licenses to its partners, who distribute augmented, virtual, and mixed reality (collectively known as XR) content through their own applications and channels. Jaunt is formally launching the volumetric capture technology on its platform for customers in the fourth quarter.
“Our objective is to make streaming of experiences like this very easy and portable,” said Arthur van Hoff, chief technology officer and founder of Jaunt, in an interview with VentureBeat. “We have been strong in 360-degree video and immersive content for VR, and now we are figuring out what we can contribute for AR experiences.”
Jaunt still has a big VR film studio in Santa Monica, California, where it puts to use its expertise in telling stories in XR. In the past, it has launched more than 350 productions, including numerous virtual reality films. But today, the company only starts work on a VR film if a partner pays it to do so.
Above: Jaunt’s AR avatars can be created in real time.
“We want to be the global partner of choice for producing and distributing immersive content,” said Jaunt CEO George Kliavkoff, in an interview with VentureBeat. “In the past, we were consumer-focused. We wanted to be the Netflix of VR and have people come to a Jaunt-owned and operated site. We have repositioned as a business-to-business company.” With the 360-degree selfies, Jaunt offers volumetric content as another medium for its partners to tell stories and create intellectual property. Volumetric capture (as captured through the six-camera rig) makes it possible to film a subject from a 360-degree perspective, then output an avatar that matches the subject’s appearance, movements, and vocals in the form of a shareable, to-scale, augmented reality asset.
Instead of an expensive green screen studio, Jaunt uses a simple capture stage with a streamlined setup and leverages the same streaming technology it developed for delivering high-quality VR and AR experiences. The ease of use and established distribution tools are ideal for companies looking for an efficient way to add cutting-edge immersive content to their media strategies. The stage is pretty portable, considering what it can do.
“We wanted to be able to process content in real time, not in weeks, and make it portable and shareable,” van Hoff said. “I recorded a 3D capture of my parents when they were visiting from Holland. That’s kind of special, as that’s something I’ll never forget.” Jaunt’s investment in R&D signifies the company’s focus on building business-ready solutions for creating and sharing next-generation immersive content. Other initiatives in the pipeline include immersive content driven by machine learning and technologies foundational to enabling premium XR experiences.
Since 2013, Jaunt has been on the leading edge of immersive content, quickly becoming the premier producer and distributor of consumer cinematic VR by creating an end-to-end pipeline for creating and delivering VR experiences. But the VR market for consumers didn’t materialize in the way everyone hoped after the launch of the first major VR headsets, such as the Oculus Rift and the HTC Vive in 2016.
Above: Jaunt’s AR avatars can be streamed to mobile devices.
“Some analysts say XR is a $120 billion opportunity in 2021,” said Kliavkoff. “We believe in the size of the market, but we’re not sure if the analysts are correct on the timing. In the long run, we think AR is the bigger opportunity.” So in 2016, Jaunt hired Kliavkoff, a former Hearst executive, as its CEO. And he has steered the company beyond the consumer VR space, transforming it into a B2B-focused company that helps partners deliver new forms of XR content spanning augmented, virtual, and mixed realities.
“Standing as one of the early, major players in the virtual and augmented reality space, we’ve taken our consumer-facing technology and storytelling capabilities and made them available to our partners,” said van Hoff. “Our B2B focus addresses a need for companies to be able to deliver AR and VR, and we see our XR Platform’s real-time volumetric capabilities as yet another avenue for them to share engaging stories through their own channels. It’s just one of the ways we’re continuing to push the boundaries of XR content at Jaunt.” The new Jaunt XR platform debuted in December 2017, and it can be put to use in a variety of ways. Sky broadcasting was the first customer announced for the Jaunt XR Platform at the end of 2017. Now customers include Diageo and NTT Data. The Jaunt XR Platform allows augmented reality assets, virtual reality content, and 2D assets to be delivered across devices and live side-by-side with existing media libraries.
In the past, all the VR, AR, and mixed reality formats had different technology chains, but Jaunt has united them so they can all be viewed through a single player.
“This technology platform is that single solution for managing all the different media types and playing them back holistically inside of one system, rather than having to build them separately,” said David Moretti, vice president of corporate development and operations, in an interview.
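Neither Moretti nor Jaunt has published the player's API, but the design choice he describes can be illustrated with a small, purely hypothetical sketch: a single playback facade that dispatches on asset type rather than maintaining separate pipelines per format.

```python
# Hypothetical illustration of a "single player" for mixed media types.
# Names and asset kinds are invented for the example; this is not Jaunt's API.
from dataclasses import dataclass

@dataclass
class Asset:
    uri: str
    kind: str  # e.g. "video_2d", "video_360", "volumetric_ar"

class UnifiedPlayer:
    def __init__(self) -> None:
        # one registry of format-specific renderers behind a single facade
        self._renderers = {
            "video_2d": self._play_flat,
            "video_360": self._play_spherical,
            "volumetric_ar": self._play_volumetric,
        }

    def play(self, asset: Asset) -> None:
        self._renderers[asset.kind](asset)

    def _play_flat(self, asset: Asset) -> None:
        print(f"decoding {asset.uri} as a flat video stream")

    def _play_spherical(self, asset: Asset) -> None:
        print(f"projecting {asset.uri} onto a 360-degree sphere")

    def _play_volumetric(self, asset: Asset) -> None:
        print(f"streaming {asset.uri} as a point-cloud sequence")

UnifiedPlayer().play(Asset("https://example.com/avatar-stream", "volumetric_ar"))
```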
In 2017, Jaunt also opened an office in Shanghai, which allows it to distribute content in China through a partnership with Shanghai Media Group and China Media Capital.
Jaunt’s investors include the Walt Disney Company, Evolution Media Partners, CMC, Highland Capital Partners, Redpoint Ventures, SMG, Axel Springer, ProSiebenSat.1 SE, The Madison Square Garden Company, Google Ventures, Peter Gotcher, and Sky. Jaunt currently has 100 employees.
" |
17,032 | 2,018 | "Jaunt acquires Teleporter, Personify's real-time 3D AR asset streamer | VentureBeat" | "https://venturebeat.com/2018/09/10/jaunt-acquires-teleporter-personifys-real-time-3d-ar-asset-streamer" | "Jaunt acquires Teleporter, Personify's real-time 3D AR asset streamer Jaunt's AR avatars can be streamed to mobile devices.
If you’ve been waiting for live, hologram-like human beings to pop up in AR or VR experiences, that wait’s almost over. Mixed reality company Jaunt announced today that it has acquired Teleporter, a solution for turning people and objects into real-time 3D video streams that can be displayed within AR and VR apps.
Developed by a Chicago-based team at Personify, Teleporter is described as software to capture, process, and stream lifelike AR assets online. Jaunt has acquired both the Teleporter system and the engineers behind it, including a seven-person team led by former Personify CTO Simon Venshtain. The team and its patent portfolio will support the Jaunt XR Platform, a B2B solution designed to let customers distribute mixed reality assets through their own applications.
In an August profile of Jaunt, VentureBeat explained how the company is using an array of six Intel RealSense cameras to capture a circular video of a moving person, turning the video and depth data into a volumetric stream that represents the person's body and motions. The person can then be inserted into an AR or VR app as a hologram-like avatar, interacting in real time with the user — including gesturing in 3D and talking.
Real-world applications for the technology include concepts previously only imagined in science fiction. An AR mapping app could include a way to call for a pop-up human assistant, for instance, while a hospital or doctor’s office could dial up a nurse to provide personalized medical assistance.
Jaunt founder and CTO Arthur van Hoff says that the Teleporter acquisition will help the company “move further into the extended reality arena with the Jaunt XR Platform at the core of our business. We're honing in on fully immersive virtual, mixed, and augmented reality experiences, and are thrilled to advance those technologies with the help of our new Chicago-based team.” The new Chicago office will join Jaunt facilities in San Mateo and Los Angeles, California; New York City; and Shanghai.
" |
17,033 | 2,018 | "Mixed reality video platform Jaunt gets a new CEO | VentureBeat" | "https://venturebeat.com/2018/09/18/mixed-reality-video-platform-jaunt-gets-a-new-ceo" | "Mixed reality video platform Jaunt gets a new CEO Jaunt XR platform adds AR and mixed reality to VR content.
Jaunt, which pivoted from making cinematic VR films to providing platform tools for mixed reality, announced that George Kliavkoff is stepping down as CEO. He will be replaced by Mitzi Reaugh, Jaunt's former vice president of global business development and strategy, effective October 1.
Reaugh has been a leader behind the launch and growth of the Jaunt XR Platform, a business-to-business solution that enables partners to distribute augmented, virtual, and mixed reality assets through their own applications and channels.
She succeeds Kliavkoff, who will remain an active member of the Jaunt board of directors. Kliavkoff is leaving the company to take a new position in entertainment that will be announced shortly.
For the past two years, Reaugh has worked closely with Kliavkoff and the executive leadership team, and she was critical in developing and implementing the strategy that evolved Jaunt into a B2B company focused on providing the full spectrum of immersive solutions for global partners.
Reaugh has spent the majority of her career working on innovative digital media growth initiatives. While at NBCUniversal, she was part of the team that formed and launched Hulu. She led strategy and business development for Miramax, and during her time at The Chernin Group she focused on new subscription video platforms. In addition to her role as CEO, Reaugh will join Jaunt's board.
Above: Mitzi Reaugh is CEO of Jaunt.
“It continues to be a pleasure to work at Jaunt, driving the growth of this business, and I’m honored to have been chosen by the board of directors to lead the company forward,” Reaugh said, in a statement. “The future of content is immersive, and Jaunt will continue to deliver industry-leading production and technology that brings scale, impact and ease to our global partners.” In addition to Reaugh, two other Jaunt executives have newly expanded roles. Dominic Collins moves from general manager of international to president, product engineering and international, overseeing all international efforts as well as product and engineering. David Moretti is being promoted to president, strategy and business development, responsible for overseeing the sales and business development teams, as well as the legal department.
Rounding out the executive team are Arthur van Hoff, founder and chief technology officer, and Fabrice Cantou, Jaunt’s chief financial officer.
“This is an exciting new chapter in Jaunt’s story, as we continue our tremendous growth as a leader in immersive content creation and distribution,” said van Hoff, in a statement. “George and Mitzi together have set us on a clear business course to be the premier partner for brands and publishers around the world, and their close collaboration has led to a smooth transition. Mitzi has been instrumental in our evolution, and her extensive experience in emerging content formats, digital media, entertainment and technology makes her the natural choice. I have the utmost confidence in her leadership as our new CEO, and in our entire team.” In August, Jaunt officially announced the expansion of the XR Platform with an innovative approach to volumetric capture that brings fast, affordable, scalable AR content production and distribution capabilities to brands and publishers across the globe. The company also recently acquired Personify’s Teleporter team and technology to accelerate the development of volumetric capture and the XR Platform.
" |
17,034 | 2,018 | "Jaunt drops VR projects, will focus solely on AR and XR | VentureBeat" | "https://venturebeat.com/2018/10/15/jaunt-drops-vr-projects-will-focus-solely-on-ar-and-xr" | "Jaunt drops VR projects, will focus solely on AR and XR. Jaunt is working on new augmented reality applications.
Jaunt’s last two months have been pretty eventful: Originally known for virtual reality experiences, the company recently shifted into providing augmented and extended reality technologies to other companies, and even installed a new CEO.
Today, it unexpectedly announced that it’s winding down its VR products and services to focus its efforts on AR and XR creation technologies.
The company says that it’s going to focus its resources on “further developing technologies that allow for the scaled creation of AR content.” Back in August, Jaunt showed off a volumetric body scanning technology capable of streaming a real-time AR video feed of a person as recorded by six depth-sensing cameras — effectively hologram-style content.
Weeks later, Jaunt acquired Teleporter, a streaming solution capable of sending volumetric AR content over the internet.
These offerings appear to be core to Jaunt’s efforts going forward. Its San Mateo and Chicago teams will continue to work on these volumetric AR/XR technologies, but the company says it will shed staff as it “smoothly and professionally” discontinues VR-related content services and products in concert with its customers.
Augmented and mixed reality hardware has yet to take off outside of industrial and enterprise applications, as the headsets have thus far proved unappealing to the wider consumer marketplace. However, streaming of holographic and otherwise ultra high-resolution 3D content is expected to increase in both importance and popularity as companies work to integrate live 3D assets into live help assistants and personalized online tutorial applications. Unlike current cellular and Wi-Fi networks, next-generation 5G wireless networks are expected to have the necessary bandwidth and latency to support these services.
" |
17,035 | 2,018 | "Jaunt in talks to sell VR business | VentureBeat" | "https://venturebeat.com/2018/11/05/jaunt-in-talks-to-sell-vr-business" | "Exclusive: Jaunt in talks to sell VR business. Jaunt is working on new augmented reality applications.
Jaunt is in talks to sell its cinematic virtual reality business to other companies, VentureBeat has learned. One of the bidders is Spinview Global, the virtual reality business-to-business content management platform, according to a source familiar with the matter.
Jaunt recently said it would end its VR efforts and focus exclusively on augmented reality and extended reality (XR) technologies. A spokesman for the San Mateo, California-based company confirmed that Jaunt is in talks with multiple parties to sell the VR business.
Jaunt still has a big VR film studio in Santa Monica, California, where it puts to use its expertise in telling stories in XR. In the past, it has launched more than 350 productions, including numerous virtual reality films. Jaunt has invested heavily in its camera hardware and VR player.
Spinview hopes to bolster its own VR offering as it moves into 2019. It also just purchased Agority, a VR communications app. Spinview is based in London and Stockholm.
" |
17,036 | 2,019 | "Apple hires Jaunt VR founder and multi-camera 3D expert Arthur van Hoff | VentureBeat" | "https://venturebeat.com/2019/04/09/apple-hires-jaunt-vr-founder-and-multi-camera-3d-expert-arthur-van-hoff" | "Apple hires Jaunt VR founder and multi-camera 3D expert Arthur van Hoff.
Apple’s depth-sensing iPhone cameras have already enabled features such as Face ID and portrait mode photography — now the company has hired 3D camera expert Arthur van Hoff to serve as a senior architect for an unnamed project. The new hire was first reported by Variety today, with van Hoff’s LinkedIn account listing his start date with Apple as April.
As founder of VR video company Jaunt and inventor of its Jaunt One camera system, a rig designed to bring 360-degree 3D to virtual reality headset wearers, van Hoff has decades of experience in developing dual- and multi-camera photography products. In an August 2018 interview with VentureBeat, he discussed a new depth camera-based capture system designed to easily create volumetric 3D selfies.
Though it was known for its pioneering VR work, Jaunt began to offload some of its key assets last year, and dropped its VR projects and cinematic VR business in favor of focusing on AR and XR. Three months ago, van Hoff signed on to advise AI vision startup and RED partner Lucid on how to bring AI, machine learning, and 3D vision to an increasing range of mobile devices, though he apparently ended that arrangement before joining Apple.
Van Hoff’s role with Apple is, as is the case with most of its new hires, unknown. While Variety reports that Apple previously hired former Jaunt engineers to work on varied projects ranging from AR and camera systems to computer vision, van Hoff could easily be involved with any or all of them. Apple is reportedly ramping up next-generation AR software and hardware, more sophisticated depth-sensing 3D cameras for iPhones, and computer vision/AI projects inside and outside the automotive realm.
Given his expertise in creating cinematic VR video with Jaunt, however, the greatest likelihood is that van Hoff will assist Apple in developing consumer and/or professional applications for future devices with depth-sensing cameras. That could mean enhancing iOS’s Camera or FaceTime apps with volumetric 3D capture capabilities, or assisting participants in Apple’s new Apple TV+ video creation program with bringing 3D into their shows.
" |
17,037 | 2,019 | "Louis Vuitton partners with Riot Games on League of Legends World Championship | VentureBeat" | "https://venturebeat.com/2019/09/23/louis-vuitton-partners-with-riot-games-on-league-of-legends-world-championship" | "Louis Vuitton partners with Riot Games on League of Legends World Championship. Louis Vuitton created the new League of Legends trophy case.
Louis Vuitton and Riot Games announced they will collaborate on the League of Legends World Championship that kicks off October 2 and culminates in Paris on November 10.
For Riot, the partnership is a new high mark for esports sponsorships, as Louis Vuitton is a high-end luxury brand that is nonendemic for esports, meaning it normally doesn’t have anything to do with gaming. But Louis Vuitton specializes in creating trophy gear for sporting events, and so it is collaborating with Riot on a one-of-a-kind Trophy Travel Case to hold the Summoner’s Cup, which goes to the world champions in League of Legends. Louis Vuitton will also be a global partner for the League of Legends World Championship.
League of Legends has 8 million concurrent daily players, and the world championship is the biggest esports tournament in terms of viewership, said Riot Games head of esports Naz Aletaha in an interview with GamesBeat. Last year, 99.6 million people tuned in to watch the world championship.
“First and foremost, it is a testament to the growth and scale and relevance of League of Legends as the global leader in the esports industry,” Aletaha said. “Louis Vuitton has a history of being associated with some of the most prized or coveted trophies in the sports world, like the FIFA World Cup, where they also produce a trophy case. And so, for us, as we look to really elevate the most prestigious moment of our season, we thought that there was no better partner than Louis Vuitton. We think the coming together of the companies will be really great for the sport.”
Every year, Riot Games organizes the League of Legends World Championship for the planet’s best players and teams, which this year begins October 2 in Berlin for play-ins and group stages, continues in Madrid starting October 26 for the quarterfinals and semi-finals, and culminates with the final in Paris on November 10.
Louis Vuitton will create a bespoke trunk, the first of its kind for an esports championship, that features both traditional Louis Vuitton savoir-faire along with cutting-edge, high-tech elements inspired by the League of Legends universe.
Louis Vuitton and Riot Games also will soon announce unique champion skins and a capsule collection designed by Nicolas Ghesquière, Louis Vuitton’s artistic director of women’s collections, along with other League of Legend digital assets.
Above: League of Legends trophy case, created by Louis Vuitton
That’s pretty heady stuff for the young field of esports. League of Legends has been around for 10 years, and the world championship competition now boasts more than 100 professional esports teams with 800 pro players and millions of fans. Louis Vuitton has been creating fashion and accessories since 1854.
Louis Vuitton will also have a major presence at next year’s world championship, when Riot Games goes back to China. The trophy trunk for League of Legends will be on display at the Eiffel Tower for a couple of days before the November 10 finals.
“That’s the first time the two world finalist teams will see it onstage as part of the opening ceremony,” Aletaha said. “What we find really exciting about Louis Vuitton is how iconic of a brand they are, how aspirational they are as a brand, and the fact that they’ve been, you know, this classic fashion house for over 160 years. And they’re able to really take tradition, and blend it with innovation.” Aletaha said the audience is large, the fan base is engaged, and the size of esports compared to other entertainment is really appealing for brands like Louis Vuitton. MasterCard, State Farm, Dell/Alienware, and Nike are also big supporters of League of Legends.
The trophy for the World Championship itself, the Summoner’s Cup, is made by Thomas Lyte. As one of the foremost trophy makers in the world, the company has produced trophies for prestigious sporting events such as the FA Cup, Six Nations Rugby, the Australian Open, and more. Riot worked closely with Thomas Lyte to make sure that it captured the essence of League of Legends and that the sport’s highest prize had symbolism pulled from all aspects of the game.
" |
17,038 | 2,015 | "Look out, 'brain games' -- the Federal Trade Commission is cracking down | VentureBeat" | "https://venturebeat.com/2015/01/20/look-out-brain-games-the-federal-trade-commission-is-cracking-down" | "Look out, ‘brain games’ — the Federal Trade Commission is cracking down. Lumosity is one of the sites aiming brain training at adults.
Many parents will do anything to ensure their children are smart and healthy, but one company just got in trouble with the Federal Trade Commission for preying on those desires with potentially false claims.
Focus Education, a company that develops “edutainment” software, has settled a complaint with the FTC regarding claims it made about one of its products. The government agency alleged that Focus Education marketed its Jungle Rangers game by telling parents that it would help improve school performance, attention, and behavior in children. The company even claimed that it could help alleviate the symptoms of attention deficit hyperactivity disorder (ADHD). As part of the settlement, Focus Education agreed to no longer mislead people by suggesting the cognitive claims are scientifically proven. Americans alone spent approximately $1.3 billion on brain games in 2013, and millions of people subscribe to websites that promise to improve mental functions.
We’ve asked Focus Education for a comment, and we’ll update this story with its reply.
Jungle Rangers sells for around $215, and Focus Education advertised the product with an infomercial that featured children saying their school work had improved. During a 12-month period from mid-2012 to mid-2013, the company generated sales of nearly $4.5 million, according to the FTC.
“This case is the most recent example of the FTC’s efforts to ensure that advertisements for cognitive products, especially those marketed for children, are true and supported by evidence,” Bureau of Consumer Protection director Jessica Rich said. “Many parents are interested in products that can improve their children’s focus, behavior, and grades, but companies must back up their brain training claims with reliable science.” Jungle Rangers is hardly the only product on the market making unsubstantiated claims about improving mental functions. In 1996, Colorado company Infant Entertainment started a line of toys and multimedia products that it advertised as helping develop brains in babies and small children through the use of music and other techniques. In 2006, the Campaign for a Commercial-Free Childhood brought a complaint against Infant Entertainment to the FTC — although the group decided not to take action since the company willingly removed many questionable claims and testimonials from its website.
The brain-game market is also a growing industry for adults. Video game publisher Nintendo kicked off the craze with its line of Brain Age releases for the Nintendo DS starting in 2006. Brain Age: Train Your Brain in Minutes a Day featured several short games that designer Dr. Ryuta Kawashima claimed would increase blood flow to the prefrontal cortex.
Nintendo targeted the Brain Age games at the aging “Baby Boomer” generation.
In recent years, other companies have picked up where Nintendo left off. One notable company, Lumosity, advertises its web-based mental-training services online, during podcasts, and on National Public Radio. It claims that it can keep your brain young and that its games are “based on neuroscience.” There’s just one problem: Scientists do not agree with that claim.
Late last year, Stanford University Center on Longevity and the Berlin Max Planck Institute for Human Development released a joint statement claiming sites like Lumosity do not help the brain. More than 70 of the world’s top cognitive psychologists and neuroscientists signed the release.
Here’s the heart of the statement: “The strong consensus of this group is that the scientific literature does not support claims that the use of software-based ‘brain games’ alters neural functioning in ways that improve general cognitive performance in everyday life, or prevent cognitive slowing and brain disease.” While the FTC is going after companies targeting children first, Lumosity and other brain-game groups will also have to stop hiding behind false science.
" |
17,039 | 2,018 | "How Smart Brain Aging hopes to hold off dementia with brain training games | VentureBeat" | "https://venturebeat.com/2018/09/22/how-smart-brain-aging-hopes-to-hold-off-dementia-with-brain-training-games" | "How Smart Brain Aging hopes to hold off dementia with brain training games.
My 85-year-old mother has dementia, a memory loss disease that affects about 60 million people in the U.S. And so I can relate to the challenges of the disease that John DenBoer has dedicated his life to fighting. He has started a company called Smart Brain Aging that has created brain-training exercises that he hopes can hold off the onset of dementia, which has seven different stages.
DenBoer’s own grandmother suffered from dementia when he was going to medical school, and it became his obsession to treat it. He became a clinical neuropsychologist and researcher in hopes of treating dementia. And his research shows that cognitive and functional impairment in early stage dementia can be slowed by the right kind of cognitive exercise. His company has created a cognitive training program, an iOS app called Brain U Online, that prevents memory loss and mitigates functional decline.
The company said it can delay early stage dementia by as much as 2.5 years, while reducing cognitive impact by up to 45 percent. Brain U Online is available through an online subscription for people 50 and older, and Smart Brain Aging is working with national hospital chains including Dignity Health and Innovative Care Partners in hopes of getting the game in front of a million patients in a very short period of time.
In a TEDx Talk last year, DenBoer said, “That to me is the worst thing about this disease. It kills us before we die.” The disease, where the brain shrinks faster than it normally does, can develop in your brain six to eight years before you show any symptoms, DenBoer said. The treatment is to give our brains something new and novel, which releases a chemical (glutamate) in your brain and prevents your brain from engaging in that accelerated shrinkage.
“This is a simple lifestyle change that you can make,” he said.
Everybody knows someone who has had dementia or Alzheimer’s disease, DenBoer said, because they have become so pervasive as our bodies outlive our brains. He describes it as a “pandemic level disease worldwide.” DenBoer is the subject of an upcoming Netflix documentary, This is Dementia, and a book as well. DenBoer presented on this topic last week at the Alchemist Demo Day at SRI International in Menlo Park, California. And I interviewed him on the phone.
The market for products and services for seniors is expected to surge past $436.6 billion this year as more than 75 million baby boomers continue to age and live longer. It is estimated that 28 million boomers will develop Alzheimer’s by 2050, and dementia is a symptom of the disease. The need for services to help manage and mitigate symptoms will be crucial and will drive continued growth.
Here’s an edited transcript of our interview. This story, by the way, is one in a series of stories in which I’ll talk about my elderly mother, and technology that might help her.
Above: John DenBoer, CEO of Smart Brain Aging, hopes that brain training games can help treat dementia.
VentureBeat: My 85-year-old mother has dementia, and so I am interested in what your company is doing.
John DenBoer: Oh my. I’m sorry to hear that. We strongly feel it is important work we are doing. I’m sorry to hear that.
VentureBeat: Thank you. I wondered how you got the idea of doing some kind of puzzle- or game-like training for people.
DenBoer: I started this because of my grandmother. My grandmother raised me for a good period of my life, and she was extremely important to me. She developed dementia in her late eighties. We lost her as the person she was before she physically died. It’s one of the most horrible experiences that someone can go through, losing their personhood.
I was at Harvard University at the time, and it galvanized me to change my career path a bit and take on my mission to help develop anything I can to help prevent or mitigate dementia as a disease. That was in 2004, 2005. It’s taken off since then. But that was the foundational personal experience that led me to try to do everything I can to help get in the way of dementia.
VentureBeat: When did you start figuring out this particular path — that dementia in some way could be prevented or held off or treated? DenBoer: We were researching this at Harvard Medical School and Boston University School of Medicine between 2004 and 2007. At that time we were involved in a randomized clinical trial looking at the effects of cognitive intervention, essentially brain activity, on mitigating and helping to prevent or delay the onset of dementia. There were some good results that came out of that trial, and I decided to switch my career — at that point I was an intern — and became a resident down in Arizona at a place called Barrow Neurological Institute.
I continued to pursue it from there. I worked with Medicare for a couple of years to make the research we were working on publicly available, and then convinced them, over the course of about three years, to change their regulations regarding reimbursement for this. Once they did, we essentially had a bit of a small business model, a research lab that eventually became a business.
VentureBeat: Can you describe where you’re at now? DenBoer: Right now we’ve been working on — we’re a Delaware C corp. We’ve been incorporated about three years now. We’ve been developing two product lines. One is an online platform where we offer dementia-related brain activity, essentially brain games, but really more like digital therapy. It’s called Brain U Online. We have about 10,000 users, but we’re developing an initiative to scale to 250,000 users in the next month. We also have a Medicare-reimbursable clinic-based product that we work with in assisted living and independent living arrangements.
As a company, we’re serving about 5,000 individuals, or about 15,000 if you count the online platform. We’re doing $3-4 million in revenue this year as a startup. We’re affiliated with Alchemist Accelerator, and we’re looking to raise a Series A round. We’re pitching a demo day on Thursday.
Above: Brain U Online trains your brain to reduce dementia.
VentureBeat: I know Lumosity was fined by the FDA for their advertising around brain health games.
How do you distinguish from something like that? What have you found that really gets results as far as brain activities? DenBoer: Let me answer the second question first. What works is doing new and novel exercise, doing things people have never done before. What doesn’t work is doing repetitive games or teaching people to the test. One thing Lumosity was criticized for, besides their advertising, was that they were teaching people to a test, to the exams they were giving. People were improving their test scores, but they weren’t discernably improving, because they were being taught how to do the tests they were given.
One thing we do that’s different is we offer new and novel exercises, things people have never seen before. That helps release a chemical in the brain called glutamate, and it helps prevent the brain from shrinking at an overall progressive rate, like in advanced cortical atrophy. Advanced cortical atrophy is a hallmark of all forms of dementia. In that way, we mitigate or help prevent Alzheimer’s disease and other forms of dementia.
Lumosity, just to address the first part of the question, did something really good for our field. They opened up a large amount of people to brain games, to the use of brain games as a potential digital therapy. The only problem is that they did so without any research backing, and in a disingenuous way. They were correctly and accurately criticized and sued by the FDA in 2013 for false advertising in this particular area.
We have the research backing. We’re a research lab that became a company, rather than the other way around, a company that tries to do research. We pride ourselves on our research backing. The research engagements we have include Harvard, UCLA, and some local engagements here in Phoenix. We do things differently because we have research-based exercises, but we also do things differently because we introduce new and novel learnings to individuals.
VentureBeat: What are some of the results you’ve seen so far? DenBoer: We can mitigate the disease, lessen the extent of it, by 40 to 50 percent. We can reduce the onset of disease, push out the onset of disease by 2.25 years. If somebody has dementia up through about stage two, we’re able to reduce the intensity by 40 to 50 percent. If somebody is aging, we’re able to push out the disease by as much as 2.25 years.
Above: John DenBoer gave a TEDx Talk on disrupting dementia. He said dementia takes your life before you are dead.
VentureBeat: How much activity do the subjects have to do every day or every week? DenBoer: They do about one to two hours of activity per week. On our online platform, it syncs their calendar and schedules it for them, so they do one to two hours of training per week. They can do it in different spurts, like 15- to 30-minute increments. But they do about one to two hours per week altogether.
VentureBeat: Is the idea essentially that if you’re using your mind, it’s not going to decay over time? DenBoer: Yes, absolutely, but you have to use it in the right way. One of the major things Lumosity did wrong is they perpetuated this myth that just using your mind is enough. Using your mind is good, but it’s really not that much better than not using it, if you’re just doing the same things all the time.
An example would be, my grandmother got through stage three dementia. She would do the New York Times crossword puzzle up through stage two or stage three, even though she didn’t know the names of her family members. People can do things and do them well, things they know how to do, up through stage two or stage three dementia. This is a very common phenomenon, and it’s very misleading to both doctors and people interested in caregiving.
We introduce new and novel things, things people have never seen or done before. In doing so, that’s helpful and assistive in releasing this chemical, glutamate, which helps engage in this cascade of neurochemical reactions that helps prevent the brain from atrophying.
VentureBeat: What are some of the games like, if you were trying to describe this to someone who isn’t familiar? DenBoer: We think of what we do as a digital therapy. It’s more like a drug that people are prescribed. On the surface it’s a brain game, but in Phoenix we have physicians in health care organizations prescribing this as a therapy for people in their fifties, sixties, and even their early seventies, people who want to prevent the onset of disease. We think of ourselves more as a prescriptive digital therapy than a brain game per se. We don’t lump ourselves into the brain game market.
Above: One simple test that can challenge your brain.
VentureBeat: What are the different stages of dementia like? I think you said that this can primarily apply in the first couple of stages? DenBoer: The stages of dementia are stages one through seven. Stage seven is absolutely the worst. Stage one is just experiencing some very minor problems that can be almost imperceptible. We work through stage three. We give someone an accurate assessment of where they’re at as far as stage, and we give them customized brain exercises they can do up through about stage two or three that would be helpful to prevent their dementia from getting worse.
These are called the Alzheimer’s criteria. Those stages can be found very easily on websites like the Alzheimer’s Association. It’s a very common structure that people use.
VentureBeat: At some point, though, is there some line where you find that even this can’t help people? DenBoer: Past stage three, nothing can really assist people. It’s more of a comfort care situation. At that point nothing can be significantly helpful, unfortunately.
VentureBeat: How do you get it started, and how do you charge for it? DenBoer: Right now we have two ways of charging for it. We have a subscription model, where people can log on to our website, BrainUOnline.com, and we have fee-for-service, subscription type of thing where people pay $19.99 a month if they pay for a year up front, or $29.99 per month if they’re paying month-to-month. We do offer a free 14-day trial for people that are interested in trying out a limited version of the program.
We also are developing the first ever Medicare reimbursable form of online exercise. We’re working with Medicare to make sure it can be Medicare reimbursable as well. That’s an initiative we’re undertaking. We’re working with a couple of hospitals here in Phoenix to expand the program to more than 250,000 individuals.
VentureBeat: How much material do you have for people to go through? Could it last for years? How do you generate that? DenBoer: We have material for years. We generate it out of a pool of exercises we developed at Harvard. That pool has become somewhat limited at this point, so we’ve been developing more new and novel exercises. We have about three and a half years of exercises at this point to keep people in exercises as we’re doing them.
We also have a social platform where people can interact in classes. The whole format is called Brain U Online, as I say. People can interact in classrooms where they’re grouped together via a personality algorithm. They answer some personality questions and they’re grouped together with like-minded individuals that share similar interests and have a similar cognitive profile. They do the exercises online, but they can’t progress to the next unit of exercises without their whole class getting through it. There’s a social component, a social incentive for them to complete their exercises as well.
Above: Can you solve these brain problems?
VentureBeat: How many people are working for you now? Can you tell me more about the money you’re raising? DenBoer: We have 45 people that work for us at this point. Sal Kohgadai is our chief technology officer. He’s up in the bay today, and I’ll be there tomorrow. We’re raising our first Series A, an $8 million round. We have, right now, about $6 million committed of that round. We’re looking for some additional partners, syndicate investors, to come in and round out the last $2 million. We’ve been fortunate, and we’re excited to try to close that last $2 million, close the round by the end of the month.
Our big initiative is to scale our product from 10,000 users up to 250,000 in a month. That’s our big plan. The company is currently valued at $18 million as far as pre-money valuation. It’d be great to see people attending the demo day to see all the great companies Alchemist has to offer.
VentureBeat: As far as the help you have right now to get to that larger scale, do you have any partnerships working on that? DenBoer: We have substantive partnerships. One is with Dignity Health. Another is with Honor Health. Honor Health is here locally in Phoenix. Then we have a couple of other major partnerships that we’re developing with Banner Health and some other folks.
We’re also engaging in partnerships with accountable care organizations. Accountable care organizations are responsible for fiscal management of larger hospital chains. Our product is something that saves companies money significantly, and we’re looking forward to having not only improved clinical efficacy for patients, but also fiscal assistance to these ACOs that are looking to save money by providing excellent health care and innovative services to their patients.
This whole market of digital therapy is a new and emerging one. There was an excellent article in the New York Times a few months ago about it. We’re differentiating ourselves from brain games like Lumosity. It underscores the fact that applications and digital mechanisms can be used on a prescriptive basis in medicine. We’re excited to be considered in the class of digital therapy, not just in the old class of brain games.
VentureBeat: I’m starting to see insurance companies take the lead on a variety of these fronts, doing more of these kinds of preventive tasks.
DenBoer: Right. They’re starting to pay for this stuff, because they see the value in it, in many forms. With diabetes care, chronic care management, and so on, so many people are using apps and other forms of digital health care. Prescribing and paying for these things is a new evolution that’s happening. It’s an interesting area of health care that’s innovative and dynamic, and in the end very useful to patients.
" |
17,040 | 2,018 | "Microsoft acquires app-provisioning startup FSLogix | VentureBeat" | "https://venturebeat.com/2018/11/19/microsoft-acquires-app-provisioning-startup-fslogix" | "Microsoft acquires app-provisioning startup FSLogix.
Another day, another profitable exit for a well-regarded startup. Microsoft today announced that it has acquired Atlanta, Georgia-based FSLogix, the company behind the eponymous award-winning FSLogix app provisioning platform, for an undisclosed sum.
In a blog post, Brad Anderson, corporate vice president at Office 365, and Julia White, corporate vice president at Microsoft Azure, wrote that FSLogix’s technology would enable faster load times for user profiles in Outlook and OneDrive, leading to improved overall Office 365 performance in “multi-user virtual environments.” “The way Microsoft 365 enables customers to shift to a modern desktop experience puts it at the heart of workplace transformation,” Anderson and White wrote. “From small businesses to very large global enterprises across numerous industries, FSLogix solutions enhance customer experience and productivity, while reducing support requirements for IT departments.” FSLogix had raised $10.3 million in funding prior to the acquisition, according to CrunchBase.
Its solutions suite, which is compatible with a swath of cloud vendors including Amazon, VMware, Citrix, and Red Hat, targeted customers with 1,000 to 50,000 users.
“When we launched FSLogix in 2012, our goal was to build software that helped customers reduce the amount of resources, time, and labor required to support virtual desktops,” FSLogix cofounder and CTO Randy Cook wrote in a statement. “Our first two products, FSLogix Apps and FSLogix Profile Container, focused on addressing critical needs that have existed from the dawn of desktop virtualization [and our] most recent product, Office 365 Container, is designed to enhance the Microsoft Office 365 experience in those virtual desktop environments … Although it’s still business as usual, FSLogix will soon integrate with Microsoft and join the strength of its enterprise productivity solutions and global reach.” FSLogix Apps, one of its flagship solutions, is a software agent that enables virtual desktop administrators to manage per-user applications by presenting only the apps, add-ins, fonts, printers, and folders they’re allowed to see by organization- or administrator-defined policy. FSLogix Profile Container and Office 365 Container, meanwhile, store profile data locally as it’s being used in a “Cloud Cache” that sits between the user’s desktop and remote container storage, reducing network and file server load. (The Cloud Cache — which supports Azure Page Blobs and Premium Page Blobs — can also be configured to store containers in more than one location at the same time, on-premises or in the cloud.) The FSLogix acquisition dovetails with Microsoft’s recently announced Windows Virtual Desktop, a cloud-based service that offers a Windows 10 experience “optimized” for Office 365 ProPlus and includes free Windows 7 Extended Security Updates.
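To give a concrete sense of how Profile Container and Cloud Cache are typically switched on, here is a purely illustrative sketch (not an official Microsoft or FSLogix script) that writes the relevant registry values with Python's standard winreg module on a Windows session host. The key path, value names, and share paths are assumptions drawn from FSLogix's public documentation as I recall it; verify them against the current docs before use.
# Illustrative only: enable FSLogix Profile Container via the registry.
# Requires Windows, Python 3, and administrator rights; value names are
# assumptions based on FSLogix's public documentation -- verify before use.
import winreg
PROFILES_KEY = r"SOFTWARE\FSLogix\Profiles"  # assumed Profile Container key
settings = {
    # Turn Profile Container on.
    "Enabled": (winreg.REG_DWORD, 1),
    # Plain SMB location for the per-user profile containers (placeholder path).
    "VHDLocations": (winreg.REG_SZ, r"\\fileserver\fslogix-profiles"),
    # For Cloud Cache you would instead set CCDLocations with one entry per
    # provider (an SMB share plus Azure page blob storage, for example), which
    # is what lets a container live in more than one location at once.
    # "CCDLocations": (winreg.REG_SZ, r"type=smb,connectionString=\\fileserver\fslogix-profiles"),
}
with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, PROFILES_KEY, 0,
                        winreg.KEY_SET_VALUE) as key:
    for name, (value_type, value) in settings.items():
        winreg.SetValueEx(key, name, 0, value_type, value)
        print("Set HKLM\\" + PROFILES_KEY + "\\" + name)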
“Through customer engagement, we know that Microsoft Office applications are some of the most highly used and most commonly virtualized applications in any business,” Anderson and White wrote. “We are excited to welcome FSLogix to Microsoft, and we look forward to the impact its technology and its people will have on our customers’ virtualization experience.” It wasn’t immediately clear how FSLogix’s workforce — which is spread across its headquarters and satellite offices in Salt Lake City, Denver, Boston, the Netherlands, and London — would be impacted by today’s news. We’ve reached out for comment and will update this story when we hear back.
" |
17,041 | 2,019 | "Microsoft launches previews of Windows Virtual Desktop and Defender ATP for Mac | VentureBeat" | "https://venturebeat.com/2019/03/21/microsoft-launches-previews-of-windows-virtual-desktop-and-defender-atp-for-mac" | "Microsoft launches previews of Windows Virtual Desktop and Defender ATP for Mac.
Microsoft today made a slew of announcements to help IT pros reduce costs, increase security, and boost employee productivity. The headline items are the launch of Windows Virtual Desktop in public preview and Microsoft Defender Advanced Threat Protection (ATP) for macOS in limited preview. But there are also updates around Office 365 ProPlus, Windows 10, Configuration Manager, Intune, and Microsoft 365.
But first, let’s tackle the big ones.
Windows Virtual Desktop
Microsoft announced Windows Virtual Desktop in September, but only made it available as part of a private preview. Now in public preview, the Azure-based service provides a virtualized multi-session Windows 10 experience and Office 365 ProPlus virtual desktop on any device. It supports Remote Desktop Services (RDS) desktops and apps in a shared public cloud and will even include free Windows 7 Extended Security Updates (ESU) until January 2023. Windows 7 will hit end of support on January 14, 2020, so Microsoft is strategically offering the almost decade-old operating system via Windows Virtual Desktop.
In November, Microsoft acquired app-provisioning startup FSLogix.
That platform’s strength was reducing the resources, time, and labor required to support desktop and app virtualization. FSLogix technologies have now been put to work in Windows Virtual Desktop to enable faster load times for non-persistent users accessing Outlook or OneDrive, plus support for client and server RDS deployments.
Think of Windows Virtual Desktop as a tool for deploying and scaling Windows desktops and apps on Azure with built-in security and compliance. To deploy and manage your virtualization environment, you’ll need an Azure subscription — you can optimize costs by taking advantage of Reserved Instances (up to 72 percent discount) and by using multi-session Windows 10. You will not be charged more for accessing Windows 10 and Windows 7 desktops and apps if you have Microsoft 365 F1/E3/E5, Windows 10 Enterprise E3/E5, or Windows VDA. You will also not be charged more for using Windows Server desktops and apps if you’re a Microsoft RDS Client Access License customer.
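To make the licensing rule in the preceding paragraph concrete, here is a tiny, purely illustrative Python check; the license names are simply strings transcribed from this announcement, not an official Microsoft API or eligibility tool.
# Illustrative sketch of the Windows Virtual Desktop licensing rule described
# above. License names are transcribed from the announcement; Azure compute
# and storage for the deployment are still billed separately.
WVD_INCLUDED_LICENSES = {
    "Microsoft 365 F1", "Microsoft 365 E3", "Microsoft 365 E5",
    "Windows 10 Enterprise E3", "Windows 10 Enterprise E5", "Windows VDA",
}
def wvd_access_included(user_licenses):
    """Return True if any held license already covers Windows 10/7 desktop
    and app access in Windows Virtual Desktop."""
    return bool(WVD_INCLUDED_LICENSES.intersection(user_licenses))
print(wvd_access_included({"Windows 10 Enterprise E3"}))  # True
print(wvd_access_included({"Office 365 E1"}))             # False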
Microsoft is slating Windows Virtual Desktop general availability for the second half of this year.
Microsoft Defender
If you did a double-take here because you thought it was called Windows Defender, you’re not wrong. Microsoft is extending its endpoint protection platform to additional operating systems, starting with macOS. And so, with the release of Microsoft Defender ATP for Mac in limited preview, Windows Defender is now Microsoft Defender.
Microsoft Defender ATP gives macOS users “next-gen” antivirus protection, but Microsoft is also promising to add Endpoint Detection and Response, and Threat and Vulnerability Management (TVM) capabilities in public preview next month. TVM is designed to help security teams discover, prioritize, and remediate known vulnerabilities and misconfigurations exploited by hackers. Specifically, TVM promises:
Real-time detection insights correlated with endpoint vulnerabilities
Machine vulnerability context during incident investigations
Built-in remediation processes through integration with Microsoft Intune and Microsoft System Center Configuration Manager
Microsoft’s security pitch is for a “comprehensive” platform with “built-in sensors, cloud scalability, unparalleled optics, AI & machine learning-based protection to enhance the power of defenders, and the seamless integration with Microsoft’s identity and information protection solutions.” Now the company can add cross-platform to that list.
Office 365 ProPlus and Windows 10
Microsoft also shared today that new installs of Office 365 ProPlus will include the Microsoft Teams app by default and that the default installation for ProPlus will now be 64-bit. The former comes just days after the company announced that Teams is now used by 500,000 organizations.
As for the latter, those already on 32-bit installs will soon be offered an in-place upgrade to 64-bit that doesn’t require uninstalling and reinstalling.
Separately, Microsoft shared that since Windows 10 Creators Update (version 1703), it has seen a 20 percent reduction in operating system and driver stability issues. Starting with Windows 10 Fall Creators Update (version 1709), the company says devices are updating up to 63 percent faster.
Configuration Manager, Intune, and Microsoft 365 Configuration Manager and Intune are getting new insights and deployment options to help manage your devices across platforms.
More specifically, Configuration Manager branch 1902 arrives this week with the following: New Office analytics: Native integration with the Office Readiness Toolkit provides insights that help organizations with the end-to-end readiness, deployment, and status tracking of Office 365 ProPlus.
Updates to CMPivot for real-time queries: CMPivot investigates the whole device estate using pre-built queries. You can now access CMPivot from the Configuration Manager Central Admin Site.
New management and client health visibility: Improved management insights to prepare for co-management, new rules for optimizing and simplifying collections and packages, and a dashboard with detailed breakdowns of device status.
New deployment options will also be available, including phased deployments, configuring known-folder mapping to OneDrive, and Configuration Manager integration with the Office Customization Tool.
Intune has meanwhile received Mobile Device Management (MDM) Security Baselines in preview. Microsoft considers these recommended configuration settings that improve your security posture, increase operational efficiency, and reduce costs.
And finally, the new Microsoft 365 Admin Center is now generally available. Going forward, admin.microsoft.com is your single entry point for managing your Microsoft 365 services. It includes guided setup experiences, improved groups management, and multi-factor authentication for admins.
" |
17,042 | 2,019 | "Net Applications: Windows 10 passes 50% market share, Windows 7 falls to 30% | VentureBeat" | "https://venturebeat.com/2019/09/01/net-applications-windows-10-windows-7-market-share" | "Net Applications: Windows 10 passes 50% market share, Windows 7 falls to 30%
More than four years after its release, Windows 10 has passed 50% in market share. This means every other desktop computer is now running Microsoft’s latest and greatest operating system, according to Net Applications.
In January, Windows 10 passed Windows 7 in market share.
These are nice milestones for Microsoft to finally hit, even though they took longer than the company had hoped.
Windows 10 adoption started out very strong but naturally slowed as the months progressed. Microsoft was aiming for 1 billion devices running Windows 10 in two to three years but backpedaled on that goal.
The operating system was installed on over 75 million PCs in its first four weeks and passed 110 million devices after 10 weeks. Growth was fairly steady afterwards: 200 million in under six months, 270 million after eight months, 300 million after nine months, 350 million after 11 months, and 400 million after 14 months. Growth naturally tapered, though: 500 million after 21 months, 600 million after 28 months, 700 million after 38 months, and 800 million after 44 months.
Windows breakdown Windows 10 had 48.86% market share in July and gained 2.13 percentage points to hit 50.99% in August. Growth has been slow ever since the Windows 10 free upgrade expired in July 2016.
Windows 8 stayed flat at 0.63%, while Windows 8.1 lost 0.91 points to 4.20%. Together, they owned 4.83% of the market at the end of August. The duo’s peak was 16.45% back in May 2015.
Windows 7 dropped 1.49 percentage points, falling from 31.83% to 30.34%. (Windows 7 overtook Windows XP way back in September 2012 and passed the 60% market-share mark in June 2015.) Microsoft is ending support for Windows 7 on January 14, 2020. Hundreds of millions of users have just four months to get off Windows 7 — while there will be paid security updates, that’s only for corporate clients that can afford it.
At the bottom end, Windows Vista stayed flat at 0.15% (it fell below 1% market share at the start of 2017, the month of its 10-year anniversary). Windows XP slipped 0.11 points to 1.57%.
Market share breakdown Windows overall slipped 0.56 percentage points to 87.89% in August. macOS gained 0.70 points to 9.68% while Linux slipped 0.38 points to 1.72%.
Net Applications uses data captured from 160 million unique visitors each month by monitoring some 40,000 websites for its clients. This means it measures user market share.
If you prefer usage market share, you’ll want to get your data from StatCounter, which looks at 15 billion page views every month. The operating system figures for August are available here.
" |
17,043 | 2,019 | "Instagram introduces new shopping, donation, and camera features | VentureBeat" | "https://venturebeat.com/2019/04/30/instagram-introduces-new-shopping-donation-and-camera-features" | "Instagram introduces new shopping, donation, and camera features
Today marked the kickoff of Facebook’s annual F8 developer conference, which was predictably full of announcements. In a keynote this morning at the McEnery Convention Center, Instagram CEO Adam Mosseri shared that Instagram, the company’s popular photo- and video-sharing app with more than 1 billion users, is getting three major enhancements: curated product collections, donation stickers, and a revamped camera interface.
“People come to Instagram to be with their close friends. They stay to be inspired by art, fashion, sports, and entertainment — as well as the people behind those crafts,” Instagram wrote in a blog post. “Enabling expression and fostering those connections are at the heart of Instagram, and today we’re announcing new ways to strengthen those connections with the people and things you love.” Shopping Above: Product tags in Instagram.
Remember Checkout? It’s the feature Instagram debuted earlier this year that lets businesses sell goods directly to U.S. users, who can check out and pay via Mastercard, Visa, American Express, Discover, or PayPal within the app. Starting next week, Checkout is gaining a tagging tool that will let users shop looks from brands and creators like Vogue, Hypebeast, GQ, Refinery29, Elle, Kim Kardashian, Kimberly Drew, Kylie Jenner, Chiara Ferragni, Camila Coelho, and Huda Kattan.
Here’s how it works: Over the coming weeks, a “small group” of creators will be able to tag products from businesses participating in the Checkout beta. Followers will be able to quickly view these items with the option to buy, while tagged creators and brands will receive insights within Instagram to help track the performance of shopping posts.
Product tags, which Instagram began testing in March, will no doubt boost the company’s bottom line. Instagram last revealed that more than 130 million users were tapping product tags in shopping posts every month, up from 90 million in September.
Donation sticker in Stories Above: Donation stickers in Instagram Stories.
Instagram is not just making it easier to buy things — it’s also introducing a new way to solicit donations for nonprofits. As previously announced by Facebook director of project management for social good Emily Dalton Smith, users will be able to raise money through Instagram Stories by launching the camera, snapping a picture, tapping the sticker icon, and selecting the new donation sticker from the tray.
Once the stickered Story is live, swiping up on it will reveal the total amount raised, 100% of which goes to the registered nonprofit. The many participating organizations include Black Girls Code, JED Foundation, No Kid Hungry, Boys and Girls Clubs of America, ASPCA, Malala Fund, GLAAD, and the Nature Conservancy.
“The donation sticker will help us create two-way conversations with current and new supporters that will elevate awareness for the Boys & Girls Club brand in a whole new way,” said Boys and Girls Clubs of America senior vice president Karl Kaiser in a statement. “Instagram is one of our fastest-growing channels, and it’s critical for our mission to have the ability on this platform to inspire advocacy around issues that impact kids and teens everywhere.” Camera and Create Mode Above: Instagram’s new camera UI and Create Mode.
If you are tired of the Instagram app’s long-in-the-tooth camera UI, good news: It’ll soon be replaced with a new one. Instagram says that in a few weeks it will launch a “fresh look” with a slick, semicircular mode switcher feature — Create Mode — that will let users share photo- and video-free Stories and posts with Quiz stickers, GIFs, and other content. Another noticeable change: Modes like Superzoom and AR Effects have been relegated to a new Normal tab.
“This new camera will make it simpler to use popular creative tools like effects and interactive stickers, so you can express … what [you’re] doing, thinking, or feeling” more freely, wrote Instagram in a blog post, “especially for those moments in-between when there isn’t photo or video to share.” App researcher Jane Manchun Wong, who spotted the new UI in March, noted that it looked a bit like a DSLR mechanical circular switcher.
" |
17,044 | 2,018 | "HP launches new 15-inch Spectre x360 2-in-1 convertible laptop | VentureBeat" | "https://venturebeat.com/2018/01/08/hp-launches-15-inch-spectre-x360-2-in-1-convertible-laptop" | "HP launches new 15-inch Spectre x360 2-in-1 convertible laptop HP Spectre 360.
HP is wasting no time taking advantage of the détente between Intel and Advanced Micro Devices (AMD), which have gotten together to package a processor and a graphics chip in a much smarter way in laptops.
The Palo Alto, California-based computer giant is unveiling the HP Spectre x360 convertible laptop, which has a 15.6-inch 4K (3840 x 2160) display with better battery life than past machines at 12 hours. HP is showing off the machine at the 2018 Consumer Electronics Show (CES), the big tech trade show in Las Vegas this week.
Above: Intel and AMD are teaming up.
HP is using Intel’s 8th Gen Core processors in the laptop, together in a special package for the first time with an AMD Radeon RX Vega M graphics chip. The two chips sit on the same package and are connected via eight lanes of PCI Express 3.0, while Intel’s Embedded Multi-Die Interconnect Bridge (EMIB) links the graphics chip to its memory. It also uses High Bandwidth Memory 2 (HBM2), a more power-efficient stacked memory technology that AMD pioneered and that has since become an industry standard.
The innovative solution results in a savings of several square inches of board space in a computer, as well as better battery life and higher performance. HP was able to add more cooling options to the machine, allowing it to have better power efficiency.
Above: HP’s new 15.6-inch Spectre x360 convertible laptop.
The main processor is an 8th Generation Intel Core i7-8550U. The laptop has 512 gigabytes of internal storage and 16GB of main memory, and it weighs 4.6 pounds. About three years ago, such a system would have weighed about 6 pounds.
It has a hinge that allows it to convert into different modes, such as a tablet mode or a presentation mode. It goes from zero to a 50 percent charge in just 45 minutes. The 4K IPS display has 8.2 million pixels with a 178-degree viewing angle. It also has Amazon Alexa voice service, a digital pen, and USB-C Thunderbolt connectivity.
HP also has another version of the Spectre x360 with an Nvidia graphics chip.
" |
17,045 | 2,019 | "Talespin’s virtual human platform uses VR and AI to teach employees soft skills | VentureBeat" | "https://venturebeat.com/2019/02/28/talespins-virtual-human-platform-uses-vr-and-ai-to-teach-employees-soft-skills" | "Talespin’s virtual human platform uses VR and AI to teach employees soft skills Talespin’s ‘virtual human,’ Barry.
Training employees how to perform specific tasks isn’t difficult, but building their soft skills — their interactions with management, fellow employees, and customers — can be more challenging, particularly if there aren’t people around to practice with. Virtual reality training company Talespin announced today that it is leveraging AI to tackle that challenge, using a new “virtual human platform” to create realistic simulations for employee training purposes.
Unlike traditional employee training, which might consist of passively watching a video or lightly interacting with collections of canned multiple choice questions, Talespin’s system has a trainee interact with a virtual human powered by AI, speech recognition, and natural language processing. Because the interactions use VR headsets and controllers, the hardware can track a trainee’s gaze, body movement, and facial expressions during the session.
Talespin’s virtual character is able to converse realistically, guiding trainees through branching narratives using natural mannerisms and believable speech. Visualized as an emotive person rather than a list of steps to memorize, the AI’s adaptive interactive training can teach interpersonal skills including proper communication, conflict resolution, and negotiation.
In a basic training scenario, the virtual human might do something as simple as demonstrating the correct way to act in a situation — effectively, “follow what I’m doing.” To that end, Talespin previously created a VR simulation for Farmers Insurance where a trainee learns to act like a claims adjuster inspecting water damage.
But AI enables the simulation to go broader and deeper. A demo of the new platform places the trainee in the role of an HR employee who must terminate a fellow employee — a challenge for almost any untrained person. Rather than just working through a rote set of procedures, Talespin presents an interactive virtual human, complete with the heavy emotions and stress normally found in that situation. In addition to following a branched narrative, the scenario teaches users how to avoid common wrongful termination issues.
Businesses interested in training using the virtual human platform can find more information at Talespin’s site.
The company’s immersive and extended reality solutions are custom-built for enterprise customers, with prices determined on a per-project basis.
" |
17,046 | 2,018 | "How ILMxLAB and Oculus will bring Vader Immortal to life in virtual reality | VentureBeat" | "https://venturebeat.com/2018/09/28/how-ilmxlab-and-oculus-will-bring-vader-immortal-to-life-in-virtual-reality" | "How ILMxLAB and Oculus will bring Vader Immortal to life in virtual reality
Disney’s ILMxLAB and Facebook’s Oculus VR announced this week that the Oculus Quest wireless virtual reality headset will debut in the spring of 2019 with a new original Star Wars VR story dubbed Vader Immortal: A Star Wars VR Series.
The original VR series will have three parts and is part of the same effort that led to The Void’s Star Wars: Secrets of the Empire location-based VR experience. David S. Goyer, a Hollywood screenwriter on everything from The Dark Knight trilogy to Call of Duty: Black Ops, is the executive producer and writer of the Vader experience. He is working with Mohen Leo, ILMxLAB director of immersive content, and Colum Slevin, head of experiences at Oculus, to bring Darth Vader to life.
While earlier efforts to create VR Jedi experiences were cool demos, Goyer said this experience will be the “gold standard,” where you’ll be able to wield a lightsaber in a 360-degree, wireless VR experience. I interviewed them this week at the Oculus Connect 5 event in San Jose, California. Lucasfilm established ILMxLAB in 2015 to create immersive storytelling powered by real-time computer graphics. The Oculus Quest VR headset is a standalone device and it will go on sale for $399.
Here’s an edited transcript of our interview.
Above: Left to right: David Goyer, Colum Slevin, and Mohen Leo.
GamesBeat: I didn’t realize that Secrets of the Empire had so much work behind it until now. How long ago would you say you got started on that? David Goyer: I think Lucasfilm first contacted me almost three years ago. At the time they knew they wanted to do some form of VR narrative involving Vader. That was about the extent of it. It started a very long learning process. At a certain point, once we landed on what it was going to be, the opportunity to do Secrets of the Empire came up.
Even though Secrets came up first, the more contained experience, we actually started work on the Vader project earlier. There was an opportunity to tease some of the elements of this in Secrets of the Empire. The locations are obvious. Some will become apparent once this comes out. Maybe midway through that, that’s when we became aware of Oculus Quest. Conversations started happening with Oculus.
GamesBeat: Even before you got in touch with Oculus, was it always VR? Goyer: Yeah, it was always VR. It wasn’t AR or MR. But we hadn’t decided what platform we were going to use. Once we started seeing demos, we decided to create that partnership. That even started affecting some of the storytelling.
Leo: When this project started, I was actually still working on the visual effects side. I was in London working on Rogue One. While we were working on the movie, we started getting phone calls. “Can you scan these sets? We might want to do something in VR with it.” We were already scanning them, so no problem. But it’s been in the works for a long time.
Above: Star Wars: Secrets of the Empire. Dean Takahashi (right) went through it with two strangers.
GamesBeat: What are the challenges of writing for VR? You’re not always certain of things like where the viewer is looking when someone’s supposed to be speaking to them.
Goyer: Yeah, there are lots of different challenges. When Lucasfilm first approached me, I’d been involved in film and TV, but I’d also worked on some of the Call of Duty games, which I guess you’d consider a cousin of VR, something like that. I had to re-learn, un-learn a lot of things, a lot of assumptions. We started off with a full script and then realized, as soon as we did the very first test, that we had to throw a lot of assumptions out the window.
It took us a long time, but ultimately we evolved this sort of iterative process where I would write an outline. Mohen and his team gray-boxed things in R&D, different interactive experiences. Then I would go back and rewrite and suggest some things. We’d just go back and forth. I wish I’d known then what we know now. [laughs] We would have saved some time.
Leo: One thing we learned throughout the process is how much we have to make the experience about you as the user, the person in the experience. What we found early on is that if you have two other characters talking to each other, pretty quickly – much faster than in the movies – people start zoning out. “They’re just ignoring me.” Goyer: Because you can literally just go walk somewhere else.
Leo: But as soon as a character turns to you and speaks to you? You suddenly have the attention. You feel like you’re part of the story. The characters acknowledge you. That became a big part of the design, to try to make sure that every scene puts you in the center.
Goyer: When we first approached it, two and a half years ago, three years ago, there was a thought that in some ways it might be a VR version of immersive theater, where you’re watching a play and you have freedom to move around and observe it from different points of view. As Mohen said, we quickly realized we weren’t capitalizing on this enormous opportunity.
But then the flip side of that is, when someone like Vader is paying attention to you, almost everything else fades away. You don’t hear anyone else talking because you’re so intimidated. We did a lot of tests with that. We did an initial test where Vader comes out and talks to you. That informed a lot of what we did.
GamesBeat: I assume he’s even more intimidating in VR.
Goyer: Way more. I remember the first test I did. It’s so different from watching him on film.
Above: Star Wars: Secrets of the Empire is a location-based VR experience.
GamesBeat: It seems like this is a moving target. I did Secrets of the Empire in January, down in Anaheim. Certain things about the experience are still clunky – the wooden gun, the heavy backpack. Although it was interesting that they made putting on the backpack part of the experience.
Goyer: We’re still in the early, early days of this medium. I liken it, in CG films, to when Pixar first released that short film, Luxo Junior. It’s so early. There’s a big difference between Secrets, which is location-based, and something you can experience in your home. This is also a cooperative experience. But I imagine all of that, the backpacks and whatnot, that will massively shrink very soon.
Leo: The second we got our hands on Quest and tried it out—you immediately understand that it changes the way you experience VR. One of the things about this experience, you do get to wield a lightsaber. Just knowing that you don’t have to worry about, as you turn around, wrapping yourself up in a cable and tripping over, that changes things. It allows you to dive deeper into the experience. When you have a cable coming out of the back of your head, part of your brain is always thinking about that. Having the freedom to not think about that is fantastic.
GamesBeat: Do you feel you learned some things from Secrets that went into the Vader project? Goyer: Absolutely. Even tiny things, like what duration of time you can take to impart information to a person in VR. It’s different. We discovered that, for instance, if you really need the user to pay attention for a little while, you might have to spotlight the environment a bit more. You take away some of the distractions, because the feeling of presence is so real. People talk about VR and presence, but being able to inhabit a Star Wars story—we’ve all dreamed about this since we were little kids. Now you can actually do it. It’s a whole new ballgame.
The other thing we learned—Secrets is a very fast-paced experience. It’s also a fairly short experience. This is a more expansive experience. We shouldn’t and wouldn’t want to maintain that pace for the entire experience. There are moments where you can be reflective in this experience. There are even moments of strange beauty. We were able to modulate all these different emotions.
Leo: Pacing is something that we’ve thought a lot about over time. You can’t drag people at a particular speed through the experience. “Now this, now this, and now this.” You need to let them, in a way—you give them a bite of story, let them digest that, and then they can decide when they’re ready to move on. That also came out of Secrets of the Empire.
Especially when people do VR for the first time, or they haven’t done it a lot–the first thing that pretty much everyone does in Secrets of the Empire is look down at their hands. If you’re trying to tell them the story at that moment, it’s just not going to register. If you have to do setup or exposition, which is certainly something we’re doing in this project, you have to do it carefully. You have to keep it short and focused. You may have to repeat things.
Above: Star Wars: Secrets of the Empire is a VR experience for the galaxy far, far away.
GamesBeat: I just came out of Quest as well, and it seems like the size of the space available to you might change. Does that change the game experience in some way, with a smaller or larger area to play in? Leo: We’re definitely trying to capitalize on the freedom of movement. What we have to be mindful of, though, is the experience has to work in spaces that everyone has — in your living room. We don’t want to make something that requires a massive space.
GamesBeat: How did you proceed once you started working with Oculus? You started with Secrets, and now you’re moving to this three-part thing, which sounds a lot more ambitious.
Goyer: It was actually a boon when Colum’s team came on. This is a new medium. Even this specific kind of VR narrative is new. It hasn’t really been attempted before. His whole department has been working on that from all these different angles. They were able to add all the experience they’ve had in some of the other projects they’ve been developing. It’s very helpful.
Colum Slevin: We’ve been talking to David and to the team at xLAB since the inception of the project, but Oculus Quest wasn’t formally on the road map yet. We didn’t have that all planned out. It took some time for the stars to align around what we wanted to do. I was really excited about helping to bring a Star Wars story to the platform, but until Oculus Quest came into view—that was the last piece for us, to realize that untethered VR and 360 degrees of motion with a lightsaber would be amazing.
Most of what we do with xLAB is similar to what we do with a lot of developers. We just try to remove obstacles, be supportive, and to the extent that we have the benefit of some wisdom from working with a range of developers, we try to bring that to the table as well. But these guys are the best in the world at what they do.
Above: Vader Immortal will be a three-part VR story series.
GamesBeat: Fans have already seen the Jedi experience in things like Disney’s AR products, or Trials of Tatooine. Is there something more you want to give to fans than what they’ve already tried? Goyer: When I first did Trials of Tatooine I thought it was amazing. But that was a demo, in a way. This, we think, is hopefully the gold standard. This is an original story. It was constructed specifically for VR. It’s not a piece of marketing that’s done as an adjunct to one of the movies or a TV show. It’s a stand-alone story with a beginning, a middle, and an end. It’s an expansive story. Without giving too much away, it’s something fans will not have experienced yet.
Slevin: It’s richer. I love the glimpses and the tastes I’ve gotten of Star Wars in VR, but this is a full meal. This is a full story. It has its own creative integrity. It has depth. It’s connected, but it stands alone.
Goyer: It’s connected to the Star Wars universe. We’ve already had some touch points with Secrets of the Empire, and there may be more touch points with other Star Wars stories in other media. But it’s its own story. Hopefully we’re also adding to the Star Wars mythology in this.
" |
17,047 | 2,019 | "Vader Immortal: Episode II is available now | VentureBeat" | "https://venturebeat.com/2019/09/25/vader-immortal-episode-ii-is-available-now" | "Vader Immortal: Episode II is available now
Developer ILMxLAB used Facebook’s Oculus Connect 6 developer conference to launch Vader Immortal: Episode II today. The Star Wars VR experience is available for both Oculus Rift and Oculus Quest right now.
As you would expect from an episodic experience, Vader Immortal’s second installment picks up where the last one left off. It has players continuing their training with Darth Vader. It also introduces new characters, such as a droid featuring the voice of actor Maya Rudolph.
“In Episode II, you’ll get to explore underneath Vader’s castle,” ILMxLAB senior producer Alyssa Finley said during an Oculus Connect presentation. “You’ll get to use the Force, and there will be some incredible plot twists along the way.” ILMxLAB’s goal with Episode II was to build on the foundation of both the story and the gameplay. The company brought back writer David S. Goyer to do the story, and it worked with Lucasfilm to ensure the story is canonical. [ So I guess this means four-armed Rancors are canon now — Ed.
] One of the big gameplay advancements in Episode II is the simplified Force mechanics. Now you can use a single button to begin activating Force powers like stunning enemies or flinging objects.
" |
17,048 | 2,011 | "Plunging solar panel prices claim first victim: Solyndra files for bankruptcy | VentureBeat" | "https://venturebeat.com/2011/09/01/solyndra-bankruptcy-solar-costs" | "Plunging solar panel prices claim first victim: Solyndra files for bankruptcy
A glut of photovoltaic solar panels on the market has caused solar cell manufacturer Solyndra to go under. The company filed for bankruptcy and laid off 1,100 workers this morning.
The company makes cylindrical solar rooftop systems and is a now-controversial recipient of a $535 million loan guarantee from the Department of Energy — one of the first of its kind as part of the U.S. government’s federal stimulus program. After raising $1 billion, the company was forced to slash costs, close a factory and cancel an initial public offering as photovoltaic panel prices collapsed amid the economic recession that began in 2008.
“The price drops outpaced market growth and put pressure on solar panel manufacturer revenues,” Lux Research analyst Matt Feinstein told VentureBeat. “Eventually, the market growth begins to then outpace the price decline, but that hasn’t happened yet.” As a result, the company has closed its factory in Fremont, Calif. The company grew from $6 million in revenue in its first year of operations to $140 million in revenue. Its solar panels cost an average of around $2 per watt to produce — which is high compared to many manufacturers in China and First Solar, which can produce solar panels for less than $1.20 per watt.
“I wish we would target more U.S. companies, but the solar manufacturing market is in Asia,” senior director of global product marketing Jim Cushing of Applied Materials, a company that sells solar panel manufacturing equipment, told VentureBeat. “China in particular has been extremely aggressive investing in building up a photovoltaic infrastructure and a photovoltaic capacity, and they’ve been able to drive their costs down.” Dropping demand for solar panels that capture between 16 and 17 percent of the sunlight shining on them has also taken its toll on solar cell manufacturers. Many governments are offering incentives for solar panel manufacturers, but the market is shifting in favor of higher-efficiency photovoltaic manufacturers because the incentives favor rooftop solar installations with smaller surface areas, Cushing said.
The loan guarantees are one of the financial engines powering many cleantech investments in Silicon Valley. But concerns about burgeoning debt in the United States have raised questions about whether the program will continue to exist. The program will likely continue to exist in its current form for electric car manufacturers, but other cleantech sectors could be in jeopardy, Kleiner Perkins Caufield & Byers partner Ray Lane told VentureBeat. Kleiner Perkins Caufield & Byers has a large presence in the cleantech space with investments in Fisker Automotive and electric bus manufacturer Proterra.
The U.S. government has allotted an enormous amount of money to companies like solar panel manufacturer First Solar. The program has issued conditional guarantees valued at around $16 billion to solar power projects and $38 billion to clean technology projects. Other countries stimulate cleantech expansion by other means, like feed-in tariffs.
“I don’t think (cutting the loan program) will ever not be on the table,” Feinstein said. “Newer technologies, unproven technologies are the ones that would suffer most because they rely on those loan guarantee programs.”
Solyndra’s annual revenues topped $140 million last year amid growth in U.S. and European markets. As of February, Solyndra had shipped nearly 100 megawatts of panels and expected to reach an installed system cost-of-goods-sold price of about $2 per watt in the first quarter of 2013.
" |
17,049 | 2,019 | "Could a pre-mortem analysis have saved Theranos? | VentureBeat" | "https://venturebeat.com/2019/07/21/could-a-pre-mortem-analysis-have-saved-theranos" | "Could a pre-mortem analysis have saved Theranos?
Have you ever wondered what it would be like to see the future of business? If Theranos’ leaders and high-profile investors could have predicted the future — one with incredible-sounding blood test technology that just plain didn’t work — would they have avoided their problems? What steps would you take if you knew how it all might end? And how would you change your plans if you could foresee all the obstacles, pitfalls, and dangers along the way? Unfortunately, time machines don’t exist (at least yet). However, there is a technique that might get you similar results – the pre-mortem analysis.
Seeing the future to change the present Let’s be honest – if you are an entrepreneur, an executive, or a project manager, any initiative you are responsible for faces an enormous number of risks, many of them unknown. In the case of Theranos, a big risk was: What if the device did not actually work — ever? In 20/20 hindsight, professors, PhDs, and journalists are saying, “Aha. I knew it would never work.” But many experts did share concerns about this failure scenario privately with the company earlier on.
If you run a startup or manage a breakthrough innovative project, the odds are stacked against you and failure is a very real option. The good news is, most of the problems you could face can be reverse-engineered and solved before they occur.
The pre-mortem analysis is a tool that lets you do just that. It helps to visualize risks for the project and develop strategies to mitigate them. It leverages the power of the prospective hindsight, i.e. imagining that an event has already happened, to allow you to correctly identify the reasons for future outcomes.
The analysis starts by imagining the future in which your project has spectacularly failed and lets you and your team track all the reasons why it happened. While the post-mortem analysis exists to learn what caused the death of a project after its end, the pre-mortem happens at its beginning – to improve the project while there is still time.
Running the pre-mortem The pre-mortem analysis is based on a relatively simple but very powerful process. It is possible to conduct it just by yourself, but you get real value when the entire team or at least the key stakeholders are involved. It creates a safe space to identify problems, empowers people to speak about potential weaknesses, reduces over-confidence, and promotes a culture of candor.
The time necessary to conduct the analysis depends on the size of the project, but you should reserve at least 90 minutes to 2 hours of uninterrupted time. If that seems like a lot, just consider how much time it would take to fix everything that could go wrong.
1. Preparation Before the team gathers for the workshop, you as the leader should prepare. First, review the goals and objectives of the project. Second, decide on the timeline. The shorter the horizon you look at, the more specific and relevant the brainstorming might get. However, with a longer time span, the ideas might be more creative and trailblazing. Third, you maximize the value of the session when all the ideas are recorded – so select a note-taker who will not participate in the workshop apart from taking notes.
2. Set the stage Every pre-mortem starts with setting the stage for the participants. After you describe the purpose and the steps of a pre-mortem analysis, remind everyone of the scope of the project, its goals and objectives, and brief the participants on the existing plans. Then paint the picture of impending doom, describing how the project failed miserably, letting everyone feel the pain of failure. The more vivid the picture, the better.
3. Brainstorm Once the image sinks in, ask the team to be brutally honest with themselves and to brainstorm all possible reasons for the failure – any negative turns of events, bad decisions, ignored warning signs, or assumptions that proved false. They might be connected to planning, execution, communication, results, or any other key aspects.
You can use any kind of brainstorming technique. Team members can work individually, but it is better to keep it as a collective exercise, breaking the team into groups of 4 – 6 people if there are too many. Remember to remind participants that the point is to brainstorm any and all ideas. The task is to record them, not to analyze them – or even worse, reject them. Furthermore, ask the team to use language as if they were in the future – using past tense for what “happened” and present tense for what “is now” – to dive into the experience and switch on the right parts of their brains.
4. Examine and select key risks After the brainstorming potential is exhausted, it is time to share all the ideas. Post-it notes are a great tool, because they allow you not only to make the thoughts visible to all but also to group them into themes. When the ideas are presented, it is time to challenge them – not to reject them, but to push them to their limit. Ask each other why it happened, why you did not see it coming, or how you made it happen.
When all the ideas are presented, select the most important ones. Consider their impact, the likelihood of them happening, and your level of control over them. During this exercise, you should focus on those you might be able to prevent. The external risks that are out of your control should be considered in a different risk management session. The team might vote on this, e.g. each member having 3 votes and assigning them to 1, 2, or 3 risks they see as the most critical ones.
5. Create an action plan and assign owners Now that you have identified the key risks, you must devise an action plan that would mitigate them. Be as specific and tactical as possible. Go into details. As the last step, assign owners to the plans – not to be responsible for fulfilling the plans by themselves, but to make sure that all necessary actions will happen.
Rewrite the future Once you are finished with the workshop, you have seen the world where you have magnificently failed, but you have also identified the potential problems, why they might happen, when they might occur and, most importantly, what can be done to prevent them. Now you need to do what it takes to rewrite the future into the one where you succeed. Your time-traveling mission is complete.
If you feel like there is too much negativity involved, you can bring some positive elements into the workshop. For the brainstorming part, divide the group into a Failure Team, mapping the risks, and a Success Team, brainstorming opportunities and decisions leading to unexpected outcomes that are even better than what is planned.
Optimism bias and confirmation bias tend to be the real enemies of startups, though. So it’s very important to challenge that optimism. And many individuals are afraid to speak against the group and bring up problems – for personal, organizational, or political reasons. Negativity might be perceived as disloyalty. A pre-mortem session allows all of that overlooked insight to come out and be examined.
Lukáš Konečný is a Principal at Y Soft Ventures.
You can reach him at [email protected]
" |
17,050 | 2,019 | "Elon Musk has solved Tesla’s problems using video games | VentureBeat" | "https://venturebeat.com/2019/06/18/elon-musk-has-solved-teslas-problems-using-video-games" | "Elon Musk has solved Tesla’s problems using video games Elon Musk, a gamer.
Elon Musk, a noted Twitter user and someone who probably says “waifu” out loud, is all about video games this week. The Tesla founder has apparently taken a break from not admitting or denying SEC allegations to put a video game in the dashboard of the company’s electric cars. And you know how I love video games, so I’m very excited about all of this. But the one thing I love more than video games is leveraging them to better target the key male 18-to-34 demographic.
The reason this works is because Musk obviously loves video games. You can tell because he tweets fan art of 2B from Nier Automata and then has a meltdown when people ask him to credit the artist. Who among us hasn’t had that happen to them? “[Baldur’s Gate 2] was so good,” Musk also tweeted in response to a joke.
I think this moment is especially insightful because it shows that he both likes something and doesn’t have a sense of humor. These are two of the most important characteristics of my gamer identity.
Of course, Musk doesn’t just tweet. He also appeared on a panel with The Elder Scrolls V: Skyrim director Todd Howard and The Game Awards creator Geoff Keighley at the Electronic Entertainment Expo (E3) game industry show last week.
During that event, Musk treated the audience of average gamers better than he treats his investors. And by that, I mean he did not call any of the questions “boring” or “boneheaded” like he did in a conference call with analysts last year.
Instead, Musk agreed to take a selfie with a fan by mumbling "sure" while only barely rolling his eyes. So gracious.
But what about the game? We’ve established that Elon Musk is potentially the greatest gamer alive. And now you can play a video game on one of his cars. Truly, we live in an age of technological miracles.
Tesla posted a video of Beach Buggy Racing 2: Tesla Edition on Twitter today, so you can see it in action for yourself: https://twitter.com/Tesla/status/1141041018835353600
And who doesn't want to sit in their car and play a game on an off-center display? I'm gonna make an appointment with my chiropractor right now because I'm going to stare slightly down and to the right for hours!
But the genius of turning the Tesla into a giant console on wheels is that it addresses a problem. The electric batteries on Tesla's cars have a range of about 310 miles. That means you may have to stop to charge them during long or frequent trips. And it takes about an hour to top off a Tesla.
Thankfully, this solves that. Now, you are no longer waiting for your car to charge or for Tesla to improve its technology. It doesn't need to improve its technology anymore! Sitting in a car and playing a port of a mobile kart racer for 60 minutes at a time is perfect, and I refuse to believe life can get better than that.
Most importantly, though, this means that Tesla can reach out to the gaming audience. The company is inviting gamers to “Experience the Tesla Arcade” through the end of this month.
At its showrooms, Tesla will enable people to play previously added games like Missile Command and Asteroids as well as Beach Buggy Racing 2.
It's almost like Musk read some research about how Teslas appeal to gamers, and this is what they came up with. What a titan of industry!
" |
17,051 | 2,019 | "Atari re-opens preorders for latest design for VCS game console | VentureBeat" | "https://venturebeat.com/2019/06/10/atari-shows-off-latest-design-for-vcs-game-console" | "Atari re-opens preorders for latest design for VCS game console. Atari VCS is targeted for launch in March 2020.
Atari has revealed its latest designs for the Atari VCS video game console.
Atari CEO Fred Chesnais spoke to me about the latest design in an interview. The new machine will be a Linux-based console with a Ryzen processor from Advanced Micro Devices, and it will play both old Atari games as well as new titles created in recent years.
While the original console was first targeted for release in 2017, Chesnais said that (as announced in March) the latest target date for the VCS is late 2019 for crowdfunding purchasers and March 2020 for new customers.
He said the machine would cost $250 for a 4-gigabyte version and $390 for an 8-gigabyte version. The company said the system is available for preorder today on GameStop.com, Walmart.com, and AtariVCS.com, for deliveries starting in March 2020.
Pricing starts at $250 for the Atari VCS 400 Onyx (4GB RAM) Base model and tops off at $390 for one of three Atari VCS 800 (8GB RAM) All-In system bundles that include an Atari VCS Classic Joystick and Atari VCS Modern Controller. Classic Joysticks priced at $50 and Modern Controllers at $60, created in partnership with PowerA, are also available now for retail preorder. Additional international presale dates and retailers will be announced soon.
Chesnais said the company is showing off the graphical user interface this week as well.
Above: Atari VCS “It’s very interesting to put your hands on the machine and your eyes on it,” he said. “We naturally went back to basics.” Atari will still have options for a modern controller and a classic joystick, with designs that have been modified from the original ones shown off in the past couple of years. The machine can also use a keyboard.
As for delaying from the summer to late 2019, Chesnais said the reason was to get access to better AMD chips.
Above: Atari VCS “When we saw the reaction come in from the community, it became a no-brainer for us,” he said. “We announced the upgrade to the new generation of AMD chips. We will do the end of the year for the Indiegogo units, and then March 2020 for mainstream.” Chesnais estimated 30 games would be available. As for competition with Intellivision, he noted the Intellivision would be a closed system, whereas the Atari system would be open, with the ability to add to the system and go out on the internet.
“It’s going to be plug and play, more than a toy, with a sandbox mode,” Chesnais said. “My dream is to have thousands of apps on the device for people to download and use on their living room TVs.” The industrial design was inspired by the Atari 2600 from 1977. The Linux-based design will enable players to operate in sandbox mode, which lets them install a second PC operating system like Windows or Chrome OS.
Original Atari VCS backers will receive their hardware starting in December 2019, completing a drive that began on May 30, 2018. Total gross sales have since topped $4 million with more than 12,000 individual contributors.
" |
17,052 | 2,019 | "Tencent invests in Antstream Arcade retro-game streaming platform | VentureBeat" | "https://venturebeat.com/2019/07/22/tencent-invests-in-antstream-arcade-retro-game-streaming-platform" | "Tencent invests in Antstream Arcade retro-game streaming platform. Left Behind III on Antstream.
Antstream Arcade , a London startup that makes a streaming platform for retro games, said it received a round of funding led by Chinese gaming and social giant Tencent.
London-based venture firm Hambro Perks also participated in the Series A funding. The funding amount wasn’t disclosed.
Antstream Arcade offers a monthly subscription service to more than 2,000 retro video games. With that big catalog, Antstream Arcade offers casual “snacks” for its subscribers, letting them pick up and play classic retro games on demand. They can also interact with social challenges on the platform and compete with friends.
Antstream Arcade is available on Mac, PC, Xbox One, tablet and mobile devices, Nvidia Shield, and Amazon Fire Stick, with other platforms coming soon. Because the games are streamed, you don't have to buy separate vintage hardware to access older titles.
Following a successful Kickstarter campaign, Antstream Arcade soft-launched in July 2019 in Western Europe, and it plans to launch in the U.S. at the end of 2019.
“Streaming is the future of gaming and the feedback we have received from our fans and early adopters has proven the point, they love it,” said Antstream Arcade CEO Steve Cottam, in a statement. “We are truly privileged that Tencent recognized our potential by investing so early in our journey. The company is synonymous with gaming and we are proud to be part of its family. Tencent has given us an incredible endorsement of our plans at Antstream Arcade, and we are delighted to lean on the support of its team as well as Hambro Perks to bring the joy of retro gaming to modern devices and new audiences.”
“Hambro Perks is thrilled to back Antstream alongside Tencent,” said Dom Perks, CEO of Hambro Perks, in a statement. “We predict that its retro gaming platform will be hugely popular with veteran gamers as well as younger demographics. Mobile streaming classic games is a fantastic idea and the team behind the vision is truly world-class.”
Antstream Arcade was founded in 2013 in London.
" |
17,053 | 2,019 | "Siggraph 2019 offers a sneak peek into what's next for AR, VR, and CG | VentureBeat" | "https://venturebeat.com/2019/07/30/siggraph-2019-offers-a-sneak-peek-into-whats-next-for-ar-vr-and-cg" | "Feature: Siggraph 2019 offers a sneak peek into what's next for AR, VR, and CG.
It’s hard to believe that the computer graphics conference Siggraph is celebrating its 46th birthday this year, but the annual event certainly doesn’t show any signs of middle age. Held this week at the Los Angeles Convention Center, Siggraph 2019 is all about the future of 2D and 3D digital worlds, attracting everyone from luminaries in pre-rendered CG to budding AR developers and VR artists.
Siggraph’s exhibition area opens today, adding to educational sessions that have been in progress since this weekend, and an “experiences” area that opened yesterday. I had the opportunity to attend the show’s official media preview and go hands-on with a bunch of this year’s most exciting innovations; here’s a photo tour of some of the best things I saw and tried.
Biggest wow moment: Il Divino – Sistine VR
There's no shortage of sophisticated mixed reality hardware at Siggraph, but I was most impressed by a piece of software that really demonstrated VR's educational and experiential potential. Christopher Evans, Paul Huston, Wes Bunn, and Elijah Dixson exhibited Il Divino: Michelangelo's Sistine Ceiling in VR, an app that recreates the world-famous Sistine Chapel within the Unreal Engine, then lets you experience all of its artwork in ways that are impossible for tourists at the real site.
The app begins with a modestly impressive ground-level recreation of the Chapel. Epic's Unreal Engine lets you see realistic marble barriers and ceramic floor tiles if you look closely, and you'll have no trouble making out the individual paintings as you approach them, though you won't confuse the faux wall curtains or other elements with reality. Even so, Il Divino's developers provide an impressive audio and lightly visual guided tour through the space, making the most of an interface that's largely about teleporting from place to place within a large box, and looking at art.
But everything changes when the app opens up access to a mechanical lifter and wooden scaffolding that elevate you to the Chapel’s ceiling. All of a sudden, you can control your up-close views of the paintings, and experience Michelangelo’s masterpiece Creation of Adam from the same perspective as the painter himself. The developers use VR — including your own fatigue after a comparatively brief session — to suggest how difficult the act of painting for hours (and months) on end must have been, while offering insights into the pacing and order of the works.
There are thousands of eye-melting VR experiences out there, and an equal number of dull “educational” ones.
Il Divino succeeds because it’s hyper-realistic in a different way, using virtual reality to both simulate and go past the original experience, enabling a form of education that feels more open to personal exploration. It will be available for free later this year from SistineVR.com.
Cinema, group and individual
It wouldn't be Siggraph without an exhibition of computer-generated movies, and the VR Theater at this year's show is worth seeing. Here, Disney's Bruce Wright welcomes early visitors to check out the company's brand new short, A Kite's Tale, an upbeat cartoon about hand-drawn 2D animation and computer-rendered 3D living side-by-side.
Fifty guests at a time are welcomed into the venue to see a collection of five different real-time CG shorts developed by separate studios, including A Kite’s Tale , Faber Courtial’s impressively realistic space odyssey 2nd Step , and Baobab’s charming interactive Bonfire.
Once inside, viewers are seated in chairs with individual VR headsets, headphones, and two controllers, collectively experiencing the five shorts over a roughly one-hour session. A bank of high-end PCs sits in the center of the room, powering and synchronizing the experiences, though there’s little ongoing sense of collaboration between participants. Instead, it’s a VR theater where everyone’s watching pretty much the same thing, albeit from whatever angle a specific head is on, and — in some cases — with differences attributable to the shorts’ interactive elements.
In a smaller room elsewhere at Siggraph, New York-based Parallux is offering a more clearly shared experience. The company has developed a short story for group viewing that’s akin to watching a Broadway show with friends, but you could be watching it from anywhere.
Here, Parallux CEO Sebastian Herscher gestures towards a table surrounded by Magic Leap AR headsets, which seated viewers use to watch Mary and the Monster , a unique spin on Mary Shelley’s creation of the Frankenstein story. Strong voice acting and solid motion capture bring the animated experience to life within a diorama-like stage setting. Magic Leap wearers can use their controllers as magnifiers to zoom in on the individual actors, akin to opera glasses.
Each viewer sees the play-like performance appearing on the same table, and it’s synchronized across all of the headsets at once; it can also be watched using VR headsets, and can be scaled to fairly large local or remote audiences. This is a glimpse into what could be the future of plays, experienced holographically and from any seat in the house you prefer.
Apart from the examples above, most of the VR displays I saw at Siggraph were focused on individual experiences. One interesting exhibit, MIT Media Lab's Deep Reality, used live heart rate, electrodermal activity, and brain activity monitoring to create an intensely personal relaxation and reflection experience. After someone lies down and dons a VR headset, Deep Reality uses "almost imperceptible light flickering, sound pulsations, and slow movements of underwater 3D creatures [to] reflect the internal state of the viewer." Who wouldn't love to kick back and relax to something so personally attuned at home?
Next-generation AR eyewear
Two of Siggraph's most notable hardware exhibits were Nvidia's new prescription AR eyewear and foveated AR headset — both still in research stages, but available to test with prototypes. The prescription AR glasses offered a vision-corrected, see-through AR display solution, including a demo of how the lenses let viewers see optically sharp projections that appear to float within the real world.
In the prototype form, the glasses had small, clear ribbons that displayed projected virtual images such as colored bottles or an Nvidia logo in front of the lenses. They didn’t require cables, and were as lightweight as modern, inexpensive plastic glasses are today.
A separate demo showed off Nvidia’s work on a Foveated AR Display, which the company suggests will use gaze tracking to enable multi-layer depth in AR images. In the image below, you can see how a specific small gaze area tracked by the headset becomes sharper to your eye as the background becomes softer and less detailed.
Nvidia is touting the Foveated AR Display as a “dynamic fusion of foveal and peripheral display,” and releasing a research paper to accompany the project. It’s unclear when the technology will actually appear in a shipping product, but it’s interesting to see Nvidia diving deeper into the AR world at this stage.
Next-generation haptics and immersion
Some of the other innovations at Siggraph are wild, if not crazy. For instance, Taipei Tech is showing off LiquidMask, a briefcase-sized face haptic solution that lets your face feel hot and cold liquid sensations in VR.
LiquidMask can deliver feedback and temperatures between 68 and 97 degrees Fahrenheit, potentially useful for underwater VR experiences — assuming, of course, that you’re willing to hook yourself up to something as large as this to experience those sensations.
Another company was taking steps towards a very different type of future with a gigantic prosthetic tail — something that one wouldn’t have expected to find at Siggraph. The tail can be used to augment someone’s existing sense of balance with a third stabilizing limb, or disrupt their balance for exercise or other purposes.
The prototype tail uses pneumatics, relying on a separate cabled air tank for motion, so there’s no need to worry about an imminent attack by The Lizard or Doctor Octopus. If it can be made portable (and quiet), it might wind up being useful for people with physical disabilities or motor limitations.
More small steps for Magic Leap
Magic Leap is offering two main demos at Siggraph's "experience" area. Long lines were forming to try Mica, a demo of the company's AI assistant, which presently can't do much. Mica looks like a pixie-haired human woman, and at some point, will supposedly be able to speak with and guide headset wearers.
In the demo, you can look at her as she looks back at you, then silently follow her gestures to make an artistic collage together. It’s not particularly exciting stuff at this stage, but in a world where digital assistants such as Siri can spend years delivering hit-and-miss experiences, Magic Leap may well beat Apple to delivering a more compelling, fully-formed alternative.
Magic Leap’s other new demo, Undersea, lets users interact with a nearly photorealistic coral reef that appears within any room you choose, and a picture-sized portal window into the ocean on the wall. In addition to letting you walk around and view a piece of coral and small collection of fish, the demo lets you hold out your hand to generate bubbles and hold a fish in your palm, albeit with so-so tracking.
While the Siggraph demo is designed for a two-minute experience (and isn’t especially compelling), a full version of Undersea with more settings and depth has just been released for Magic Leap users. Regardless of how many or few of the $2,300 Magic Leap headsets have been sold, it’s clear at Siggraph that the company is working to actively push the platform forward.
Best of the rest
One of Siggraph's greatest strengths is the diversity of computer-generated art it brings into focus for attendees. You might not love all of it, but even some of the most basic concepts are thought-provoking.
John Wong’s RuShi interactive art exhibit above uses your birthdate and birth hour to generate, through some unspecified mechanism, a moving and colorful AI-based data flow that is presented on the central screen while prior users’ data appears on adjacent screens. It’s supposed to make you consider the amount of data about you that’s already being processed by AI in the real world, and whether that processing has any value.
A Siggraph-wide new focus on Adaptive Technology includes multiple Microsoft adaptive controllers, a touchscreen presentation of different adaptive technologies, and 11 sessions/talks on the subject.
Last but not least, David Shorey’s booth demonstrated the use of 3D printers to create real-world physical clothes that looked like they were straight out of video games and fantasy settings, including dragon scale-like fabrics that could be used for cosplay. His techniques yielded an incredible collection of different textures, surface treatments, and end products that look set to merge the worlds of CG and real-world fashion.
The future's already here
My biggest takeaway from Siggraph 2019 is that the CG future some of us were expecting a decade or more ago is already here — if you know where to look. VR and AR aren't ubiquitous at this point, but it's obvious from this show that there are lots of smart people working to evolve CG from its early 2D roots into genuinely immersive, interactive 3D.
Attendees could spend nearly a week at Siggraph without fully grasping everything that’s underway with huge companies such as Disney and tiny groups of researchers across the world. Scenes like the one below, where a group of people are all sharing a computer-generated entertainment experience in VR, have become table stakes for VR as of 2019.
The question is “where does it go from here,” and there’s not just one good answer. If anything, Siggraph shows how many directions CG is heading in, and the reason is simple: Hugely talented and creative people are now heavily invested in the futures of these technologies. At this point, the challenge is to polish and spread their ideas to as many people as possible, bringing what’s currently in the Los Angeles Convention Center out to everyone’s homes and public spaces.
" |
17,054 | 2,019 | "New IBM technique cuts AI speech recognition training time from a week to 11 hours | VentureBeat" | "https://venturebeat.com/2019/04/10/new-ibm-technique-cuts-ai-speech-recognition-training-time-from-a-week-to-11-hours" | "New IBM technique cuts AI speech recognition training time from a week to 11 hours.
Reliable, robust, and generalizable speech recognition is an ongoing challenge in machine learning.
Traditionally, training natural language understanding models requires corpora containing thousands of hours of speech and millions (or even billions) of words of text, not to mention hardware powerful enough to process them within a reasonable timeframe.
To ease the computational burden, IBM in a newly published paper (“ Distributed Deep Learning Strategies for Automatic Speech Recognition “) proposes a distributed processing architecture that can achieve a 15-fold training speedup with no loss in accuracy on a popular open source benchmark (Switchboard). Deployed on a system containing multiple graphics cards, the paper’s authors say, it can reduce the total amount of training time from weeks to days.
The work is scheduled to be presented at the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP) conference next month.
As contributing researchers Wei Zhang, Xiaodong Cui, and Brian Kingsbury explain in a forthcoming blog post, training an automatic speech recognition (ASR) system like those in Apple’s Siri, Google Assistant, and Amazon’s Alexa requires sophisticated encoding systems to convert voices to features understood by deep learning systems and decoding systems that convert the output to human-readable text. The models tend to be on the larger side, too, which makes training at scale more difficult.
The team's parallelized solution entails boosting batch size, or the number of samples that can be processed at once, but not indiscriminately — that would negatively affect accuracy. Instead, they use a "principled approach" to increase the batch size to 2,560 while applying a distributed deep learning technique called asynchronous decentralized parallel stochastic gradient descent (ADPSGD).
As the researchers explain, most distributed training setups employ either synchronous approaches to optimization, which are disproportionately affected by slow systems, or parameter-server (PS)-based asynchronous approaches, which tend to result in less accurate models. By contrast, ADPSGD — which IBM first detailed in a paper last year — is asynchronous and decentralized, guaranteeing a baseline level of model accuracy and delivering a speedup for certain types of optimization problems.
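To make that concrete, here is a minimal single-process sketch of the decentralized averaging idea behind ADPSGD, simulated with NumPy on a toy least-squares problem. It is illustrative only: the data, shards, ring topology, and hyperparameters are invented for this example, and IBM's real system runs one learner per GPU with truly asynchronous communication.

```python
# Toy simulation of decentralized parallel SGD: each "worker" takes a local
# gradient step on its own data shard, then averages parameters with a single
# ring neighbor rather than synchronizing globally or using a parameter server.
import numpy as np

rng = np.random.default_rng(0)

# Toy least-squares problem: find w minimizing ||Xw - y||^2.
X = rng.normal(size=(1024, 16))
true_w = rng.normal(size=16)
y = X @ true_w + 0.01 * rng.normal(size=1024)

num_workers, lr, steps = 4, 0.01, 200
shards = np.array_split(np.arange(1024), num_workers)  # each worker owns a data shard
weights = [np.zeros(16) for _ in range(num_workers)]   # per-worker model replicas

for _ in range(steps):
    for i in range(num_workers):
        # Local mini-batch gradient step on this worker's shard (no global barrier).
        batch = rng.choice(shards[i], size=64, replace=False)
        grad = 2 * X[batch].T @ (X[batch] @ weights[i] - y[batch]) / len(batch)
        weights[i] = weights[i] - lr * grad
        # Decentralized averaging with one ring neighbor only.
        j = (i + 1) % num_workers
        avg = 0.5 * (weights[i] + weights[j])
        weights[i], weights[j] = avg.copy(), avg.copy()

print("distance to optimum:", np.linalg.norm(np.mean(weights, axis=0) - true_w))
```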
In tests, the paper’s authors say that ADPSGD shortened the ASR job running time from one week on a single V100 GPU to 11.5 hours on a 32-GPU system. They leave to future work algorithms that can handle larger batch sizes and systems optimized for more powerful hardware.
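As a quick sanity check on those figures (plain arithmetic on the numbers reported above, not anything taken from the paper):

```python
# One week on a single V100 versus 11.5 hours on 32 GPUs, per the reported results.
single_gpu_hours = 7 * 24            # 168 hours
multi_gpu_hours = 11.5
speedup = single_gpu_hours / multi_gpu_hours
print(f"speedup: {speedup:.1f}x")                 # ~14.6x, in line with the ~15-fold claim
print(f"scaling efficiency: {speedup / 32:.0%}")  # ~46% of perfect linear scaling on 32 GPUs
```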
“Turning around a training job in half a day is desirable, as it enables researchers to rapidly iterate to develop new algorithms,” Zhang, Cui, and Kingsbury wrote. “This also allows developers fast turnaround time to adapt existing models to their applications, especially for custom use cases when massive amounts of speech are needed to achieve the high levels of accuracy needed for robustness and usability.”
" |
17,055 | 2,019 | "IBM's AI performs state-of-the-art broadcast news captioning | VentureBeat" | "https://venturebeat.com/2019/05/14/ibms-ai-achieves-state-of-the-art-broadcast-news-captioning" | "IBM's AI performs state-of-the-art broadcast news captioning.
Two years ago, researchers at IBM claimed state-of-the-art transcription performance with a machine learning system trained on two public speech recognition data sets, which was more impressive than it might seem. The AI system had to contend not only with distortions in the training corpora’s audio snippets, but with a range of speaking styles, overlapping speech, interruptions, restarts, and exchanges among participants.
In pursuit of an even more capable system, researchers at the Armonk, New York-based company recently devised an architecture detailed in a paper (“ English Broadcast News Speech Recognition by Humans and Machines “) that will be presented at the International Conference on Acoustics, Speech, and Signal Processing in Brighton this week. They say that in preliminary experiments it achieved industry-leading results on broadcast news captioning tasks.
Getting to this point wasn't easy. The broadcast news task came with its own set of challenges, like audio signals with lots of background noise and presenters speaking on a wide variety of news topics. And while most of the training corpora's speech was well-articulated, it contained materials such as onsite interviews, clips from TV shows, and other multimedia content.
As IBM researcher Samuel Thomas explains in a blog post, the AI leverages a combination of long short-term memory (LSTM) networks — a type of algorithm capable of learning long-term dependencies — for acoustic modeling, along with complementary neural language models. The acoustic models contained up to 25 layers of nodes (mathematical functions mimicking biological neurons) trained on speech spectrograms, or visual representations of signal spectrums, while six-layer LSTM networks learned a "rich" set of acoustic features to enhance the language modeling.
After feeding the entire system 1,300 hours of broadcast news data published by the Linguistic Data Consortium, an international nonprofit supporting language-related education, research, and technology development, the researchers set the AI loose on a test set containing two hours of data from six shows with close to 100 overlapping speakers altogether. (A second test set contained four hours of broadcast news data from 12 shows with about 230 overlapping speakers.) The team worked with speech and search tech firm Appen to measure recognition error rates on speech recognition tasks and report that the system achieved 6.5% on the first test set and 5.9% on the second — a bit worse than human performance at 3.6% and 2.8%, respectively.
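The percentages quoted here are word error rates, the standard ASR metric: the minimum number of word substitutions, insertions, and deletions needed to turn the system's transcript into the reference transcript, divided by the reference length. Below is a minimal sketch of that computation; the wer function is illustrative and is not the evaluation tooling IBM and Appen used.

```python
# Minimal word error rate (WER) sketch via edit distance over word sequences.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = fewest substitutions + insertions + deletions needed to turn
    # the first i reference words into the first j hypothesis words.
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j - 1] + sub,  # substitution or match
                           dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1)        # insertion
    return dp[len(ref)][len(hyp)] / len(ref)

print(wer("the news starts at nine", "the news starts at five"))  # 0.2, i.e. 20% WER
```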
“[Our] new results … are the lowest we are aware of for this task, [but] there is still room for new techniques and improvements in this space,” wrote Thomas.
" |
17,056 | 2,018 | "Google Assistant can now speak two languages at once | VentureBeat" | "https://venturebeat.com/2018/08/30/google-assistant-can-now-speak-two-languages-at-once" | "Google Assistant can now speak two languages at once. Google Home.
Starting today, Google Assistant can speak two languages at once. At launch, Google Assistant can speak any pairing of English, French, German, Italian, Japanese, and Spanish. Multilingual support will expand to include more languages in the coming months, a company spokesperson told VentureBeat in an email.
Google Assistant is the first among competitors like Siri, Bixby, Alexa, and Cortana to be able to speak two languages at once.
Multilingual Google Assistant support is available on many devices that work with Assistant today, including Android smartphones, Home speakers, and Android tablets. The one exception to this rule is smart speakers with a screen and Google Assistant inside, like the Lenovo Smart Display released last month and the JBL Link View due out in the weeks ahead.
To add support for two languages, go to the Settings menu in the Google Assistant app, choose Preferences, then choose Assistant Languages. Once enabled, users can switch freely between their two languages of choice.
Google made Assistant's automatic speech recognition multilingual by using its LangID model to identify which language is being spoken while simultaneously running automatic speech recognition models for both languages. Another algorithm is then used to rank the confidence level of each speech-to-text transcription in a matter of milliseconds. Google will next try to enable Assistant to speak three languages at once, according to a Google AI blog post explaining the process to make it bilingual.
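Based on that description, the selection step can be thought of as running one recognizer per enabled language in parallel and keeping the most confident result. The sketch below illustrates only that general idea; the Transcription dataclass, the lang_id and recognizer callables, and the scoring rule are all placeholders, not Google APIs.

```python
# Schematic "run both recognizers, keep the most confident transcript" sketch.
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Transcription:
    language: str      # e.g. "en-US"
    text: str
    confidence: float  # recognizer's own 0-1 score for its transcript

def transcribe_bilingual(audio: bytes,
                         recognizers: Dict[str, Callable[[bytes], Transcription]],
                         lang_id: Callable[[bytes], Dict[str, float]]) -> Transcription:
    """Run one recognizer per enabled language in parallel, then keep the result
    whose combined language-ID and recognizer confidence is highest."""
    lang_scores = lang_id(audio)  # e.g. {"en-US": 0.7, "de-DE": 0.3}
    with ThreadPoolExecutor(max_workers=len(recognizers)) as pool:
        results = list(pool.map(lambda recognize: recognize(audio), recognizers.values()))
    return max(results, key=lambda r: r.confidence * lang_scores.get(r.language, 0.0))
```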
Google has extended support to new languages like Spanish, Swedish, and Dutch this year, and has committed to making Google Assistant available in more than 30 languages by the end of 2018.
Extension to new markets and non-English speaking parts of the world presents its own challenge, as Alexa and Google Assistant’s language understanding has been found to be 30 percent less accurate when hearing accents from outside the United States.
As smart speaker adoption grows and conversational computing becomes more available around the world, tech giants with AI-powered assistants, including Google, have made efforts to laud their technology’s ability to speak multiple languages.
This summer, Amazon announced that Alexa can now speak French and Echo speakers are coming to Mexico , and Apple lauded Siri’s growing use of location data to deliver accurate speech recognition.
The fight for consumers abroad appears to be increasingly important.
A Canalys report released earlier this month found that Google Home sales have outpaced Echo speaker sales worldwide for the second quarter in a row. The report also found demand growing at a healthy pace in other parts of the world, with Alibaba and Xiaomi now ranked third and fourth in the world in global smart speaker sales.
Also announced today: Google Home Max speakers are now available in the United Kingdom, France, and Germany.
The news comes a day ahead of the start of IFA, an international consumer electronics conference in Berlin where third-party manufacturers are putting Google Assistant into their devices such as the XBoom AI ThinQ smart speaker from LG and Voice, the first smart speaker from Marshall.
" |
17,057 | 2,019 | "Google Home gets real-time interpretations for 27 languages | VentureBeat" | "https://venturebeat.com/2019/01/08/google-home-gets-real-time-interpretations-for-27-languages" | "Google Home gets real-time interpretations for 27 languages. Google Assistant space at CES 2019.
Google today announced the introduction of real-time translations with Google Home speakers and third-party smart displays like those from JBL, Sony, and Lenovo. Interpretations will initially be available in 27 languages. Plans are to later bring real-time interpretations to mobile devices, but no date has been set, a company spokesperson told VentureBeat.
Real-time interpretation with Google Assistant is the latest conversational AI milestone from Google, following the release of Duplex and Call Screen for Pixel phones in late 2018. But just like the first response to Duplex , you should taper your expectations.
Initial demos by VentureBeat found Interpreter Mode to be quick in its response, but each exchange could last no more than 15 seconds, a limitation that makes Interpreter Mode helpful but not yet capable of handling the longer exchanges that often occur in a typical conversation. That length of time does seem helpful for interpretations in environments like hotel check-in or concierge desks.
Interpreter Mode for Google Assistant appears to build upon translation abilities introduced in fall 2017 for Pixel Buds that drew comparisons to the fictional Babel Fish.
While this feature attracted a fair deal of attention at the time, in practice there were some notable limitations. For example, rather than listening to a whole stretch of conversation and then producing a translation, Pixel Buds could only translate exchanges lasting up to 10 seconds.
The Google Translate app can also deliver some on-the-spot translations today, but it requires an internet connection, a hurdle that can trip up travelers who are away from home without a cell connection.
The news was announced today at the Consumer Electronics Show in Las Vegas.
Also today: Google Assistant is coming to Google Maps , and Android smartphone users can do things like share their ETA with a friend or loved one, search for places to visit along their route, or add a new stop to their journey. Android Maps users will be also able to ask Google Assistant to read their messages and enable hands-free reply messages with WhatsApp, SMS, Facebook Messenger, and other popular chat apps.
Starting with United Airlines flights in the U.S. in the coming days, Google Assistant can help you check in to a flight and get your boarding pass.
Building upon an announcement made late last year , in the coming weeks Android smartphones with Google Assistant will be able to respond to commands even when the phone is locked, so you can check your schedule, ask a question, add a reminder, or set a timer hands-free without the need to pick up and unlock your phone.
Assistant on lock screens was first made available for Pixel users and will be rolled out for other Android smartphones in the coming weeks, a company spokesperson told VentureBeat in an email.
Samsung introduced Google Assistant for its televisions , and Google Assistant will gain more entertainment options in the year ahead, with integrations planned for Dish remotes and Hopper receivers.
Google introduced Google Assistant Connect, a way for manufacturers to make simple integrations with Google Assistant for smart home control or to display information. With Connect, a manufacturer can enable simple smart home control or make an external button (like Amazon’s Echo Buttons can carry out Routines) that allows you to, for example, turn on a washing machine or see the weather on a smart mirror.
Above: Google Assistant Connect button On Monday, ahead of the start of CES, Google announced its Assistant will soon be available on one billion devices.
VentureBeat’s Kyle Wiggers reported that the voice of John Legend for Google Assistant, first showcased last year at I/O , has been postponed indefinitely.
While the new interpretation feature introduced today is only intended to act as a translator, it’s easy to imagine the same tech and intelligence someday helping to teach people a new language from scratch. Google Assistant can already provide answers about how to say phrases in more than 100 languages, such as “How do you say ‘hello’ in Mandarin Chinese?” The introduction of a translator for Google Home comes shortly after Google Assistant reached a previously stated goal to speak 30 languages in 80 countries by the end of 2018. Google also introduced a bilingual assistant capable of switching its language understanding between two languages.
Understanding of languages will naturally be key to powerful speech recognition and the spread of voice-powered assistants. By comparison, Apple’s Siri speaks more than 20 languages, and Amazon’s Alexa speaks roughly half a dozen languages.
" |
17,058 | 2,019 | "Google Assistant adds 9 new AI-generated voices | VentureBeat" | "https://venturebeat.com/2019/09/18/google-assistant-gains-9-new-voices-generated-by-ai" | "Google Assistant adds 9 new AI-generated voices.
Google Assistant can talk and sing like John Legend in the U.S., and it’s conversant in over 30 languages in 80 countries (up from 8 languages and 14 countries in 2017). But in the years since its international launch, Google’s AI interlocutor hasn’t offered a choice of voices outside of the U.S. Fortunately, that’s changing.
Google today announced that Google Assistant users who’ve selected English in the U.K., English in India, French, German, Japanese, Dutch, Norwegian, Korean, or Italian will gain a second voice type with a unique cadence. These will join the 11 English voices already available stateside, six of which were previewed at Google’s I/O 2018 developer conference last year.
Google Assistant product manager Brant Ward said each voice was synthesized by a machine learning system — WaveNet — pioneered by Alphabet’s DeepMind. For the uninitiated, WaveNet mimics things like stress and intonation (referred to in linguistics as prosody ) by identifying tonal patterns in speech. In addition to producing much more convincing speech snippets than previous AI models, it’s also more efficient. Running on Google’s tensor processing units (TPUs), or custom chips packed with circuits optimized for AI model training, a one-second voice sample takes just 50 milliseconds to create.
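For readers curious about the architecture, below is a toy PyTorch sketch of WaveNet's core building block: a stack of dilated causal 1D convolutions whose receptive field grows exponentially with depth. It is a simplified illustration of the published idea, not Google's production model, which also uses gated activations, residual and skip connections, and conditioning on linguistic features; all sizes here are assumptions.

```python
import torch
import torch.nn as nn

class DilatedCausalStack(nn.Module):
    def __init__(self, channels: int = 32, layers: int = 8):
        super().__init__()
        # Dilation doubles each layer (1, 2, 4, ..., 128), so the receptive field
        # grows exponentially while the parameter count stays small.
        self.convs = nn.ModuleList(
            [nn.Conv1d(channels, channels, kernel_size=2, dilation=2 ** i) for i in range(layers)]
        )
        self.out = nn.Conv1d(channels, 256, kernel_size=1)  # 256-way output over 8-bit audio values

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time). Left-pad before each conv so the output at time t
        # depends only on inputs at times <= t (the "causal" part).
        for conv in self.convs:
            pad = (conv.kernel_size[0] - 1) * conv.dilation[0]
            x = torch.relu(conv(nn.functional.pad(x, (pad, 0))))
        return self.out(x)

model = DilatedCausalStack()
logits = model(torch.randn(1, 32, 1600))  # roughly 0.1s of 16kHz audio, already embedded to 32 channels
print(logits.shape)  # torch.Size([1, 256, 1600])
```

Because every output sample depends only on past samples, a stack like this can generate audio one sample at a time at inference, which is why fast accelerator implementations matter so much for real-time synthesis.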
Amazon followed in DeepMind’s footsteps earlier this year with a neural text-to-speech model that enables Alexa to narrate snippets from Wikipedia more naturally, with a contextually sensitive voice.
Nearly a dozen voices generated by the same model rolled out to Amazon Polly in July, following the addition of 38 new WaveNet-generated voices to Google’s Cloud Text-to-Speech service.
“[The new Google Assistant voices] sound natural, with great pitch and pacing,” said Ward. “We’ve learned that people enjoy choosing between voices to find the one that sounds right to them.” Eager to give them a go? Head to the Settings menu in the Google Assistant app for Android or iOS, where you'll see a list of choices displayed by color as opposed to gender. (Google says the hues are intended to avoid influencing people's voice selection with labels.) Alternatively, if you live in one of the countries receiving a new voice, either a "red" voice or "orange" voice will be randomly assigned when you first set up Google Assistant.
“A lot of people are surprised to learn that they don’t always stick with the voice they’ve been using, so give it a shot. You might just find one that sounds even better than the one you’ve been using,” said Ward.
Google Assistant’s new voices follow the rollout of Continued Conversation , a feature akin to Alexa’s Follow-Up Mode that listens for additional queries or follow-up questions after an initial exchange. In February, Google expanded multilingual support — which enables Google Assistant to recognize multiple languages in multiturn conversations — to Korean, Hindi, Swedish, Norwegian, Danish, and Dutch. In other news, Google introduced Interpreter Mode for translations in dozens of languages ; announced a reduction in speech recognition errors of 29% ; and detailed Duplex on the web , an evolution of Google’s Duplex chat agent that can handle things like rental car bookings and movie ticket purchases.
" |
17,059 | 2,019 | "Google researchers train AI to distinguish 9 Indian languages | VentureBeat" | "https://venturebeat.com/2019/09/30/google-researchers-train-ai-to-distinguish-9-indian-languages" | "Google researchers train AI to distinguish 9 Indian languages.
The world speaks thousands of languages — roughly 6,500 of them — and systems from the likes of Google, Facebook, Apple, and Amazon become better at recognizing them each day. The trouble is, not all of those languages have large corpora available, which can make training the data-hungry models underpinning those systems difficult.
That's the reason Google researchers are exploring techniques that apply knowledge from data-rich languages to data-scarce languages. The work has borne fruit in the form of a multilingual speech parser that learns to transcribe multiple tongues, which was recently detailed in a preprint paper accepted at the Interspeech 2019 conference in Graz, Austria. The coauthors say that their single end-to-end model recognizes nine Indian languages (Hindi, Marathi, Urdu, Bengali, Tamil, Telugu, Kannada, Malayalam, and Gujarati) highly accurately, while at the same time demonstrating a "dramatic" improvement in automatic speech recognition (ASR) quality.
“For this study, we focused on India, an inherently multilingual society where there are more than thirty languages with at least a million native speakers. Many of these languages overlap in acoustic and lexical content due to the geographic proximity of the native speakers and shared cultural history,” explained lead coauthors and Google Research software engineers Arindrima Datta and Anjuli Kannan in a blog post. “Additionally, many Indians are bilingual or trilingual, making the use of multiple languages within a conversation a common phenomenon, and a natural case for training a single multilingual model.”
Above: A comparison of conventional ASR system architectures to that of Google’s end-to-end model.
Somewhat uniquely, the researchers’ system architecture combines acoustic, pronunciation, and language components into one. Prior multilingual ASR works accomplished this without addressing real-time speech recognition. By contrast, the model proposed by Datta, Kannan, and colleagues taps a recurrent neural network transducer adapted to output words in multiple languages one character at a time.
In order to mitigate bias arising from small data sets of transcribed languages, the researchers modified the system architecture to include an extra language identifier input, an external signal derived from the language locale of the training data. (One example: the language preference set in a smartphone.) Combined with the audio input, it enabled the model to disambiguate a given language and learn separate features for separate languages as needed.
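To make that conditioning mechanism concrete, here is a minimal sketch, assuming invented feature sizes and language codes, of how a one-hot language identifier derived from a locale signal could be appended to every acoustic frame before it reaches the encoder. It illustrates the general technique rather than Google’s actual implementation.
```python
import numpy as np

# Hypothetical language inventory; the codes and their order are illustrative.
LANGUAGES = ["hi", "mr", "ur", "bn", "ta", "te", "kn", "ml", "gu"]

def add_language_id(features: np.ndarray, locale: str) -> np.ndarray:
    """Append a one-hot language vector to every acoustic frame.

    features: (num_frames, feature_dim) array of, say, log-mel frames.
    locale:   language code taken from an external signal such as the
              device's language preference.
    """
    one_hot = np.zeros(len(LANGUAGES), dtype=features.dtype)
    one_hot[LANGUAGES.index(locale)] = 1.0
    # Tile the language vector across time and concatenate it to each frame.
    lang_frames = np.tile(one_hot, (features.shape[0], 1))
    return np.concatenate([features, lang_frames], axis=1)

# Example: 200 frames of 80-dim features tagged as Hindi.
frames = np.random.randn(200, 80).astype(np.float32)
conditioned = add_language_id(frames, "hi")
print(conditioned.shape)  # (200, 89)
```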
The team further augmented the model by allocating additional parameters per language in the form of residual adapter modules, which helped to fine-tune the global model for each language and improve overall performance. The end result is a multilingual system that outperforms the corresponding single-language recognizers, and that simplifies training and serving while meeting the latency requirements for applications like Google Assistant.
Above: Chart of training data for the nine languages Google’s AI model recognizes.
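A residual adapter of the kind described above is usually a small bottleneck network whose output is added back to a shared, frozen activation, so each language trains only a handful of extra parameters. The sketch below uses plain numpy with assumed layer sizes; it shows the general pattern, not the exact modules from the paper.
```python
import numpy as np

class ResidualAdapter:
    """Tiny per-language bottleneck wrapped around a shared encoder layer.

    The 512-dim activations and 64-dim bottleneck are assumptions made for
    illustration, not values taken from the paper.
    """

    def __init__(self, dim: int = 512, bottleneck: int = 64, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.down = rng.standard_normal((dim, bottleneck)) * 0.01
        self.up = rng.standard_normal((bottleneck, dim)) * 0.01

    def __call__(self, shared_out: np.ndarray) -> np.ndarray:
        # Project down, apply a nonlinearity, project back up, then add the
        # result to the unchanged shared activation (the residual connection).
        hidden = np.maximum(shared_out @ self.down, 0.0)  # ReLU
        return shared_out + hidden @ self.up

# One adapter per language; only these small matrices differ per language.
adapters = {lang: ResidualAdapter(seed=i) for i, lang in enumerate(["hi", "ta", "bn"])}
shared_activations = np.random.randn(200, 512)
hindi_out = adapters["hi"](shared_activations)
print(hindi_out.shape)  # (200, 512)
```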
“Building on this result, we hope to continue our research on multilingual ASRs for other language groups, to better assist our growing body of diverse users,” the coauthors wrote. “Google’s mission is not just to organize the world’s information but to make it universally accessible, which means ensuring that our products work in as many of the world’s languages as possible.” The system — or one like it — is likely to find its way into Google Assistant, which in February gained multilingual support for multiturn conversations in Korean, Hindi, Swedish, Norwegian, Danish, and Dutch. In related news, Google introduced Interpreter Mode for translations in dozens of languages and nine new AI-generated voices.
" |
17,060 | 2,018 | "Cursor Enterprise AI helps companies find and use their code across cloud services | VentureBeat" | "https://venturebeat.com/2018/10/24/cursor-enterprise-ai-helps-companies-find-and-use-their-code-across-cloud-services" | "Cursor Enterprise AI helps companies find and use their code across cloud services
Data management is time-consuming — particularly when you’re dealing with information across multiple cloud services. The typical enterprise uses about 91 marketing cloud solutions, many of which go underutilized, and pays an average of 36 percent more for cloud services than it actually needs to.
That’s why Adam Weinstein, Jason McGhee, and Patrick Farrell — veterans of companies such as LinkedIn, Salesforce, and Pandora — cofounded Cursor , an analytics platform that enables companies to take full advantage of the disparate apps they’re already paying for. The San Francisco startup, which emerged from stealth last year with $2 million in seed funding from Toba Capital and other investors, counts employees at Apple, Atlassian, Slack, and more than 500 other organizations among its customers.
This week, it’s launching a new offering — Cursor Enterprise — tailored to fit the needs of large organizations.
“Cursor Enterprise is designed to meet the needs of the world’s largest data-driven organizations with a host of new features, spearheaded by the use of machine learning to bring relevant data to you, faster,” Weinstein said. “We’ve been listening to … feedback and have been hard at work on new features that will allow you (and large-scale organizations everywhere) to not only better manage your data, but extract more actionable insights from it.”
From a user perspective, Cursor behaves like a search engine. Queries pull up a list of results compiled from connected services, anything from whole files to individual segments of SQL, to which people can attach comments and tags. The idea is to cut down on the amount of time folks spend sifting through cloud folders or pinging colleagues for files and code.
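As a rough illustration of that search-plus-annotation workflow, the toy, in-memory sketch below indexes shared SQL snippets with tags and comments. The field names and data are invented; this is not Cursor’s actual API.
```python
from dataclasses import dataclass, field

@dataclass
class Snippet:
    """A shared asset such as a SQL segment, plus team annotations."""
    title: str
    sql: str
    tags: set = field(default_factory=set)
    comments: list = field(default_factory=list)

# Invented examples standing in for assets pulled from connected services.
catalog = [
    Snippet("weekly active users", "SELECT COUNT(DISTINCT user_id) FROM events", {"growth", "kpi"}),
    Snippet("churned accounts", "SELECT account_id FROM subscriptions WHERE status = 'churned'", {"retention"}),
]

def search(query: str) -> list:
    """Return snippets whose title, SQL, or tags mention the query term."""
    q = query.lower()
    return [s for s in catalog
            if q in s.title.lower() or q in s.sql.lower()
            or any(q in t for t in s.tags)]

hits = search("retention")
hits[0].comments.append("Reused for the Q3 board deck")  # attach a comment
print([h.title for h in hits])  # ['churned accounts']
```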
So what differentiates Cursor Enterprise, which operates on a paid software-as-a-service model, from Cursor’s free plan? For starters, there’s a new centralized management dashboard that lets admins impose user and content access policies, as well as an enhanced tracking tool built with compliance in mind. Also in tow are an AI-powered recommendation engine that surfaces files that might otherwise fly under the radar, and a data catalog that indexes files from business intelligence platforms (like Tableau, Power BI, and Looker), on-premises databases, and content management systems.
The new arrivals join Cursor’s existing suite of features. Subscribers get universal search across all connected services, real-time automated backups of files undergoing edits, and automatic code formatting and verification, in addition to built-in Q&A tools that connect users with domain experts within their organization.
“It’s designed to help folks in the data weeds,” Weinstein told VentureBeat in a phone interview. “[The platform can] identify [files and code] that can be reused [and] provide insights.” Cursor Enterprise is available broadly today, and will gain new features in the weeks ahead. Python Notebooks support is on the way, as is metric tracking and integrations with ElasticSearch, BigQuery, Microsoft Dynamics, and more.
Cursor’s in a profitable market segment. The cloud infrastructure services market is forecast to be worth $81.29 billion by 2023, according to MarketsandMarkets.
" |
17,061 | 2,019 | "DataRobot acquires Cursor, a data collaboration platform | VentureBeat" | "https://venturebeat.com/2019/02/26/datarobot-acquires-data-collaboration-platform-cursor" | "DataRobot acquires Cursor, a data collaboration platform
Boston startup DataRobot , which helps enterprises build predictive models, today announced that it’s acquiring San Francisco-based data collaboration platform Cursor for an undisclosed amount. As part of the deal, DataRobot says it’ll build a San Francisco office and integrate Cursor’s technology into its machine learning solutions suite.
Prior to the acquisition, Cursor had raised $2 million in venture capital, according to Crunchbase.
“The Cursor technology and team are key pieces of the puzzle for [our] vision of an AI-driven world,” said DataRobot CEO Jeremy Achin. “The Cursor team has architected an incredible platform and shares DataRobot’s mission to build unprecedented levels of automation across the AI pipeline. We’re thrilled to have them on board.” Cursor, which was cofounded in 2017 by LinkedIn, Salesforce, and Pandora veterans Adam Weinstein, Patrick Farrell, and Jason McGhee, offers a coding environment with a data dictionary and toolset aimed at bridging the gap between the C-Suite and technical users. “We’ve seen the value that gets unleashed when companies truly become data-driven,” said Weinstein, who serves as CEO.
Within Cursor’s solution, team members contribute business terms and definitions and link them to data assets that can be searched for, cloned, reused, or amended down the line with comments, quality ratings, and other metadata. (A customizable permission model enables admins to lock content by user or group.) Users can chat with other users to get answers to questions, and pull up authorship history on the fly.
Cursor’s platform doesn’t require a server and supports Mac, Windows, and Linux. Moreover, it integrates with existing business intelligence platforms like Looker, Qlik, and Tableau; with databases such as Amazon’s Athena, IBM’s DB2, and Snowflake; and with customer relationship management (CRM) products from Salesforce, ServiceNow, and Workday.
Weinstein says that hundreds of companies use Cursor, including teams at Cisco, Under Armour, LinkedIn, Slack, and Apple.
DataRobot, which was cofounded in 2013 by data scientists who previously worked at Travelers Insurance, offers hundreds of trained and validated machine learning algorithms contributed by data scientists through its platform. (One of those models correctly predicted which nominated song would win this year’s Grammy Award for Song of the Year.) It has raised $225 million in funding to date from New Enterprise Associates, Sapphire Ventures, Meritech, DFJ, and other investors, and earlier this year it made research firm CB Insights’ list of top 100 AI companies.
Cursor is DataRobot’s third purchase in recent months. In July 2018, it also acquired Columbus, Ohio-based AI startup Nexosis, and in 2017 it bought Nutonian.
" |
17,062 | 2,015 | "BlackBerry lays out security strategy as turnaround takes shape | VentureBeat" | "https://venturebeat.com/2015/07/23/blackberry-lays-out-security-strategy-as-turnaround-takes-shape" | "BlackBerry lays out security strategy as turnaround takes shape
A Blackberry sign is seen in front of their offices on the day of their annual general meeting for shareholders in Waterloo, Canada June 23, 2015.
(By Euan Rocha, Reuters) – BlackBerry on Thursday showcased a suite of security products that safeguard everything from medical devices to Hollywood movie scripts, though its CEO acknowledged that his effort to transform the company remains a work in progress.
The Waterloo, Ontario-based company, whose smartphone market share has dwindled, is attempting to morph into a more software-focused entity.
“I’m pretty satisfied with the progress on the turnaround so far,” BlackBerry’s Chief Executive John Chen said in an interview just before an event in New York. “I laid out the $500 million software revenue target and I’m still comfortable with that commitment for this fiscal year, it looks good.” He indicated, however, that the full turnaround he has been promising could take longer than initially expected. Going by his initial timetable, BlackBerry would now be about six months away from seeing real traction from its overhaul. But Chen said he now sees it taking about 12 to 18 months for investors to reap rewards.
Analysts have been skeptical about the company’s ability to steadily and sustainably grow software revenue, even as revenues from its smartphone unit and legacy system access fees decline.
“We’re patiently building the product pipeline and the sales channel,” he said. “There is still a lot of work to do, I’d love for everything to move faster, but I caution people to be a bit patient because we can’t rebound in a very short period of time, no company can. We are doing all the right things for the long term and the company is definitely out of financial trouble.” Despite Chen’s success in shoring up BlackBerry’s balance sheet, and halting its cash bleed, its shares are still trading at levels they were at 15 months ago, as investors look for proof that it can get back on a growth trajectory.
The company, which has acquired a string of niche software-focused companies in the last 18 months, is now set on building out a bigger sales team, while also tapping the sales staff of telecom carriers and other partners to market its array of security-focused products.
“The company was not really set-up as a software delivery company, and it’s not a trivial thing to get there,” said BlackBerry’s Chief Operating Officer Marty Beard, adding that measures taken in the last year have improved BlackBerry’s ability to identify and target potential clients.
(Reporting by Euan Rocha; Editing by Christian Plumb)
" |
17,063 | 2,017 | "BlackBerry inks licensing deal to sell smartphones to more than 1 billion people in India and neighboring markets | VentureBeat" | "https://venturebeat.com/2017/02/06/blackberry-announces-licensing-deal-to-target-more-than-a-billion-people-across-india-and-neighboring-markets" | "BlackBerry inks licensing deal to sell smartphones to more than 1 billion people in India and neighboring markets
BlackBerry has announced a licensing deal to bring BlackBerry-branded devices to more than a billion people across India and neighboring markets.
The news comes a little more than four months after the Canadian tech company revealed it would end in-house hardware development , electing to outsource to partners and become a software-focused company instead. The first fruit of the licensing deal was a joint venture called BB Merah Putih in Indonesia, which has been one of BlackBerry’s biggest markets for devices, as well as for usage of its BBM messaging service.
BlackBerry followed up with a global partnership signed with California-based TCL Communication in December, providing “acceleration to BlackBerry’s transition into a security software and services company,” the company announced at the time. However, a month earlier, BlackBerry laid the foundation for its India push when it announced a partnership with Indian telecom company Optiemus Infracom to market and distribute a duo of Android BlackBerry phones — the DTEK60 and DTEK50.
Fast-forward three months, and BlackBerry has now confirmed that it will use Optiemus to “design, manufacture, sell, promote, and support” BlackBerry-branded devices across India, Sri Lanka, Nepal, and Bangladesh.
“Together, that encompasses nearly 1.5 billion people, most of whom have never owned a smartphone before,” explained Alex Thurber, head of BlackBerry’s mobility solutions division. “BlackBerry will maintain security on those devices through regular updates.” This is a milestone moment for BlackBerry as it strives to reinvent itself after years of decline — it now has licensees in place in every market globally, “thus completing our transition to a security software and services company,” Thurber added.
BlackBerry’s shift from being a hardware company to a brand licensor mirrors that of fellow former mobile phone giant Nokia, which has set up a business vehicle called HMD Global with a view toward bringing Nokia-branded handsets to market again.
India, in particular, has become a key battleground for technology companies as the country has a population of around 1.3 billion people, most of whom still aren’t online.
The likes of Google and Facebook are tackling things from a services perspective, introducing data-saving apps and funding public internet initiatives.
But smartphones are still owned by less than 20 percent of the population, which represents huge growth potential. Estimates indicate that India could soon overtake the U.S. to become the world’s second-biggest smartphone market in terms of units shipped. Apple is also expected to begin manufacturing the iPhone in India as early as April this year.
Put simply, India could prove to be a major market for BlackBerry as it looks to gain ground on its competitors. “This partnership will allow us to further extend the BlackBerry software experience in a region which is slated to become a hotbed of mobile growth and innovation,” said Thurber. “With our three hardware licensing partners, BlackBerry devices will now reach every corner of the globe.”
" |
17,064 | 2,017 | "BlackBerry is no longer a phone company | VentureBeat" | "https://venturebeat.com/2017/03/03/blackberry-is-no-longer-a-phone-company" | "BlackBerry is no longer a phone company
Mobile World Congress 2017 concluded on Thursday, but as usual all the devices had been announced on the opening weekend, starting with TCL’s BlackBerry KeyOne.
Buried in the news of the fourth BlackBerry device running Android was a big change for the company: BlackBerry no longer makes devices; BlackBerry now only makes software. We had the opportunity to speak with Alex Thurber, head of BlackBerry’s mobility solutions unit, about the Canadian company’s transformation.
Over the past year, BlackBerry’s business has increasingly relied on licensing — the division Thurber leads was once called the devices business unit. Now it’s the mobility solutions unit. That’s right — no more devices. Instead, the unit is responsible for supporting handsets and all other hardware. BlackBerry has largely completed its move to a licensing-only model.
“We’re excited how as mobility solutions we fit into the broader BlackBerry story, which is obviously focused on big picture enterprise mobility,” Thurber told VentureBeat. “And again, with our licensing ventures I think we’re in a great position to move forward. You’re going to see a lot more BlackBerries going forward than you have in a long time. Soon our other licensees will start releasing products as well. I think it’s going to be a very exciting year for BlackBerry as we really look at re-expanding our move with smartphones.” So the KeyOne is just the first of many BlackBerry-branded devices coming out this year. Instead of devices, the company sees itself offering security and privacy features, specifically making it easier for companies, governments, and individuals to protect whatever information they deem valuable. How will BlackBerry do this without making any actual devices? By focusing on software.
License out everything, but control the software
BlackBerry has handed all the hardware and sales details to its licensees. The KeyOne is the first BlackBerry phone that BlackBerry isn’t pushing itself. The past three devices, the BlackBerry Priv, the DTEK50, and the DTEK60, all went through the BlackBerry distribution chain.
Now, licensees are responsible for selling the BlackBerry-branded phones through their own distribution chains. There’s no difference to the end customer, whether it’s a business or an individual user, as the devices are still available in retail stores, from carriers, and on the open market. But for carriers and distributors, they’ll now work with the licensees, as opposed to with BlackBerry.
BlackBerry is no longer making any hardware or manufacturing any devices. The company is still selling some phones, but only those that were already built. You can no longer buy a new BlackBerry from BlackBerry.
This transition has been ongoing for the last month or so. The BlackBerry distribution chain is finito.
When asked about the demand breakdown for BlackBerry devices (enterprises versus individuals), Thurber deftly explained that it really depends on the market. For BlackBerry, Indonesia is very much a consumer market; India is small business, enterprise, and government; Western Europe is more business than government-focused; and South Africa is a lot of consumers. Although the KeyOne is launching globally, licensees will also be building BlackBerry phones tailored to specific regions (announced licensees so far include TCL, BB Merah Putih in Indonesia, and Optiemus in India, Sri Lanka, Nepal, and Bangladesh).
“They are on the ground, they know exactly what their consumers require, be it individuals, be it companies,” Thurber explained. “They are developing the hardware to fit that market at their particular price point and then we’re coming in with that software expertise, the BlackBerry brand, and all awareness and positive vibes behind that. I think it’s a real winning combination.” The licensee is responsible for coming up with the marketing and branding plan, including the name. Although BlackBerry doesn’t design the phone, the company can approve or deny the final design and name.
At the end of the day, BlackBerry has “complete control of the software.” Well, Google still makes Android, but BlackBerry offers its software and apps on top, including monthly security patches.
BlackBerry delivers a “signed and sealed software image” for every device with its name on it. The company will also work with the licensee for particular applications that a given market requires.
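For readers wondering what a “signed and sealed” image implies in practice, here is a generic, hypothetical sketch of how a device could verify an OS image against a vendor’s public key before installing it. It relies on the third-party cryptography package and placeholder file names, and it is not a description of BlackBerry’s actual update pipeline.
```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def verify_image(image_bytes: bytes, signature: bytes, public_key_bytes: bytes) -> bool:
    """Return True only if the image was signed by the holder of the private key."""
    public_key = Ed25519PublicKey.from_public_bytes(public_key_bytes)
    try:
        public_key.verify(signature, image_bytes)
        return True
    except InvalidSignature:
        return False

# Hypothetical usage; the file names are placeholders, not real artifacts.
# with open("os_image.bin", "rb") as f:
#     image = f.read()
# with open("os_image.sig", "rb") as f:
#     sig = f.read()
# install_allowed = verify_image(image, sig, vendor_public_key)
```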
“There won’t be anything put on the device that we haven’t approved,” a BlackBerry spokesperson confirmed with VentureBeat. “Anything else has to be stock downloaded and installed from the Google Play store.”
Never say never
Given this new strategy, which has been in the making for over a year, one would think that BlackBerry is entirely focused on software for Android devices and that it will never make hardware again. That would make sense, but the company isn’t quite ready to commit.
When asked where BlackBerry OS fits into all of this, Thurber replied with the same thing BlackBerry has been saying ever since rumors of an Android BlackBerry first arrived: BlackBerry is still supporting BlackBerry OS with regular updates.
That’s right: BlackBerry still employs engineers focused on BlackBerry 10 — the latest 10.3.3 update came out in December 2016. Thurber wouldn’t definitively say there will be no more BlackBerry OS devices, nor would he commit to a future BlackBerry 10 phone. In other words, it’s technically still possible, assuming there’s enough demand from customers.
If a non-Android BlackBerry is released sometime in the near future, however, it will probably be made by a licensee. So, will we never see a BlackBerry-sold phone again? “I’m always cautious in the technology world to never say ‘never’,” said Thurber. “In the foreseeable future, our model is very much focused on the software licensing and working with partners on the hardware development, and then the branding, selling, etc.”
BlackBerry is dead, long live BlackBerry!
" |
17,065 | 2,022 | "Global cybersecurity workforce to be short 1.8 million by 2022 | VentureBeat" | "https://venturebeat.com/2017/06/07/global-cybersecurity-workforce-to-be-short-by-1-8-million-personnel-by-2022-up-20-on-2015" | "Global cybersecurity workforce to be short 1.8 million by 2022
The global cybersecurity workforce will be short by around 1.8 million people by 2022, according to a new report by Frost & Sullivan, representing a rise of around 20 percent over the shortfall projected in 2015.
The Global Information Security Workforce Study (GISWS) is carried out every two years by the Center for Cyber Safety and Education and (ISC)² , with the 2015 report identifying a workforce shortage of around 1.5 million by 2020.
This latest report reveals the outlook isn’t getting any rosier.
The findings from the updated survey, which taps insights from around 19,000 cybersecurity professionals, are being drip-fed throughout 2017 via a series of dedicated reports. But the inaugural results show that around two-thirds of those surveyed currently don’t have “enough workers to address current threats,” while 70 percent of managers responsible for hiring want to bolster their in-house security teams to some degree this year.
Investment surge
Cybersecurity investment has gone through the roof in recent years — just today, Illumio announced it had raised $125 million to help protect data centers against cyberthreats. And technology companies are getting increasingly creative in their quest to grow the available cybersecurity talent — last year, Facebook revealed it was open-sourcing its Capture the Flag competition platform that teaches developers about cybersecurity. Why? Because of the anticipated demand for cybersecurity professionals in the coming years.
Elsewhere, networking technology giant Cisco has been going all-in to boost its cybersecurity credentials. Last June, it launched a $10 million scholarship to tackle the cybersecurity talent shortage. As part of the two-year program, Cisco said it would provide training and mentoring, presenting successful graduates with a certification that qualifies them for a security operations analyst role.
“Many CEOs across the globe tell us their ability to innovate is hampered by their security concerns in the digital world,” noted Jeanne Beliveau-Dunn, vice president and general manager at Cisco Services, at the time. “This creates a big future demand for skill sets that don’t exist at scale today.” In addition to scholarships, Cisco has also been acquiring cybersecurity companies, including Sourcefire, which it snapped up for a hefty $2.7 billion ; OpenDNS, which it acquired for $635 million; and Lancope, which it brought on board for a cool $453 million.
Other companies are following similar plans. Microsoft, for example, which has been setting out to build the “intelligent cloud platform” since Nadella took over as CEO back in 2014, has been snapping up cybersecurity startups and launching dedicated facilities to help thwart online chicanery.
“There is a definite concern that jobs remain unfilled, ultimately resulting in a lack of resources to face current industry threats — of the information security workers surveyed, 66 percent reported having too few of workers to address current threats,” said (ISC)² CEO David Shearer. “We’re going to have to figure out how we communicate with each other, and the industry will have to learn what to do to attract, enable and retain the cybersecurity talent needed to combat today’s risks.” One solution to the human cybersecurity shortage is artificial intelligence, another area that is seeing significant investment. Last year, Cylance raised $100 million to help businesses protect themselves from zero-day attacks through automation. Others in the AI cybersecurity space include Fortscale , which uses big data analytics and machine learning to identify malicious user behavior; Jask and Darktrace are doing something similar.
The report also found that 87 percent of cybersecurity workers started their careers doing something different, which is juxtaposed against the 94 percent of hiring managers who indicated they were looking for staff with existing experience in the field. This hints at one possible crux of the recruitment problem: Leadership may not fully understand job requirements, according to the results of the GISWS report.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
" |
17,066 | 2,018 | "BlackBerry's QNX OS to anchor Baidu's Apollo autonomous driving platform | VentureBeat" | "https://venturebeat.com/2018/01/03/blackberrys-qnx-os-to-be-the-bedrock-of-baidus-apollo-autonomous-driving-platform" | "Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages BlackBerry’s QNX OS to anchor Baidu’s Apollo autonomous driving platform Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Learn more.
BlackBerry and Baidu have announced a collaboration through which the Canadian and Chinese companies will team up for a number of connected and autonomous vehicle initiatives.
The duo have signed a statement of intent to make BlackBerry’s QNX operating system the basis for Baidu’s previously announced Apollo autonomous driving platform.
As part of the tie-up, Baidu said it plans to integrate a number of its own software products into BlackBerry’s QNX Car infotaintment platform , including CarLife, which integrates connected cars with smartphones; Baidu’s DuerOS voice interaction system; and high-definition maps.
“We aim to provide automakers with a clear and fast path to fully autonomous vehicle production, with safety and security as top priorities,” said Li Zhenyu, general manager of Baidu’s intelligent driving group. “By integrating the BlackBerry QNX OS with the Apollo platform, we will enable carmakers to leap from prototype to production systems. Together, we will work toward a technological and commercial ecosystem for autonomous driving, intelligent connectivity, and intelligent traffic systems.” By way of a quick recap, Baidu opened a Silicon Valley arm dedicated to self-driving cars way back in 2016.
A year later, it launched its open-source Project Apollo platform with the goal of testing it on urban roads sometime in 2018 and moving to full autonomy on highways by 2020. In July of last, year Baidu unveiled the project’s first vehicle manufacturing partners, declaring Apollo to be the “ Android of the autonomous driving industry.
” The company now claims dozens of partners from the technology, automotive, and AI realms, including Ford, Intel, Nvidia, and Microsoft.
VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! BlackBerry acquired QNX from Harman International way back in 2010, and though the Unix-like OS was used in a number of mobile devices and formed the basis of BlackBerry 10, BlackBerry’s ultimate downfall in mobile hardware has guided QNX on a path toward the automotive realm.
Indeed, Ford ditched Microsoft Auto for QNX back in 2014, and the two companies expanded their partnership a couple of years later.
In late 2016, BlackBerry opened its very own autonomous vehicle research hub in Ottawa. It later signed up automotive giant Delphi , which committed to using BlackBerry QNX on its own autonomous driving platform.
Qualcomm , Denso , and Visteon are other recent examples of automotive wins for BlackBerry’s QNX.
It’s been clear for a while that BlackBerry is betting big on the automotive — more specifically, autonomous driving — industry. But as one of the world’s biggest technology and AI companies, Baidu represents a major win for BlackBerry as it continues to transition from being a hardware company to one focused on software.
“Joining forces with Baidu will enable us to explore integration opportunities for multiple vehicle subsystems, including ADAS, infotainment, gateways, and cloud services,” added John Wall, senior vice president and GM of BlackBerry QNX. “Baidu has made tremendous strides in Artificial Intelligence and deep learning. These advancements, paired with their high-definition maps and BlackBerry’s safety-critical embedded software and expertise in security, will be crucial ingredients for autonomous vehicles.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
" |