Dataset columns:
- source: string (2 classes)
- author: string (lengths 0-824)
- title: string (lengths 0-475)
- description: string (lengths 0-32.8k)
- url: string (lengths 13-713)
- urlToImage: string (lengths 0-2k)
- publishedAt: string (length 20)
- content: string (lengths 0-32.8k)
- category_nist: string (lengths 5-160)
- category: string (lengths 5-239)
- id: string (lengths 6-7)
- subreddit: string (lengths 2-21)
- score: int64 (0-30.2k)
- num_comments: int64 (0-2.13k)
- created_time: timestamp[ns]
- top_comments: string (lengths 3-32.7k)
news
Apundir
101° - Machine Learning The complete Math Guide to Master Data Science with Python & Developing Artificial Intelligence Kindle Edition free Amazon
Machine Learning is rapidly changing the world and it is the way of the future of human technology. Art, information, processes, calculations, emotions will be rapidly learned and discovered by machines.

Do you want to learn more about the world of Machine Learning and its applications? Would you like to improve and refine your Python skills? Would you like to become computer savvy? If the answer is "YES", then keep reading. In this complete and exhaustive collection of two books you will discover:

- What Machine Learning and Artificial Intelligence mean
- Machine Learning evolution
- How to automate Machine Learning effectively
- Python programming and advanced programming techniques
- Everything you need to know about neural networks and data pipelines
- The connection between Machine Learning and Big Data
- The steps of data analysis
- Predictive analysis with Data Science and data analysis
- What are the best libraries for Machine Learning in Python
- … & much more!
https://www.hotukdeals.com/deals/kindle-edition-machine-learning-the-complete-math-guide-to-master-data-science-with-python-and-developing-artificial-intelligence-3782412
https://images.hotukdeal…60/3782412_1.jpg
2021-08-23T12:48:28Z
Content Synthesis/Digital Assistance/Information Retrieval Or Search
Computer and Mathematical/Education, Training, and Library
null
null
null
null
null
null
news
Wall Street Reporter
Next Super Stocks on the Move: Logiq, Reliq Health Tech, AI/ML Innovations and ESE Entertainment. Emerging Leaders in E-Sports, HealthTech, E-Commerce and AdTech
NEW YORK, Sept. 02, 2021 (GLOBE NEWSWIRE) -- Wall Street Reporter, the trusted name in financial news since 1843, is highlighting the latest CEO comments and...
https://finance.yahoo.com/news/next-super-stocks-move-logiq-132700800.html
https://s.yimg.com/cv/ap…go-1200x1200.png
2021-09-02T13:27:00Z
NEW YORK, Sept. 02, 2021 (GLOBE NEWSWIRE) -- Wall Street Reporter, the trusted name in financial news since 1843, is highlighting the latest CEO comments and news from companies recently presenting at its highly acclaimed NEXT SUPER STOCK livestream investor conferences, and investor LiveChats on social media streams. Over 170,000 investors have participated in Wall Street Reporter's livestream events in the past 90 days.

AI/ML Innovations (OTC: AIMLF) (CSE: AIML) Chairman, Tim Daniels: "Mental Health App Expands AI/ML Digital Health Ecosystem - Targeting Multi-Billion Dollar Market Opportunities"

NEXT SUPER STOCK conference presenter AI/ML Innovations (OTC: AIMLF) (CSE: AIML) is rapidly expanding its portfolio of HealthTech assets. AIMLF chairman Tim Daniels updated investors on the company's latest digital healthcare growth initiatives, which now include Tech2Health, a European mental health app innovator. Tech2Health is positioned for explosive revenue growth as European healthcare mandates now provide about 2,500 Euro per patient annually for mental wellness. Tech2Health has just signed with a French multinational manufacturer to provide mental wellness support to their 170,000 employees globally, and additional Enterprise contracts are in the pipeline.

Watch AI/ML Innovations (OTC: AIMLF) (CSE: AIML) NEXT SUPER STOCK Video: https://bit.ly/3dAI6k9

AIMLF Chairman Tim Daniels shared with investors how AIMLF is expanding its global digital healthcare footprint with synergistic acquisitions of innovative HealthTech companies. Tim also updated investors on progress at AIMLF's HealthGauge platform, which uses AI and machine learning for applications ranging from remote patient monitoring to fitness/health tracking and more. AIMLF's focus is on scaling revenue growth by offering its services to enterprise and consumers via a SaaS recurring revenue subscription model.
Tim Daniels also updated investors on AIML's growing pipeline of M&A opportunities in the HealthTech space, which could have a positive impact on maximizing shareholder value in coming months.

Watch AI/ML Innovations (OTC: AIMLF) (CSE: AIML) NEXT SUPER STOCK Video: https://bit.ly/3dAI6k9

Logiq, Inc. (OTC: LGIQ) (NEO: LGIQ) President, Brent Suen: On Path to $100 Million Revenues

NEXT SUPER STOCK conference presenter Logiq, Inc. (OTC: LGIQ) (NEO: LGIQ) President Brent Suen recently shared with investors how LGIQ is now positioned to more than double revenues - to a potential $100 million run rate - within the next 18 months, fueled by M&A, organic growth and increasing profit margins. LGIQ enables global ecommerce and fintech services for small to medium size businesses worldwide. LGIQ's DataLogiq AI-driven adtech business is expected to be a major driver of revenue growth and profit margin expansion in the next 12 months, as more digital marketing agencies are joining the platform.

September 2: LGIQ is presenting at Wall Street Reporter's NEXT SUPER STOCK livestream at 1:00PM EST. Join here: https://bit.ly/2PX0SpH

Watch (OTC: LGIQ) (NEO: LGIQ) NEXT SUPER STOCK video: https://bit.ly/3kafujX

Brent Suen articulated how LGIQ has compelling upside, based on valuation comparables to its peers in the e-commerce/fintech space. While LGIQ trades at about 2X revenues, its peers such as SHOP, SE, STNE, JMIA and others are often trading at 20-30X revenues. An additional upside catalyst for investors is the potential spinout of LGIQ's Indonesia fintech and ecommerce business as a stand-alone public entity.

September 2: LGIQ is presenting at Wall Street Reporter's NEXT SUPER STOCK livestream at 1:00PM EST.
Join here: https://bit.ly/2PX0SpH

Watch (OTC: LGIQ) (NEO: LGIQ) NEXT SUPER STOCK video: https://bit.ly/3kafujX

ESE Entertainment (TSX.V: ESE) (OTC: ENTEF) CEO Konrad Wasiela: On Track for $100 Million E-Sports Revenues

ESE Entertainment (TSX.V: ESE) (OTC: ENTEF) CEO Konrad Wasiela, a featured presenter at Wall Street Reporter's NEXT SUPER STOCK investors livestream conference, recently updated investors on his goal of building ESE into a billion dollar global enterprise. Wasiela shared that ESE now has a growing M&A pipeline with over $100 million annual revenues and is expected to close a significant number of these potential transactions in the coming months. ENTEF just announced the acquisition of e-sports company Auto Simulation Limited T/A Digital Motorsports, an Ireland-based provider of advanced simulation racing (sim racing) infrastructure, technology, and support. Sim racing is one of the hottest growth categories in the multi-billion dollar global e-sports market.

ENTEF recently closed the acquisition of e-sports and gaming infrastructure company WPG. In 2020, WPG's assets generated revenue in excess of C$14,000,000. This transaction is anticipated to make ENTEF one of the largest esports infrastructure companies in the world, bridging esports companies with their fans and customers.

Watch ESE (OTC: ENTEF) (TSX.V: ESE) Next Super Stock livestream video: https://bit.ly/3tdhcVV

In his interview with Wall Street Reporter, ESE CEO Konrad Wasiela says the company is now ready to scale - expanding its global footprint, with new partnerships with global brands like Porsche, driving revenue growth with an aggressive focus on top line sales and margin expansion, and M&A opportunities.
ESE is now rapidly expanding, with multiple revenue streams including E-Sports infrastructure software powering global tournaments, exclusive digital media distribution, broadcast rights, and owning world-class leagues and teams, including its K1CK global E-Sports franchise.

Watch ESE (OTC: ENTEF) (TSX.V: ESE) Next Super Stock livestream video: https://bit.ly/3tdhcVV

Reliq Health Technologies (OTC: RQHTF) (TSX.V: RHT) CEO Lisa Crossley: 2021 is Breakout Year for Reliq Telehealth Platform

Reliq Health Technologies (OTC: RQHTF) is now at an inflection point for explosive revenue growth and profitability, shared CEO Lisa Crossley during a recent presentation at Wall Street Reporter's NEXT SUPER STOCK livestream. RQHTF's iUGO telehealth remote patient monitoring platform has gained significant traction over the past 6 months, and now has 200,000 patients under contract to be onboarded over the next 18-24 months - which represents over $120 Million in recurring annual revenue at full deployment.

Watch Reliq Health Tech (OTC: RQHTF) (TSX.V: RHT) NEXT SUPER STOCK Video: https://bit.ly/3BcFkLi

RQHTF has just turned the corner to profitability, and revenues are expected to reach $2 million per month, hitting a $24 million run rate by the end of December - and keep increasing as more contracted patients are onboarded. Lisa added that RQHTF is now starting to throw off significant cash flow, enabling the company to fund growth internally, without the need for capital raises in the near future. A NASDAQ uplisting remains a possibility for 2022.

Lisa explained how new patient contract growth is now snowballing - powered by expanded Medicare and Medicaid coverage and reimbursement amounts for virtual care services like RQHTF provides. RQHTF's powerful iUGO telemedicine platform supports care coordination and community-based virtual healthcare, allows complex patients to receive high quality care at home, improving health outcomes and reducing the cost of care delivery.
iUGO Care provides real-time access to remote patient monitoring data, allowing for timely interventions by the care team to prevent costly hospital readmissions and ER visits.

Watch Reliq Health Tech (OTC: RQHTF) (TSX.V: RHT) NEXT SUPER STOCK Video: https://bit.ly/3BcFkLi

WALL STREET REPORTER

Wall Street Reporter (Est. 1843) is the leading financial news provider, focused on giving investors direct access to CEOs of promising, publicly-traded companies, and market experts. www.WallStreetReporter.com. Nothing in this news summary shall be construed as investment advice. Quotes/content may be edited for brevity and context. Full disclaimer, and relevant SEC 17B disclosures here: http://bit.ly/39kkE7K

About Wall Street Reporter's Next Super Stock conference:
Wall Street Reporter's NEXT SUPER STOCK Live! conference is dedicated to featuring select companies that have near-term catalysts in place which can drive transformational growth (and stock appreciation) in the months ahead. Click here to join the next livestream event: https://www.wallstreetreporter.com/next-superstock-online-investor-conference/

CONTACT:
WALL STREET REPORTER
(212) 871-2057 ext 7
www.WallStreetReporter.com
Content Synthesis/Decision Making/Recommendation
Healthcare Practitioners and Support/Management/Business and Financial Operations
null
null
null
null
null
null
news
How, where and why telco is using enterprise open source
According to "The State of Enterprise Open Source" report Red Hat published earlier this year, open source will continue to play an important role in the future of telecommunications. Let’s see how it’s positioning telecommunications providers to keep up with their technology revolution.
https://www.redhat.com/en/blog/how-where-and-why-telco-using-enterprise-open-source
https://www.redhat.com/c…ng?itok=G20sno56
2021-08-23T04:00:00Z
While the telecommunications industry is familiar with enterprise open source (95% of our survey respondents are already using it), it also stands at an inflection point with the rise of edge computing, artificial intelligence and machine learning (AI/ML), and the rapid deployment of 5G.

According to "The State of Enterprise Open Source" report we published earlier this year, open source will continue to play an important role in the future of telecommunications. With data collected from 13 countries, the report shows a picture of how, where, and why IT leaders across the globe and a range of sectors use enterprise open source. Let's see how it's positioning telecommunications providers to keep up with their technology revolution.

Security is a big factor in choosing enterprise open source

Across the industry, enterprise open source is being used in Application development (66%), IT infrastructure modernization (63%) and Digital transformation (59%). As service providers modernize their infrastructure to provide connectivity from the core all the way out to the edge, they need secure pathways to scale efficiently.

Telco IT leaders cited "better security" and the "ability to safely leverage open source technologies" as some of their top reasons for choosing enterprise open source, and 74% say enterprise open source is a key part of their organization's security strategy.

Open source's expanded use for emerging technologies

Many respondents expect the use of enterprise open source for emerging technology to increase in the next two years, with two-thirds of IT leaders identifying AI/ML as a key growth area, and to an even greater extent, edge computing / Internet of Things (IoT).

Kubernetes' role in Telco's cloud-native strategy

The industry is shifting to containers and capitalizing on virtualized RAN in its process of revolutionizing radio access networks (RAN). In our survey, an overwhelming 94% of respondents say Kubernetes is important to their cloud-native application strategy.
Additionally, a majority of telco leaders indicate they prefer to use multiple cloud vendors. Red Hat's partner ecosystem and our work to drive open source innovation can help communications service providers find the flexibility they're looking for. Not only is flexibility an important factor for telco leaders, so is open source participation: 85% say they are more likely to select a vendor who contributes to the open source community.

Want more insights from the telecommunications industry? Get highlights in "The State of Enterprise Open Source: Telecommunications" infographic.
Unknown
Management/Computer and Mathematical
null
null
null
null
null
null
news
Dario D'Amico
Visualize and animate flow in MapView with a custom WebGL layer
Learn how to animate streamlines using WebGL and a custom layer.
https://www.esri.com/arcgis-blog/products/js-api-arcgis/developers/visualize-and-animate-flow-in-mapview-with-a-custom-webgl-layer/
https://www.esri.com/arc…1/08/826x465.png
2021-09-01T00:30:08Z
Introduction

This article discusses visualizing wind and water currents through animated streamlines; the out-of-the-box capabilities of the ArcGIS API for JavaScript (ArcGIS JS API) are combined with custom WebGL code to create compelling animated visualizations of real-world atmospheric and marine data.

See it live or check out the source code on GitHub.

Custom layers are an advanced topic; familiarity with WebGL and custom WebGL layer views is recommended. A good place to get started with extending the ArcGIS JS API with custom WebGL layer views is the official SDK sample. And remember that your fellow developers at community.esri.com are always happy to help!

With that said… everyone deserves amazing maps! Streamlines are a natural drawing style for flow datasets, and our team is considering adding them to the ArcGIS JS API. Join the discussion on community.esri.com and share your ideas on how to bring this awesomeness to a larger audience!

The power of animations

Awesome datasets need awesome visualizations. The ArcGIS JS API ships with 2D and 3D support and a variety of layer types, renderers, effects, and blend modes that should cover the requirements of most users most of the time.

In this blog post, we focus on animated visualizations; animations can capture and communicate the dynamic nature of certain datasets more effectively than static graphics. The ArcGIS JS API supports several forms of animation out-of-the-box, and my colleague Anne Fitz penned a great writeup covering several useful techniques that are applicable in a broad range of situations. With that said, certain applications call for a more customized experience; this is where custom WebGL layers come into play.

The images below show the same area of the map rendered in three different ways. For this article, we are focusing on an imagery tile layer containing wind magnitude and direction for the continental United States.

Predefined ImageryTileLayer with "raster-stretch" renderer (left).
This option does a good job at visualizing wind speed, but the direction information is lost.

Predefined ImageryTileLayer with "vector-field" renderer (center). Using arrow symbols and size and rotation visual variables, this renderer can visualize both aspects of the wind. ImageryTileLayer support for this renderer is shipping with version 4.21 of the ArcGIS JS API; before that, this new functionality is available in the next build.

Custom WebGL layer displaying animated flow lines, as described in this article (right). Our custom visualization provides a more intuitive representation of wind currents; the different animation speed of the lines maps to the underlying concept of speed magnitude, and the continuous nature of the visualization makes it easier to spot patterns in the data, like that rotational flow near the Canadian border. Also, it looks pretty cool.

This article describes in depth the implementation of a custom WebGL layer that displays animated streamlines. A streamline is the path that a massless particle would take when immersed in the fluid. Please note that the focus of the article is on the flow visualization algorithm and integration with the ArcGIS JS API; whether the particular technique is suitable for a given dataset, audience, or application needs to be evaluated by a domain expert, and the implementation modified as needed.

Load, transform and render

As with anything that shows up on a computer screen, GIS visualizations are the result of:

1. Loading the data;
2. Optionally, transforming/processing/preparing the data;
3. Rendering the data!

Each of the predefined layer types that ship with the ArcGIS JS API is a software component that bundles together two capabilities:

1. The ability to access data, either from local memory or from a remote source;
2. The ability to render the retrieved data, often both in a 2D MapView and a 3D SceneView.

In the ArcGIS JS API, the implementation of a layer type consists of a layer class and one or two layer view classes.
With relation to the three phases discussed above, the layer is mostly responsible for accessing the data (1), while 2D layer views and 3D layer views take care of the rendering (3). Data transformation (2) can be required for different reasons and, to some degree, it is carried out both by layers and layer views.

Polylines with timestamps

At the core of a visualization such as the one that we are setting out to build, there is the concept of an m-aware polyline feature. This is a polyline where each vertex, in addition to its coordinates, carries one or more m-values. An m-value can be thought of as a position-dependent attribute of the polyline that varies smoothly along its path; for vertices the m-value is given explicitly, while for any point between two vertices it can be obtained by linearly interpolating the values at the vertices.

A common application of m-aware polylines is representing paths or, as in this article, streamlines; in this case the m-values are the timestamps at which each vertex is visited by the particle.

If your data is already stored in an m-aware polyline feature layer, FeatureLayer.queryFeatures() can be used to retrieve the polylines and access the m-value information.

Catching wind

In this article we will not ingest polyline features; we will build the polylines starting from flow data contained in an ImageryTileLayer. This layer type is similar to ImageryLayer in the sense that it contains raster LERC2D data, but the way that the data is stored on the server is different; imagery tile layers store data in static files called tiles, which are cloud- and cache-friendly. Each tile contains many pixels, and each pixel contains one or many bands of data. In the case of an imagery tile layer that stores wind data, there are two bands that together describe wind direction and speed.

As is the case with any other predefined layer, an ImageryTileLayer can be added to a map and displayed by a MapView.
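As a brief aside, the m-value interpolation described under "Polylines with timestamps" can be sketched in a few lines of plain JavaScript; the helper name and vertex shape below are mine, not part of the ArcGIS JS API:

```javascript
// Sketch (not ArcGIS API code): given an m-aware polyline as an array
// of { x, y, m } vertices, recover the m-value at an arbitrary distance
// along the path by linearly interpolating between the two vertices
// that surround that distance.
function mAtDistance(vertices, distance) {
  let traveled = 0;
  for (let i = 0; i < vertices.length - 1; i++) {
    const a = vertices[i], b = vertices[i + 1];
    const len = Math.hypot(b.x - a.x, b.y - a.y);
    if (traveled + len >= distance) {
      const f = (distance - traveled) / len; // fraction along this segment
      return a.m + (b.m - a.m) * f;          // linear interpolation
    }
    traveled += len;
  }
  return vertices[vertices.length - 1].m;    // past the end: clamp
}
```

For streamlines, where m is a timestamp, this recovers the time at which the particle passed any point along the path.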
The simplest way to visualize an imagery tile layer is with a "raster-stretch" renderer. The default settings are usually sensible, and the resulting visuals are easy to interpret with the help of a legend. See the "raster-stretch" renderer in CodePen.

We will focus our attention on imagery tile layers that store flow information, such as wind and marine currents. For these kinds of layers, the "vector-field" renderer is supported and used as the default renderer. See the "vector-field" renderer in CodePen.

Watch this space!

The input to a rendering algorithm is visual in nature, or at the very least can be interpreted as visual. For instance, positions are expressed as numbers and the rendering algorithm is responsible for drawing geographic entities in the right places according to those numbers. However, numbers by themselves are meaningless, and they only get their positional semantics from being associated with particular spaces.

In the course of this article we are going to refer to three spaces; each space plays its own role in the implementation of the streamlines visualization.

Map space

Map space is used to locate the data on the server. In the case of the wind imagery tile layer that we are considering in this article, it is given by the EPSG:4326 reference system. Map space is right-handed and its coordinates are expressed in map units.

Screen space

Screen space is the left-handed frame where +X is right, +Y is down, the origin is in the upper-left corner of the MapView, and the lower-right corner has for coordinates the width and height of the MapView. Screen space coordinates are expressed in pixels.

Particle space (aka model space, aka object space, aka local space)

The animated lines are the result of a particle simulation. The position of a particle as it is swept by the simulated wind traces the polyline that will be later animated. The position of a particle is expressed in particle space.
The units of particle space are called cells and are related to pixels by a multiplicative constant called cellSize. In the figure above we assume a cellSize of 10, so that a 1920x1080-pixel MapView results in a 192x108 particle space.

We will see that streamlines ultimately will be rendered using triangle meshes; the coordinates of mesh vertices will be expressed in particle space. In computer graphics, this space is often called model space, object space, or sometimes even local space.

Vector fields

A field is a position-dependent property of a space. Temperature, pressure, and humidity are all examples of scalar fields common in meteorology. In other words, a scalar field maps a point to a numeric value. In GIS systems scalar fields are often represented using one band of an imagery tile layer.

Some properties are multidimensional in nature, and they are called vector fields. This article focuses on a specific type of vector field, useful to represent wind data, called a velocity field; a velocity field associates a velocity vector to each point of a space. A velocity is a directional speed and can be represented using two bands of an imagery tile layer. There are two different ways to encode 2D velocities.

Magnitude and direction

This is reported as dataType: esriImageServiceDataTypeVector-MagDir in the raster info section of the layer. The direction of the wind is measured in degrees, with 0 being North and 90 being East. The magnitude is a speed.

UV

This is reported as dataType: esriImageServiceDataTypeVector-UV in the raster info section of the layer. U is the west-east component of the speed and V is the south-north component.

Fetching the LERC2D wind data

In the ArcGIS JS API, layers do not have to be added to a map to be useful; several layer types offer data fetching capabilities. This is especially remarkable for ImageryTileLayer.fetchPixels(), which enables querying arbitrary extents in spite of the fact that the queried data is broken into tiles on the CDN.
The implementation of ImageryTileLayer takes care of downloading the (possibly cached) tiles and stitches them together into a single image that spans the required extent.

An instance of ImageryTileLayer can be used to fetch the data that a custom WebGL layer view needs. Every time that the MapView becomes stationary, the custom WebGL layer view triggers a refresh of the visualization. The current extent of the visualization is passed to ImageryTileLayer.fetchPixels() together with the size of the desired output LERC2D image. Instead of querying an image of the same size as the MapView, we ask for an image downsampled by a factor of cellSize, which is fixed at 5 in the current implementation. As an example, if the MapView was 1280×720, the code would fetch a 256×144 image from the image server. We do this to save bandwidth, reduce processing time, and regularize the data; full-resolution wind data may have high-frequency components that are likely to destabilize the simulator. The downsampled image represents the wind in particle space, and each data element is called a cell.

The rawCellData stores the two bands of the wind separately, in two distinct, row-major, equally sized matrices. Whether these two bands are MagDir or UV can be determined by examining the rasterInfo.dataType property. We chose to normalize the values to particle space UV, which is more convenient for the simulation, and interleave the two bands into a single matrix. Note that the original values would have probably been expressed in knots or meters per second, but from now on we will assume that all velocities are in particle space.

There are a couple of things worth mentioning about the conversion formulas. In the "MagDir to UV" case the magnitude is multiplied by sine and cosine factors to get the U and V components respectively.
Note how the direction is first converted to radians, and then it is delayed by Math.PI / 2; this is needed because in particle space positive angles rotate the +X axis over the +Y axis, while in map space positive angles rotate the +Y axis (the "North") over the +X axis (the "East"). These two conventions are both clockwise, but angles in map space are a quarter of a full circle late.

In the "UV to UV" case, the U value can be copied but the V value needs to be flipped, because in particle space +Y goes down while in map space it goes up.

The output of the fragment of code above is the data variable, which is essentially a discretized velocity field in particle space; i.e., each cell in particle space is mapped to a horizontal velocity (+X is right) and a vertical velocity (+Y is down), also in particle space.

In the rest of the article, FlowData denotes a type that holds the data variable, together with the grid dimensions and the cellSize factor. FlowData is the input to the particle simulation code that traces the streamlines in particle space.

Pro tip: smooth the data

We already smoothed the flow data by requesting from the image server a lower-resolution image than the MapView itself. In addition to this, for certain datasets an explicit smoothing pass using a separable Gaussian kernel can help obtain more robust and aesthetically pleasing results.

Turning the particle space vector velocity field into a function

For ease of use by the particle simulator, we wrap the flowData.data typed array into a closure; the closure takes a point (x, y) in particle space and returns a velocity, again in particle space.

Pro tip: adopt bilinear interpolation

A simulated particle will, more often than not, have a fractional position, e.g. (42.23, 89.54). With such input, the field closure defined above would take the velocity in cell (43, 90) and return that.
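The "MagDir to UV" and "UV to UV" conversions described above can be sketched as follows; this is my own minimal reconstruction, not the article's actual snippet, and the scaling from knots or m/s to particle-space cells is deliberately omitted:

```javascript
// Convert a map-space (magnitude, direction) sample to particle-space
// UV (+X right, +Y down). Direction is degrees clockwise from North,
// per the article; converting to radians and delaying by PI/2 aligns
// the map-space angle convention with the particle-space one.
function magDirToUV(mag, dirDegrees) {
  const angle = (dirDegrees * Math.PI) / 180 - Math.PI / 2;
  return [mag * Math.cos(angle), mag * Math.sin(angle)];
}

// Convert a map-space (U, V) sample to particle-space UV. U copies
// over; V flips because map-space +V points north (up) while
// particle-space +Y points down.
function uvToUV(u, v) {
  return [u, -v];
}
```

For example, a wind blowing toward the East (direction 90°) becomes a pure +X velocity, and a wind blowing toward the North (direction 0°) becomes a negative Y velocity, i.e. upward on screen.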
A better way of coming up with a value for fractional positions is to use bilinear interpolation; this allows the velocity field to vary smoothly across a single cell, and could be required to get acceptable results from the particle simulation when the wind data is low resolution.

Bilinear interpolation works by selecting the four closest cells; first, two adjacent pairs of cells are interpolated vertically, based on the distance of the desired point from the cells' centers, and then the results of the first interpolation are interpolated again horizontally. See bilinear interpolation for more details.

Particle simulation

Particles are immersed in the closure-wrapped FlowData field at pseudo-random (but repeatable, using a seeded generator) positions and simulated using the trace() function.

Simulating the movement of a particle in a velocity field vf is an iterative process, for which there are a couple of different approaches. The simplest one uses a fixed time step and increments the position of the particle by an amount proportional to the velocity at that point and the time step. Each iteration produces a new vertex of the polyline and, together with the previous vertex, a new segment.

Fixing the time step is a perfectly sound approach, but for the demo app we opted for a fixed segment length instead. This approach causes line lengths to vary less and also guarantees that adjacent vertices are not too close to each other; we think that both these properties are desirable for this kind of visualization, but the reader is encouraged to experiment with different iteration strategies.

In the demo app, each particle is updated 30 times using a for loop and the segment length is 1 cell; together with a cellSize of 5 pixels, this leads to each individual polyline having a length of about 150 pixels.
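Putting the two ideas together, here is a sketch of a bilinear field closure and a fixed-segment-length trace loop. Function names, the attribute layout, and the termination threshold are mine; the article's actual implementation may differ in detail:

```javascript
// Wrap a row-major, 2-band interleaved Float32Array (u, v per cell)
// into a closure that samples the velocity field with bilinear
// interpolation, as suggested in the pro tip above.
function createField(data, columns, rows) {
  // Read one cell by integer index, clamped to the grid.
  function cell(cx, cy) {
    const x = Math.min(Math.max(cx, 0), columns - 1);
    const y = Math.min(Math.max(cy, 0), rows - 1);
    const i = 2 * (y * columns + x);
    return [data[i], data[i + 1]];
  }
  return function field(x, y) {
    const x0 = Math.floor(x), y0 = Math.floor(y);
    const fx = x - x0, fy = y - y0;
    const [u00, v00] = cell(x0, y0), [u10, v10] = cell(x0 + 1, y0);
    const [u01, v01] = cell(x0, y0 + 1), [u11, v11] = cell(x0 + 1, y0 + 1);
    // Interpolate horizontally along the two rows, then vertically.
    const u0 = u00 * (1 - fx) + u10 * fx, u1 = u01 * (1 - fx) + u11 * fx;
    const v0 = v00 * (1 - fx) + v10 * fx, v1 = v01 * (1 - fx) + v11 * fx;
    return [u0 * (1 - fy) + u1 * fy, v0 * (1 - fy) + v1 * fy];
  };
}

// Trace a streamline with a fixed segment length: every iteration
// advances one cell along the local velocity direction; the time to
// cover that segment is length / speed, which becomes the vertex's
// timestamp (the m-value).
function trace(field, x, y, { steps = 30, segmentLength = 1, minSpeed = 0.001 } = {}) {
  const vertices = [{ x, y, t: 0 }];
  let t = 0;
  for (let i = 0; i < steps; i++) {
    const [u, v] = field(x, y);
    const speed = Math.hypot(u, v);
    if (speed < minSpeed) break; // wind too weak: terminate this particle
    x += (u / speed) * segmentLength;
    y += (v / speed) * segmentLength;
    t += segmentLength / speed;
    vertices.push({ x, y, t });
  }
  return vertices;
}
```

With segmentLength fixed at 1 cell and 30 steps, each traced polyline spans about 30 cells, matching the demo app's figures.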
At each iteration the speed of the particle is tested and, if found to be too low (because the particle entered a cell where the wind is weak), the simulation for that particle is terminated.

Building the triangle mesh

WebGL has poor built-in support for line drawing; all the line rendering capabilities that ship with the ArcGIS JS API are based on triangle meshes, and for this custom flow WebGL layer we are taking a similar approach. For more information, check out the SDK sample about animated lines. Also, the reader should be aware that the ArcGIS JS API ships with tessellation helper methods that can convert arbitrary geometries into triangle meshes.

For representing streamlines our visual requirements are quite basic and we are going to stick with a very simple method; each segment of a polyline is represented by a thin rectangle, at most 2-3 pixels wide, lying along the original polyline; the original polyline becomes the centerline of this "rectangle chain". Each rectangle is made of 4 vertices and 6 indices, and renders as a pair of right gl.TRIANGLES sharing their hypotenuse. This approach leads to gaps and regions of overlap, but it is a very fast and robust algorithm, and for thin lines the artifacts are not noticeable.

In the next paragraphs, we call a vertex of the original polyline, as produced by the particle simulation algorithm, a polyline vertex. Such a vertex is extruded in directions perpendicular to the centerline to produce mesh vertices; these are the vertices that are written to the WebGL vertex buffer.

The most powerful feature of low-level, GPU-accelerated graphics APIs like WebGL, which really sets them apart from Canvas2D, GDI+ and all the other higher-level solutions, is the ability to define custom visual properties and use them in custom programs called shaders. This enables applications to describe complex scenes, with many different, possibly animated, objects, and render them with little assistance from the CPU.
This approach greatly reduces the load on the CPU, and the GPU is free to run at full speed.

There are going to be about a thousand streamlines in a typical flow visualization; we want to render all of them using a single draw call. To achieve this, we need to store all triangle meshes for all polylines in the same vertex and index buffers. The code snippet below does just that. Note that since this could be a long-running process, the algorithm is designed to transfer control back to the event loop once in a while so that the UI thread does not freeze (lines 12-15).

The custom properties that we need to define fall into 4 categories.

Per-frame

These properties are shared by all polylines and are updated every frame. They are passed to the shaders using uniform values. Since they do not need to be stored in the vertex buffer, we will talk about them when discussing rendering.

Per-feature

These properties are associated with the polyline itself. The only way to implement this kind of property in WebGL 1, while at the same time maintaining the ability to draw all polylines with a single draw call, is to actually make them vertex attributes, and repeat the same values for all vertices of the same feature. There are 2 per-feature properties:

– totalTime is the total runtime in seconds for the polyline. The fragment shader needs this value at every fragment in order to properly loop the animation.
– random is a pseudo-random value, again needed in the fragment shader to introduce some temporal differences in the animation, so that streamlines with the same length do not end up synchronized.

They are highlighted in orange in the code snippet above.

Per-polyline-vertex

These properties are associated with the polyline vertex itself.
There are 2 per-polyline-vertex properties:

– x, y, the vertex position, in particle space units, i.e., cells.
– t, the vertex timestamp.

They are marked in green in the code snippet.

Per-mesh-vertex

Each polyline vertex is extruded into two mesh vertices, one per side of the polyline. There are 2 per-mesh-vertex properties:

– ex, ey is the extrusion vector. This is computed at lines 32-34 by normalizing the segment vector and rotating it 90 degrees. Being normalized, its magnitude is meaningless, but you can imagine it being expressed in particle space units, i.e., cells.
– a +1/-1 constant that we call side, which identifies an extruded vertex as lying on the right side of the centerline, or on the left side.

They are marked in blue in the code snippet. For more information about constructing and rendering line meshes, see the SDK sample about animated lines.

Rendering

Now it is time to put the GPU to work and render our beautiful streamlines!

Before discussing rendering there is one additional space that needs to be introduced; it is called clip space and, just like screen space, describes positions on the screen, but using a different coordinate system. In clip space, the origin is in the center and the drawing area is seen as a 2x2 rectangle, where +X is right and +Y is up. This space is the space of the gl_Position variable.

The rendering process takes the particle space mesh and draws it to the screen. As the user pans and zooms, the visualization needs to be repositioned to reflect the change in viewpoint, until a new mesh is ready that again can be rendered full screen.

Vertex shader

The vertex shader converts the coordinates of mesh vertices from particle space to screen space using the u_ScreenFromLocal matrix, extrudes them according to the extrusion vector, and then transforms everything to clip space using the u_ClipFromScreen matrix.
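The screen-to-clip conversion that a u_ClipFromScreen-style matrix encodes amounts to a simple affine remapping. Written out as a plain function (a sketch of the math, not the shader code):

```typescript
// Convert a screen-space point (origin top-left, +Y down, units pixels)
// to clip space (origin center, 2x2 square, +X right, +Y up).
function screenToClip(
  x: number, y: number,
  width: number, height: number
): [number, number] {
  return [
    (x / width) * 2 - 1,  // left edge -> -1, right edge -> +1
    1 - (y / height) * 2  // top edge -> +1, bottom edge -> -1
  ];
}
```

The Y axis flips sign because screen space grows downward while clip space grows upward; this is the kind of detail the matrix takes care of so the shader can stay a single multiply.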
Note how the extrusion vectors are rotated by the u_Rotation matrix and scaled by half the line width, but are not subject to the same transformation that particle space coordinates go through, which also includes a scale factor when zooming in and out; this separation between positions and extrusion vectors is responsible for the anti-zoom behavior of lines, which always exhibit the same width. The value u_PixelRatio is used to always display lines of the desired width, even when the DPI of the screen is very high or very low. All other attributes are passed verbatim to the fragment shader using varying variables.

Fragment shader

The fragment shader creates and animates the trail effect. It does so by taking advantage of the fact that each fragment is associated with a timestamp, computed automatically by the GPU interpolator based on the timestamps at the two closest vertices. The next snippet shows the fragment shader source.

The base color of the streamlines is taken from the uniform u_Color. The fragment shader modifies the opacity of the fragments to implement the animated trail effect. A fragment tends to be opaque when its timestamp is close to, but not greater than, the current time, which is passed to the shader as the uniform u_Time; this is done at lines 16-23 using an exponential function applied to a periodized time-dependent signal. A fragment is also more opaque on the centerline than near the edges of the rectangle; this effect is applied at line 14 by taking advantage of the fact that the a_Side attribute has an absolute value of 1 near the edges, and 0 on the centerline. Finally, at line 25 the output color is premultiplied, because MapView will not composite the layer correctly otherwise.

Configuring WebGL and issuing the draw call

We are ready for rendering!
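In outline, the render function reduces to the following sketch. The MinimalGL interface lists only the WebGL entry points used here, and the vertex layout and uniform uploads (which account for most of the real code) are elided; all names are illustrative rather than the repository's actual code.

```typescript
// The subset of WebGLRenderingContext this sketch relies on.
interface MinimalGL {
  ARRAY_BUFFER: number;
  ELEMENT_ARRAY_BUFFER: number;
  TRIANGLES: number;
  UNSIGNED_SHORT: number;
  BLEND: number;
  ONE: number;
  ONE_MINUS_SRC_ALPHA: number;
  bindBuffer(target: number, buffer: unknown): void;
  useProgram(program: unknown): void;
  enable(capability: number): void;
  blendFunc(srcFactor: number, dstFactor: number): void;
  drawElements(mode: number, count: number, type: number, offset: number): void;
}

function drawStreamlines(
  gl: MinimalGL,
  program: unknown,
  vertexBuffer: unknown,
  indexBuffer: unknown,
  indexCount: number
): void {
  // Bind the mesh that holds every streamline.
  gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer);
  gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, indexBuffer);

  // Configure the shader program; the real code also uploads
  // u_ClipFromScreen, u_ScreenFromLocal, u_Rotation, u_Color and u_Time,
  // and sets up the vertex attribute pointers.
  gl.useProgram(program);

  // Enable premultiplied-alpha blending, as MapView compositing expects.
  gl.enable(gl.BLEND);
  gl.blendFunc(gl.ONE, gl.ONE_MINUS_SRC_ALPHA);

  // Draw all the streamlines at once.
  gl.drawElements(gl.TRIANGLES, indexCount, gl.UNSIGNED_SHORT, 0);
}
```

The (ONE, ONE_MINUS_SRC_ALPHA) blend factor pair is the standard choice for premultiplied alpha, matching the premultiplication performed at the end of the fragment shader.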
The rendering algorithm consists of about 80 lines of WebGL state setting and a single draw call.

– Bind the mesh (lines 7-23).
– Build the u_ClipFromScreen matrix, which transforms from screen space to clip space (lines 27-33).
– Build the u_Rotation matrix, used to rotate extrusion vectors (lines 35-36).
– Build the u_ScreenFromLocal matrix, which transforms from particle space to screen space (lines 38-49).
– Configure the shader program (lines 51-76).
– Enable premultiplied alpha (lines 78-79).
– Draw all the streamlines at once (line 81).

Putting everything together

In the source repository the code is chiefly organized around two packages: core and flow.

core

The core package contains generic classes and utilities that are useful for creating custom visualizations; it provides a simpler abstraction on top of BaseLayerViewGL2D and implements the resource loading/unloading lifecycle. It contains the following modules.

– types. Type definitions and interfaces used by the entire codebase. Most importantly, it introduces the concepts of global resources and local resources. Global resources are loaded at startup and do not need to be updated when the extent changes; local resources are tied to a particular extent and need to be destroyed and reloaded as the user navigates the map.
– settings. Constants that define some of the built-in behaviors of the core package; in a real-world app these should probably be configurable at runtime.
– util. Functions of general utility.
– rendering. Defines the abstract class VisualizationStyle, a class that defines how to load global resources, how to load local resources for a given extent, and how to render a visualization. It is an abstraction over the attach()/render()/detach() contract offered by BaseLayerViewGL2D, and its concrete implementations can be run and tested without a running MapView.
You can even create a static image out of a visualization style, for instance to be used as a thumbnail, by calling the method VisualizationStyle.createImage().
– view. Defines the abstract class VisualizationLayerView2D, a subclass of BaseLayerViewGL2D. It offers a simplified programming model and resource management scheme designed around the concept of visualizations, which are basically graphic objects that cover the current extent. To implement a new custom layer, inherit from VisualizationLayerView2D and override method createVisualizationStyle(). If the custom visualization is animated, set the animate flag to true.

flow

The flow package depends on core and implements the streamline rendering logic. In relation to the architecture of a geographic visualization app, the flow package provides an implementation for each of the 3 required steps:

1. Loading the data;
2. Transforming/processing/preparing the data;
3. Rendering the data.

The flow package contains the following modules.

– types. Type definitions and interfaces used by the flow package.
– settings. Constants that define some of the built-in behaviors of the flow package; in a real-world app these should probably be configurable at runtime.
– sources. Defines different strategies for loading (1) flow data; two strategies are supported at present: ImageryTileLayerFlowSource, which fetches LERC2D datasets from an imagery tile layer, and VectorFieldFlowSource, which supports the analytic definition of a global velocity field in map units.
– processors. Defines the entire data transformation (2) pipeline, starting from flow data, through particle simulation and conversion to streamlines, to generation of the triangle mesh. Class MainFlowProcessor uses the main process, while WorkerFlowProcessor uses the workers framework.
– shared. Particle simulation and mesh generation code that can be invoked both by the main process, when useWebWorkers is false, and by the worker when it is true.
– layer.
Defines the FlowLayer class by inheriting it from esri/layers/Layer. This class overrides method createLayerView() to return an instance of FlowLayerView2D.
– rendering. Defines three classes: FlowGlobalResources, FlowLocalResources and FlowVisualizationStyle. These are concrete implementations of the abstract concepts defined in the modules of the same name in the core package.
– view. Defines the FlowLayerView2D class by inheriting it from VisualizationLayerView2D. This class overrides method createVisualizationStyle() to return an instance of FlowVisualizationStyle.

The codebase contains a couple more interesting things that we have not been able to cover in this blog post due to space constraints. First, FlowLayer defines a way to specify client-side flow data, which can be very useful for education and what-if scenarios where real data is not available. Finally, FlowLayer supports running the particle simulation and mesh generation on workers, to reduce the load on the main thread that could otherwise lead to UI freezes. Workers are enabled by default and are controlled by the useWebWorkers flag.

The main application file

The main application file declares a VectorTileLayer to be used as a basemap, an ImageryTileLayer that will be displayed by the standard vector field renderer, and our brand-new FlowLayer pointed to the same imagery tile layer URL. And… we are done!

It is quite remarkable that a bunch of blue rectangles and less than 60 lines of shading code can look so pretty. The secret is that there is more shading going on behind the scenes; the FlowLayer that we just created is compatible with blend modes and layer effects. A large share of the visual appeal comes from specifying effect: "bloom(1.5, 0.5px, 0.2)" when creating the FlowLayer instance. The image below shows the positive influence of the bloom effect on our custom visualization.
We encourage you to try other effects and blend modes, as well as stacking other predefined operational layers on top of or below FlowLayer.

Conclusion

We hope you enjoyed this deep dive into flow visualization and animation using the ArcGIS JS API and custom WebGL layers. Check out the source repository and try to modify FlowVisualizationStyle to create your own dream layer. On behalf of the ArcGIS JS API team, we thank you for your interest in flow visualization; we think that this drawing style is important enough that it should become a native capability of the ArcGIS JS API. We would love for you to join the discussion on community.esri.com and share your use case, workflow, or requirements with us.

Happy coding!
Michelle Horton
Deciphering Ancient Texts with AI
Using traditional machine learning methods and visual psychophysics, Notre Dame researchers are developing AI models capable of transcribing ancient manuscripts.
https://developer.nvidia.com/blog/deciphering-ancient-texts-with-ai/
2021-08-12T17:57:09Z
Looking to reveal secrets of days past, historical scholars across the globe spend their life's work translating ancient manuscripts. A team at the University of Notre Dame looks to help in this quest, with a newly developed machine learning model for translating and recording handwritten documents centuries old. Using digitized manuscripts from the Abbey Library of Saint Gall, and a machine learning model that takes human perception into account, the study offers a notable improvement in the capabilities of deep learning transcription.

"We're dealing with historical documents written in styles that have long fallen out of fashion, going back many centuries, and in languages like Latin, which are rarely ever used anymore. You can get beautiful photos of these materials, but what we've set out to do is automate transcription in a way that mimics the perception of the page through the eyes of the expert reader and provides a quick, searchable reading of the text," Walter Scheirer, senior author and an associate professor at Notre Dame, said in a press release.

Founded in 719, the Abbey Library of Saint Gall holds one of the oldest and richest library collections in the world. The library houses approximately 160,000 volumes and 2,000 manuscripts, dating back to the eighth century. Handwritten on parchment in languages rarely used today, many of these materials have yet to be read—a potential fortune of historical archives waiting to be unearthed.

Machine learning methods capable of automatically transcribing these types of historical documents have been in the works; however, challenges remain. Up until now, large datasets have been necessary to boost the performance of these language models.
With the vast number of volumes available, the work takes time and relies on a relatively small number of expert scholars for annotation. Missing knowledge, such as the Medieval Latin dictionary that has never been compiled, poses even greater obstacles.

The team combined traditional machine learning methods with the science of visual psychophysics, which studies the relationship between the physical world and human behavior, to create more information-rich annotations. In this case, they incorporated measurements of human vision into the training process of the neural networks when processing the ancient texts.

"It's a strategy not typically used in machine learning. We're labeling the data through these psychophysical measurements, which comes directly from psychological studies of perception—by taking behavioral measurements. We then inform the network of common difficulties in the perception of these characters and can make corrections based on those measurements," Scheirer said.

To train, validate, and test the models, the researchers used a set of digitized handwritten Latin manuscripts from St. Gall dating back to the ninth century. They asked experts to read and enter manual transcriptions from lines of text into custom-designed software. Measuring the time for each transcription gives insight into the difficulty of words, characters, or passages. According to the authors, this data helps reduce errors in the algorithm and provides more realistic readings.

All of the experiments were run using the cuDNN-accelerated PyTorch deep learning framework and GPUs. "We definitely could not have accomplished what we did without NVIDIA hardware and software," said Scheirer.

The research introduces a novel loss formulation for deep learning that incorporates measurements of human vision, which can be applied to different processing pipelines for handwritten document transcription. Credit: Scheirer et al/IEEE

There are still areas the team is working to improve.
Damaged and incomplete documents, along with illustrations and abbreviations, pose a special challenge for the models. "The inflection point AI reached thanks to Internet-scale data and GPU hardware is going to benefit cultural heritage and the humanities just as much as other fields. We're just scratching the surface of what we can do with this project," said Scheirer.

Read the full article in IEEE Transactions on Pattern Analysis and Machine Intelligence.
AnChain.AI Raises $10 Million and Wins SEC Contract to Monitor Crypto and Digital Assets
AnChain.AI, a blockchain security company that specializes in AI-powered platforms, today announced that it raised a $10 Million funding round led by SIG Asia Investments, LLLP, an affiliate of Susquehanna International Group (SIG) in an oversubscribed Series A. Fin VC, Nima Capital, Hard Yaka and Amino Capital also participated in the round.
https://www.globalsecuritymag.com/AnChain-AI-Raises-10-Million-and,20210902,115662.html
2021-09-02T19:28:00Z
AnChain.AI also announced today that it has been awarded a multi-year SEC contract to provide a platform for deep analysis and tracing of smart contracts, to support the SEC's ongoing efforts to monitor risk, improve compliance, and inform commission policy on digital assets and cryptocurrencies.

Founded in 2018 by CEO Dr. Victor Fang and COO Ben Wu, AnChain.AI provides blockchain security and regulatory compliance solutions that secure leading crypto exchanges, protocols, and DeFi worldwide, covering $81 billion in daily transaction volume. It serves clients in over 10 countries across the financial and enterprise industries, blockchain Virtual Asset Service Providers (VASPs), public sector companies, and governments.

Its technology establishes transparency, trust, and legitimacy to allow all stakeholders to interact confidently and securely with the developing digital economy and the next iteration of global technological infrastructure. The AnChain.AI platform proactively protects crypto assets by providing proprietary AI, knowledge graphs, threat intelligence and data analytics on blockchain transactions, and in-depth cryptocurrency transaction monitoring on a wide variety of public and private chains.

With the market exceeding $1 trillion in 2021, and a looming billion-dollar crypto AML problem, demand for revamped regulatory frameworks and technologies has never been more critical.
AnChain.AI's machine learning powered forensic capabilities, which detected the first Blockchain APT (BAPT) hack in history, are now helping international law enforcement and emerging government regulatory efforts with both preventive screening and post-incident investigation.

Speaking on the announcement, Ye Li, Investment Manager at SIG, said, "AnChain.AI has made great progress in developing its market-leading crypto security technology to meet its customers' broad demand in regulatory compliance and transaction intelligence."

The past year has seen 400% revenue growth for AnChain.AI, and its solutions now capture over 98.5% of the cryptocurrency market share, securing billions of dollars in daily transactions across many of the major exchanges by delivering real-time alerts and detecting the precursors of criminal activity well before conventional forms of identification.

Managing General Partner and Founder of Fin VC, Logan Allin, said, "We are at an inflection point, and a crossing-the-chasm moment, as it relates to institutional and government adoption of digital assets. The only way to bridge that gap is through robust solutions like AnChain.AI. We look forward to bringing these capabilities to the financial services and FinTech industries and continuing to safely and securely democratize access to these innovative technologies and asset classes."

AnChain.AI will use the capital to accelerate product development and recruitment across research and development, engineering, customer success, and sales.

AnChain.AI is an AI-powered cybersecurity company enhancing blockchain security, risk, and compliance strategies. AnChain.AI, based in San Jose, California, was founded in 2018 by cybersecurity and enterprise software veterans from FireEye and Mandiant.
Backed by both Silicon Valley and Wall Street VCs, and selected for the Berkeley Blockchain Xcelerator, the company is trusted by 100+ customers from 10+ countries across the following sectors: VASPs, financial institutions, and government, including the SEC (Securities and Exchange Commission). Featured by CBS News, MIT Tech Review, Coindesk and DEFCON, AnChain.AI's AML engine screens over $1 billion in daily crypto transactions.
[email protected] (Emma McGowan)
Q&A With Andrew Gardner | Avast
Andrew Gardner, Ph.D., has collected comic books since he was a kid. Back then, his favorite character was Iron Man because — unlike other superheroes — Iron Man created his special abilities: he designed and built his suit from scratch, and then he used it to explore and protect the world. And that, Gardner says, is the ideal artificial intelligence (AI) embodiment to have.
https://blog.avast.com/qa-with-andrew-gardner-avast
2021-08-26T05:54:02Z
Gardner explains how AI can rebalance the computer security game and teach us about human identity.

Andrew Gardner, Ph.D., has collected comic books since he was a kid. Back then, his favorite character was Iron Man because, unlike other superheroes, Iron Man created his special abilities: he designed and built his suit from scratch, and then he used it to explore and protect the world. And that, Gardner says, is the ideal artificial intelligence (AI) embodiment to have.

Andrew recently joined Avast as its new VP of Research and AI. He's fascinated by cool technology, both fictional and real, and is a leading researcher in the AI and machine learning (ML) communities. At Avast, he hopes to move the industry forward by helping shape our future conception of computer security, moving beyond the traditional idea of protection from file, script and email threats, to systems which protect transactions, interactions, conversations and attention. And he'd also really love a garbage can that emptied itself.

A conversation with Gardner reveals, however, that while the tech is fascinating, it's the people he's really interested in. Keep reading to learn more about what one AI expert is excited about, what keeps him up at night, and where he thinks all of this is ultimately headed.

There's a lot of hype around AI, but it's a poorly understood field. What are three things you wish the general public knew about artificial intelligence?

Well, firstly, there's no universal definition for AI. That is one contributor to hype, because so many things ranging from mundane to fictional get lumped into the AI umbrella and create confusion, and ultimately disappointment. So starting with a good definition helps with hype. I think a good, general definition for AI is an intelligent system or program that's doing things that humans would do. The system has to be able to sense its environment or collect data in some way and process that data. AI then makes decisions based on that data.
The decision-making bit is the real hallmark of AI. For example, a self-driving car is heading to an intersection. It records and processes video and sensor data, computes its velocity and checks fuel levels. This is all amazing, and highly technical, but the AI aspect is bringing it all together into moving from point A to point B. Safely. For that, the car has to make choices. Does it turn left? Right? Stop? Go forward? What if there's a pedestrian? How does it prioritize decisions? The decision-making is really important and historically under-emphasized. Without decisions you are probably talking about machine learning, or something simpler. Decision-making is hard and, frankly, we really don't understand well enough how humans do it to be successful at mimicking them.

This understanding gap mirrors where we see the biggest struggle with AI and society. The AI community is aware of ethical challenges, for example, but not formally set up to tackle these. We're still in very nascent stages of addressing this scientifically. Originally, researchers and developers just focused on functionality, not bigger ethical questions. The impact of AI on society is complex and we need to have lots of stakeholders participating.

Second, I'd love for people to have some perspective on AI. On the one hand, a lot of what people think of as AI isn't what a researcher would consider AI. It's not SkyNet, Terminator stuff. And the media presents it two ways. On the one hand, it's magic and could be the end of the world. On the other hand, it's not magic because my smart toaster still doesn't make my toast right. It can be really hard to determine which is which, and the average person gets confused.

And third, AI is for everyone, not just practitioners. It is going to change our world for decades to come and everyone will interact with it in some form, in a way that is similar to how electrification changed the world over the course of a century.
We're just at the beginning of that change with AI.

What are you most excited about when it comes to the future of AI?

It's all kind of exciting! But the most interesting thing about AI, for me, is that it teaches me about humanity. And that's a really cool thing. AI reproduces how people behave and act, or how we should behave and act. It makes you think a lot about what makes us human. Humans are very, very complex machines. We far eclipse what we're currently dreaming about for AI. I want AI to change my world, but in ways that I can touch and see and feel. There's a real trend right now of merging robotics and AI to create consumer products. How cool would it be if you had a litter box that scoops itself or trash that takes itself out? We're even seeing delivery bots and drones.

It really gets interesting when AI starts interacting with the real world. What does a control system for power and traffic look like when the roads are full of self-driving cars? When will we have robots assembling the next generation of robots? I'm excited for us to move faster towards things that benefit society and help us. In my big vision of the longer term, robotics would help us innovate and invent as a species. I'm thinking of things like powerful AI that could do drug discovery or medical discovery. Things that would augment our human efforts in a synergistic way, instead of today, where we give them specific tasks to complete. I'd like AI to be more of a partner than a very, very junior lab assistant. Iron Man instead of Microsoft Clippy.

What are you most worried about?

Not specific to AI, but to science and technology in general, I'm worried that people don't give enough consideration to "What if?" We can build self-driving cars, but we start by solving technical problems. People raise potential ethical issues and the community will give a nod to that. But I don't think they put enough effort into thinking about outlier events.
For example, imagine there's a self-driving car economy that flips on overnight. What if 10 million jobs are replaced with that flip? What if the cost of self-driving car rides increases the gap between poor and middle class, or across different countries? What if new crimes are enabled or committed with self-driving cars? We don't always think about what the cost of success could be; we just want to win the race and get there as fast as we can.

Then there are the ways that bad guys can exploit AI, which is where Avast sits. Historically, there have been real deterrents to exploiting security gaps at scale. Things like access to technology and knowledge. These acted as a gating mechanism which made the white hat vs. black hat war somewhat balanced. That's all changed now. With AI, bad actors can target and automate to create exploits at scale and at machine speeds. Their ability to search for new vulnerabilities has grown exponentially. If it's been a cat and mouse game in computer security, it's now tilting toward the cat, if the cats are the bad guys. We need AI to help rebalance the game: cat vs. mouse becomes cat vs. robo-dog.

What are your hopes for the future of AI in computer security?

We have to be really disruptive in how we even think about security. We need to think differently: How do we go from a box to a sphere? How do we even change the idea of security? What even is security? Computer security used to mean (and probably to a lot of people still does mean) antivirus on the computer. But these days we use phones, IoT devices, tablets, and so on. Our interactions with other devices and other people are amplified by social media, ecommerce and digital transformation in our daily lives. So computer security now is more about making sense of how we, as humans, interact, where we place trust, where we spend our attention.
I think of the future of security as a guardian angel that sits on our shoulder and protects us from both clear threats and less clear threats across these new interactions, without requiring a lot of explicit direction.

At the same time, if we really do our jobs well, traditional security products are designed to be forgotten: the user doesn't hear from us unless we are alerting them, which is rare. The user experience for security products in this model is atrocious: we basically make a grudge purchase to buy insurance through a security software purchase. Can we change this? We need to change this! We have to be able to interact with the user and engage more meaningfully, consistently and usefully. If I could set a goal for this industry, it would be to revise how people view security products. I want them to be something more like a personal assistant or advisor that users trust and are actually interested in engaging with.

Other than AI and machine learning, what's the topic you can nerd out about for hours?

My favorite thing about AI, and what I dream and aspire to do, is AI for storytelling. It's a really hard problem. You have to study how authors or creators go about setting out a story, how it's organized, even sentence planning. So far, AI doesn't come close to touching what humans can do. But imagine, though, if you could have a quick conversation with an AI that could generate entirely new books, movie or game worlds with compelling and realistic characters and plot development in the style you like. That dream is a way off.

Today there's not really much intelligence in AI, at least not in the general intelligence sense one ascribes to people. Typical systems work like this: you give the AI a prompt like "it was a bright and sunny day," and it starts completing the text, maybe a few sentences, for example. If you don't like the completion, you try again and get a new result.
The remarkable thing about it to lay people is that the generated text will usually have no grammatical or syntactical errors. But that doesn't mean it's sensible. It will generate correct, complete sentences, but they don't really all hang together. Still, there are some fun examples out there. AI Dungeon, for example, is a neat mobile game that uses state-of-the-art AI for an interesting choose-your-own-story approach.

Hollywood is interested in AI, too. I'm a big fan of sci-fi and I enjoyed the television show Stargate SG-1. I learned recently that the cast and producer are doing an experiment where they're having an AI generate a screenplay for an episode and then the cast is going to act it out. My expectations are low, but it should be fun.

Just to circle back to storytelling and AI for a moment: I love this marriage because it really makes you ask, how do people think and how do they reason? How do humans think? How should (or do) AI systems think? Storytelling is so fundamental to our human existence and identity. That's an area where I'd like to see AI really bloom.
Unknown
Computer and Mathematical
null
null
null
null
null
null
news
BS Reporter
SaaS startup NeuroPixel.AI raises seed fund of $825,000 led by IPV
NeuroPixel.AI was founded in late 2020 by Arvind Nair (CEO) and Amritendu Mukherjee (CTO)
https://www.business-standard.com/article/companies/saas-startup-neuropixel-ai-raises-seed-fund-of-825-000-led-by-ipv-121090600522_1.html
https://bsmedia.business…0913826-0562.jpg
2021-09-06T07:37:00Z
Deeptech SaaS startup NeuroPixel.AI has raised $825,000 in a seed round led by Inflection Point Ventures (IPV). Other investors in the round included Entrepreneur First, Huddle, Dexter Angels, and Rishaad Currimjee.The startup will use the funds raised for scaling up the R&D team to accelerate the transition of its product from beta to production, and for expanding its ‘training set’, a crucial piece of the puzzle for every machine learning algorithm.Ankur Mittal, co-founder, Inflection Point Ventures, said, “As e-commerce will expand, so will the need to put up quality and realistic product pictures online. In fashion commerce it is a big part of the buyer’s purchase decision. However, it is not a seamless process and is both time-consuming and expensive, especially for SMEs and social sellers, two segments which are growing exponentially. NeuroPixel is trying to solve this problem by building a product that can transform online fashion storefronts through catalog image-based personalisation and virtual try-ons, helping the average consumer make a far more informed purchase decision.”NeuroPixel.AI was founded in late 2020 by Arvind Nair (CEO) and Amritendu Mukherjee (CTO). The venture originated at Entrepreneur First, a leading international talent investor, which helps aspiring entrepreneurs find co-founders and supports them in building technology companies.NeuroPixel.AI’s first product – an AI powered cataloguing tool – will enable users to shoot any apparel on just a mannequin, and their technology will render the apparel on models of different sizes in different poses. In the near term, they will reduce cataloguing spends by 30%, and reduce process times by 90%, claims the company.“What Arvind and Amritendu are building today is a world-class, innovative, technology-led startup that can change the way consumers shop online. 
I’m excited to see how NeuroPixel.AI evolves and disrupts the online fashion ecosystem to pave the way for more intuitive solutions and much-needed disruption of the online customer experience,” said Esha Tiwary, general manager at EF India. NeuroPixel.AI was also among the six startups selected for investment by the ISB D-Labs incubator, under their seed support programme in collaboration with the Department of Science and Technology. The startup has also been selected into the Huddle accelerator, which will commence from the closure of this round of funding. With global spends on apparel cataloguing estimated to be roughly $7 billion today and growing at 16 per cent CAGR, and the virtual fitting room market valued at approximately $2.5 billion today and growing at 25 per cent CAGR, NeuroPixel.AI is confident of tapping into a large, high-value international market with their technology soon.
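The market figures quoted above compound quickly. As a quick illustration of what those growth rates imply (only the article's numbers are used; the five-year horizon is a hypothetical chosen for illustration):

```python
def project(value_now, cagr, years):
    # Compound annual growth: value * (1 + rate) ** years.
    return value_now * (1 + cagr) ** years

# Figures quoted in the article, in billions of USD.
cataloguing = project(7.0, 0.16, 5)    # apparel cataloguing spend at 16% CAGR
fitting_rooms = project(2.5, 0.25, 5)  # virtual fitting room market at 25% CAGR

print(round(cataloguing, 1))    # → 14.7
print(round(fitting_rooms, 1))  # → 7.6
```

At those rates the cataloguing market roughly doubles and the virtual fitting room market roughly triples in five years, which is the "large, high-value international market" the company is pointing at.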
Personalization/Image Analysis
Unknown
null
null
null
null
null
null
news
PR Newswire
Remark Holdings Announces Fiscal Second Quarter 2021 Financial Results
Remark Holdings, Inc. (NASDAQ: MARK), a diversified global technology company with leading artificial intelligence ("AI") solutions and digital media...
https://finance.yahoo.com/news/remark-holdings-announces-fiscal-second-201500892.html
https://s.yimg.com/uu/api/res/1.2/1jV7ZZ5DeelaaVA94B9EmQ--~B/aD01NDt3PTQwMDthcHBpZD15dGFjaHlvbg--/https://media.zenfs.com/en/prnewswire.com/b608ab19025c31f98cea7d4f278a61f8
2021-08-23T20:15:00Z
Second Quarter 2021 Revenue Increased 75% to $4.0 Million Compared to Second Quarter 2020

LAS VEGAS, Aug. 23, 2021 /PRNewswire/ -- Remark Holdings, Inc. (NASDAQ: MARK), a diversified global technology company with leading artificial intelligence ("AI") solutions and digital media properties, today announced financial results for its fiscal second quarter ended June 30, 2021.

Our second quarter was highlighted by a near doubling of revenue coming from the United States

Management Commentary

"Our second quarter was highlighted by a near doubling of revenue coming from the United States, driven by our AI data intelligence platform," noted Kai-Shing Tao, Chairman and Chief Executive Officer of Remark Holdings. "Momentum from our Chinese operations continued despite periodic regional lockdowns associated with COVID-19 and a slowdown in business activities due to the 100th Anniversary of the CCP, growing quarterly revenue in 2021 by more than 40% compared with the same period of last year. In the first six months of 2021, we have almost achieved our full-year 2020 revenue, and we expect additional significant growth in the second half of the year."

Second Quarter 2021 Business Highlights

During the second quarter, Remark continued to build its data intelligence business using its AI Data Intelligence Platform. Based on initial success, the company is looking forward to the start of the fall sports season and to additional growth opportunities with other online sports gaming and iGaming businesses.

China Mobile continues to implement Remark's KanKan AI Platform and Smart Queueing System throughout their retail locations. Additionally, Remark is developing an Artificial Intelligence of Things project to intelligently manage in-store ambient environmental equipment.

Remark is also preparing to bid on the second phase of China Mobile's Smart Community business.
The company would provide its AI solution to enforce COVID-19 protection rules for communities: enforcing health codes, conducting real-time temperature checks, ensuring mask wearing, allowing access only to residents or authorized persons, controlling vehicle access, and helping to protect the elderly and children.

Remark's Digital Marketing Platform ("DMP") was deployed with Bank of China at their Guangzhou branch and China Construction Bank's Yunnan branch during the second quarter, providing additional large opportunities across multiple banks and other retailers. The design phase for Lotus Supermarket's DMP in the Changhe shopping center, Xi'an City, has been completed and is expected to be deployed later this year.

During the second quarter, Smart Campus solutions were deployed across more than three dozen campuses, bringing total installations to more than 300 campuses. Sales efforts and new partnerships are targeting continued expansion of the Smart Campus solution to several new provinces.

Fiscal Second Quarter 2021 Financial Results

Revenue for the fiscal second quarter of 2021 totaled $4.0 million, up from $2.3 million during the fiscal second quarter of 2020.

Gross profit improved to $1.8 million in the second quarter of 2021 from $1.1 million in the second quarter of 2020, commensurate with increased revenue. The overall gross profit margin for the second quarter of 2021 was 43.9%.

The company incurred an operating loss of $2.5 million in the second quarter of 2021 compared to an operating loss of $2.8 million in the comparable quarter of 2020.
An increase in general and administrative expense of $0.6 million, when netted against $0.3 million of decreases in other operating expense categories, partially offset the improved gross profit and was the primary reason for the operating loss.

Net loss totaled $1.6 million, or $0.02 per diluted share, in the second quarter ended June 30, 2021, compared to a net loss of $9.8 million, or $0.11 per diluted share, in the second quarter ended June 30, 2020. The decrease in the company's stock price between December 31, 2020 and June 30, 2021 led to a $1.3 million gain on the change in liability associated with certain outstanding warrants. In the second quarter of 2020, the company recorded a $6.3 million loss on the change in the fair value of warrant liability due to stock price changes during that period of the prior year.

At June 30, 2021, the cash balance totaled $0.1 million, compared to a cash balance of $0.9 million at December 31, 2020. Proceeds of $4.8 million from a short-term debt issuance and $0.8 million from stock option exercises were offset by $6.3 million of cash used in operations.

"Finally, subsequent to June 30, 2021, Sharecare, Inc. completed its merger with Falcon Acquisition, providing us with initial liquidity of $2.3 million plus approximately 9.4 million shares of Sharecare, Inc. We anticipate that monetizing our position will fund our balance sheet while simultaneously supporting working capital needs to meet our growth goals and new initiatives," concluded Mr. Tao. Sharecare, Inc. trades on The Nasdaq Stock Market (SHCR - $7.40).

Conference Call Information

Management will hold a conference call this afternoon at 4:30 p.m. Eastern Time (1:30 p.m. Pacific Time) to discuss the Company's financial results and provide an update on recent business developments.
A question and answer session will follow management's presentation. The live conference may be accessed via telephone or online webcast.

Toll-Free Number: 888.394.8218
International Number: 323.701.0225
Conference ID: 3005370
Online Webcast: http://public.viavid.com/index.php?id=146197

Participants are advised to log in for the live webcast 10 minutes prior to the scheduled start time. A replay of the call will be available after 7:30 p.m. Eastern time on the same day through August 28, 2021.

Toll-Free Replay Number: 844.512.2921
International Replay Number: 412.317.6671
Replay ID: 3005370

Remark Holdings, Inc. (PRNewsFoto/Remark Media, Inc.)

About Remark Holdings, Inc.

Remark Holdings, Inc. (NASDAQ: MARK) delivers an integrated suite of AI solutions that enable businesses and organizations to solve problems, reduce risk and deliver positive outcomes. The company's easy-to-install AI products are being rolled out in a wide range of applications within the retail, public safety and workplace arenas. The company also owns and operates an e-commerce digital media property focused on a luxury beach lifestyle. The company is headquartered in Las Vegas, Nevada, with additional operations in Los Angeles, California and in Beijing, Shanghai, Chengdu and Hangzhou, China. For more information, please visit the company's website at http://www.remarkholdings.com/.

Forward-Looking Statements

This press release may contain forward-looking statements, including information relating to future events, future financial performance, strategies, expectations, competitive environment and regulation. Words such as "may," "should," "could," "would," "predicts," "potential," "continue," "expects," "anticipates," "future," "intends," "plans," "believes," "estimates," and similar expressions, as well as statements in future tense, identify forward-looking statements. These statements involve known and unknown risks, uncertainties and other factors, including those discussed in Part I, Item 1A.
Risk Factors in Remark Holdings' Annual Report on Form 10-K and Remark Holdings' other filings with the SEC. Any forward-looking statements reflect Remark Holdings' current views with respect to future events, are based on assumptions and are subject to risks and uncertainties. Given such uncertainties, you should not place undue reliance on any forward-looking statements, which represent Remark Holdings' estimates and assumptions only as of the date hereof. Except as required by law, Remark Holdings undertakes no obligation to update or revise publicly any forward-looking statements after the date hereof, whether as a result of new information, future events or otherwise.

Company Contacts

E. Brian Harvey
Senior Vice President of Capital Markets and Investor Relations
Remark Holdings, [email protected] Tian
Vice President of Investor [email protected]
(+1) 626.623.2000
(+86) 13702108000

REMARK HOLDINGS, INC. AND SUBSIDIARIES
Consolidated Balance Sheets
(dollars in thousands, except share and per share amounts)
(Unaudited. Columns: June 30, 2021 | December 31, 2020)

Assets
Cash (includes VIE $60 and $278, respectively): $122 | $854
Trade accounts receivable, net (includes VIE $7,788 and $4,850, respectively): 8,045 | 5,027
Inventory, net (includes VIE $58 and $112, respectively): 1,925 | 874
Prepaid expense and other current assets (includes VIE $819 and $248, respectively): 1,436 | 2,043
Total current assets: 11,528 | 8,798
Property and equipment, net (includes VIE $ and $43, respectively): 264 | 321
Operating lease assets (includes VIE $173 and $281, respectively): 330 | 492
Investment in unconsolidated affiliate: 1,030 | 1,030
Other long-term assets (includes VIE $29 and $68, respectively): 581 | 670
Total assets: $13,733 | $11,311

Liabilities and Stockholders' Deficit
Accounts payable (includes VIE $5,631 and $3,655, respectively): $11,112 | $8,589
Accrued expense and other current liabilities (includes VIE $3,386 and $3,782, respectively): 7,539 | 6,660
Contract liability (includes VIE $187 and $147, respectively): 590 | 310
Notes payable, net of unamortized discount and debt issuance cost: 6,167 | 1,500
Total current liabilities: 25,408 | 17,059
Loans payable: 1,425 | 1,425
Operating lease liabilities, long-term (includes VIE $26 and $79, respectively): 98 | 194
Warrant liability: 2,013 | 1,725
Total liabilities: 28,944 | 20,403
Commitments and contingencies
Preferred stock, $0.001 par value; 1,000,000 shares authorized; zero issued
Common stock, $0.001 par value; 100,000,000 shares authorized; 99,918,941 and 99,505,041 shares issued and outstanding at June 30, 2021 and December 31, 2020, respectively: 100 | 100
Additional paid-in-capital: 352,394 | 351,546
Accumulated other comprehensive income: (171) | (226)
Accumulated deficit: (367,534) | (360,512)
Total stockholders' deficit: (15,211) | (9,092)
Total liabilities and stockholders' deficit: $13,733 | $11,311

REMARK HOLDINGS, INC. AND SUBSIDIARIES
Consolidated Statements of Operations and Comprehensive Loss
(dollars in thousands, except per share amounts)
(Columns: Three Months Ended June 30, 2021 | Three Months Ended June 30, 2020 | Six Months Ended June 30, 2021 | Six Months Ended June 30, 2020)

Revenue: $4,016 | $2,299 | $8,422 | $2,730
Cost and expense
Cost of revenue (excluding depreciation and amortization): 2,252 | 1,210 | 5,004 | 1,231
Sales and marketing: 398 | 486 | 1,399 | 902
Technology and development: 1,305 | 1,477 | 2,855 | 2,125
General and administrative: 2,482 | 1,898 | 5,179 | 4,638
Depreciation and amortization: 49 | 66 | 115 | 156
Total cost and expense: 6,486 | 5,137 | 14,552 | 9,052
Operating loss: (2,470) | (2,838) | (6,130) | (6,322)
Other income (expense)
Interest expense: (380) | (775) | (615) | (1,236)
Other income, net: 6 | 57 | 7 | 57
Change in fair value of warrant liability: 1,322 | (6,260) | (288) | (6,203)
Gain on lease termination: – | – | – | 1,538
Other income (loss), net: (30) | – | 13 | (73)
Total other income (expense), net: 918 | (6,978) | (883) | (5,917)
Loss from operations: $(1,552) | $(9,816) | $(7,013) | $(12,239)
Provision for income taxes: (9) | – | (9) | –
Net loss: $(1,561) | $(9,816) | $(7,022) | $(12,239)
Other comprehensive loss
Foreign currency translation adjustments: 13 | 156 | 55 | 338
Comprehensive loss: $(1,548) | $(9,660) | $(6,967) | $(11,901)
Weighted-average shares outstanding, basic and diluted: 99,917 | 89,264 | 99,838 | 71,527
Net loss per share, basic and diluted: $(0.02) | $(0.11) | $(0.07) | $(0.17)

View
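As a quick sanity check on the per-share figures in the statements above, basic and diluted net loss per share is simply net loss divided by weighted-average shares outstanding (both reported in thousands):

```python
# Figures from the consolidated statements above (in thousands).
net_loss = {"Q2 2021": -1_561, "Q2 2020": -9_816}
shares   = {"Q2 2021": 99_917, "Q2 2020": 89_264}

def eps(period):
    # Net loss per share, basic and diluted.
    return round(net_loss[period] / shares[period], 2)

print(eps("Q2 2021"))  # → -0.02
print(eps("Q2 2020"))  # → -0.11
```

Both values match the $(0.02) and $(0.11) per diluted share reported in the release.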
original content to download multimedia:https://www.prnewswire.com/news-releases/remark-holdings-announces-fiscal-second-quarter-2021-financial-results-301360922.htmlSOURCE Remark Holdings, Inc.
Decision Making/Process Automation
Management/Business and Financial Operations
null
null
null
null
null
null
news
Pasha Finkelshteyn
Data Engineering Annotated Monthly – August 2021
August is usually a quiet month, with vacations taking their toll. But data engineering never stops. I’m Pasha Finkelshteyn and I will be your guide through this month’s news, my impressions of the developments, and ideas from the wider community. If you think I missed something worthwhile, ping me on Twitter and suggest a topic, […]
https://blog.jetbrains.com/big-data-tools/2021/09/06/data-engineering-annotated-monthly-august-2021/
https://blog.jetbrains.c…x800-cropped.png
2021-09-06T12:29:26Z
August is usually a quiet month, with vacations taking their toll. But data engineering never stops. I'm Pasha Finkelshteyn and I will be your guide through this month's news, my impressions of the developments, and ideas from the wider community. If you think I missed something worthwhile, ping me on Twitter and suggest a topic, link, or anything else.

News

A lot of engineering is about learning new things and keeping a finger on the pulse of new technologies. Here's what's happening in data engineering right now.

Fairlens 0.1.0
Ethical ML is huge right now. But it is incredibly hard to determine manually whether a dataset is ethical, unbiased, and not skewed. Given this is a hot topic and there's a boatload of money in it, you would expect there to be a wealth of tools to verify data ethics, but you'd be wrong. At least until Fairlens came on the scene. It hasn't had its first major release yet, but the promise is that it will un-bias your data for you! How cool is that?

Kafka 3.0.0-rc0
If you like to try new releases of popular products, the time has come to test Kafka 3 on your staging environment and report any issues you find! Support for Scala 2.12 and Java 8 still exists but is deprecated. There are also several changes in KRaft (namely "Revise KRaft Metadata Records" and "Producer ID generation in KRaft mode"), along with many other changes. Unfortunately, the feature that was most awaited (at least by me), tiered storage, has been postponed to a subsequent release.

ClickHouse v21.8
This release of ClickHouse is massive. For fans of open-source instruments, the most interesting change is support for the MaterializedPostgreSQL table engine, which lets you copy a whole Postgres table/database to ClickHouse with ease.

MLflow 1.12.0
This minor release of a popular ML Ops framework allows you to store and serve ML models.
One of the changes that looks exciting to me is "Add pip_requirements and extra_pip_requirements to mlflow.*.log_model and mlflow.*.save_model for directly specifying the pip requirements of the model to log / save."

Apache Pinot 0.8.0
Apache Pinot is a real-time distributed OLAP datastore, designed to answer OLAP queries with low latency. In some sense, it competes with ClickHouse, as both target the same workflow. There are multiple differences, of course; for example, Pinot is intended to work in big clusters. There are a couple of comparisons on the internet, like this one, but it's worth mentioning that they are quite old and both systems have changed a lot, so if you're aware of more recent comparisons, please let me know! One of the interesting changes here is support for Bloom filters for IN predicates.

LakeFS 0.48.0
We described LakeFS in the July issue of our Annotated. Now it has added support for having multiple AWS regions for underlying buckets. While this may be more expensive in terms of both money and performance, it still sounds like a nice disaster recovery option. Even if a meteorite hits your data center, your big data is still going to be safe!

Future improvements

Data engineering technologies are evolving every day. This section is about what's in the works for technologies that you may want to keep on your radar.

Cache for ORC metadata in Spark
ORC is one of the most popular binary formats for data storage, featuring awesome compression and encoding capabilities. But what if we need to query the same dataset multiple times? Reading file metadata is costly because it is an IO operation, which is slow. And more files means more time. With caching, though, execution times may be decreased dramatically (on some workloads).

Custom netty HTTP request inbound/outbound handlers in Flink
Sometimes we need to perform HTTP requests while processing with Flink.
But sometimes we need to do more than just make an HTTP request: sometimes we need to customize it, for example, by adding authentication or custom headers, which may be especially helpful in strict corporate environments. It looks like this will be available soon in Flink!

Cassandra Paxos Improvements
Cassandra's Paxos implementation is known to be good, but not perfect. For example, Lightweight Transactions (LWT) are known to suffer from poor performance. Don't take it from me: this comes from Cassandra developers themselves. So, they've decided to improve this in the foreseeable future, and the work is already underway, which I think is awesome.

Articles

This section is about inspiration. We'll try to list some great articles and posts that can help us all learn from the experience of other people, teams, and companies dealing with data engineering.

Change Data Capture at DeviantArt
I think we all know what Debezium is. But while it is a tool for streaming data from DBs to Kafka, it cannot cover all CDC needs or scenarios. In this article, the folks from DeviantArt describe the whole architecture of their CDC solution, with concrete recipes and tips.

How Uber Achieves Operational Excellence in the Data Quality Experience
Uber is known for having a huge Hadoop installation in Kubernetes. This blog post is more about data quality, though, describing how they built their data quality platform. Who would have thought that building a data quality platform could be this challenging and exciting? 100% test coverage sounds amazing, too, so good job!

Apache Hudi - The Data Lake Platform
Quasi-mutable data storage formats are not only trending, but also mysterious. How do they really work under the hood? At what cost do we get this mutability? In this detailed post, Hudi developers meticulously describe how Apache Hudi works and why it's good for streaming.

Hive Metastore - It didn't age well
The folks from LakeFS continue to delight us with interesting articles about data engineering.
This time they describe what is wrong with the popular Hive Metastore and explain how it works in detail.

Tools

sqlglot
I often found myself digging the web for specific SQL dialect details. Should I backtick the identifiers here? Should I use double quotes or single ones? And don't get me started on formatting. Sometimes I just didn't want to launch my favorite DataGrip to format a single SQL statement. Then I discovered sqlglot, a tool that can transpile my syntax from one dialect to another in an instant. That's one less headache for me!

Conferences

SmartData 2021
This international conference on data engineering is organized by a Russian company, but it aims to have at least 30% of the talks in English. Most of the topics, from data quality to DWH architecture, are hot! Speakers from Databricks, Microsoft, Netflix, and other huge companies are going!

That wraps up August's Annotated. Follow JetBrains Big Data Tools on Twitter and subscribe to our blog for more news! You can always reach me, Pasha Finkelshteyn, at [email protected] or send a DM to my personal Twitter, or you can get in touch with our team at [email protected]. We'd love to know about any other interesting data engineering articles you come across!
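The Pinot item above mentions Bloom filter support for IN predicates. The underlying idea, a compact bit array that can answer "definitely not present" and let a query skip whole segments, fits in a few lines of Python (an illustrative toy, not Pinot's actual implementation; the size and hash scheme are arbitrary choices):

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter: no false negatives, small false-positive rate."""

    def __init__(self, size=1024, hashes=3):
        self.size = size              # number of bits
        self.hashes = hashes          # number of hash functions
        self.bits = bytearray(size)

    def _positions(self, item):
        # Derive `hashes` bit positions from salted SHA-256 digests.
        for salt in range(self.hashes):
            digest = hashlib.sha256(f"{salt}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def might_contain(self, item):
        # False means "definitely absent" -- e.g. a data segment can be
        # skipped when evaluating an IN predicate. True means "probably here".
        return all(self.bits[pos] for pos in self._positions(item))

bf = BloomFilter()
for country in ["us", "uk", "de"]:
    bf.add(country)

print(bf.might_contain("us"))  # → True (added items are always found)
print(bf.might_contain("zz"))  # almost certainly False; tiny false-positive chance
```

The asymmetry is the point: a negative answer is exact, so a store like Pinot can prune data without touching it, while a rare positive just falls back to reading the actual values.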
Content Synthesis/Discovery
Computer and Mathematical
null
null
null
null
null
null
news
quantlet added to PyPI
QuantLET - an event driven framework for large scale real-time analytics
https://pypi.org/project/quantlet/
https://pypi.org/static/…ter.6fecba6f.jpg
2021-08-30T03:25:53Z
QuantLET - an event driven framework for large scale real-time analytics.

Copyright (C) 2006 Jorge M. Faleiro Jr.

QuantLET is an open source, event-driven framework for rapid development and deployment of real-time analytical models intended to execute at large scale, in terms of data intensiveness or computing power (your spreadsheet can't do that).

You can see a few examples of the framework outlining the use of signals in a moving average cross-over strategy or how to define and use 'infinite spreadsheets'. There is also a large number of examples produced during my doctorate research and sprinkled across many articles. The Black Magic paper describes an end-to-end investigation of the use of data to detect profit opportunities in equities using price momentum. The financial language SIGMA, also part of the same research, borrowed some ideas from QuantLET, and vice-versa.

The nature of any quantitative framework requires a number of quite heavy auxiliary libraries and resources. QuantLET is no exception. You can pick and choose specific extensions (as Python extras) based on what you intend to do with the framework.

Development

If you intend to try out the source code, please make yourself aware of the license. The use of containers and cloud services is recommended. At the time of this writing I used VSCode and Remote Containers. You will also need poetry and pre-commit.

git clone [email protected]:jfaleiro/quantlet.git
cd quantlet
poetry install

All code check and quality procedures are done as part of pre-commit. These checks are mandatory and are a condition for automatic build and release.

poetry shell
pre-commit install

Git pre-commit hooks are installed, and from this point on all checks are done locally as a condition for a git commit to succeed. CI-CD is done by GitLab. You can find the spec for each component in the source tree.

Use

Typical setuptools use through pip. You can use the bare-bones version:

pip install quantlet

Or any of the extensions (extras).
If you need one single extension, say strats:

pip install quantlet[strats]

If you want multiple extensions, like reactives and deep learning for example, you add each extension separated by a comma:

pip install quantlet[reactives,dl]

You don't want to use the wildcard quantlet[*] and install all extras. Python is not really an environment geared toward large scale software development, and this would bring in all dependencies, across all extensions. In pip and poetry, for example, this might lead to a few hours of dependency resolution alone. There are way more uses and features in QuantLET than we would like to admit, and than you could possibly need for one application, so be parsimonious.

Each extension is defined in a project named quantlet-[extension]. Dependencies on QuantLET's pyproject.toml are defined like this:

"quantlet.reactives" = {git = "https://gitlab.com/jfaleiro/quantlet-reactives.git", rev = "release/0.0.1", develop = true, optional = true}

This type of dependency is resolved through git. In each case you might need read access to the specific GitLab repository. Feel free to investigate and get in touch if you need access or details.

quantlet-streams

QuantLET elements of stream processing (filtering, grouping, selection, functional operations) on canonical and data frame formats.

[1, 3, 4, 7, 8] >> apply(lambda x: dict(x=x)) == [{'x': 1}, {'x': 3}, {'x': 4}, {'x': 7}, {'x': 8}]

This is the streaming facet defined as part of the financial language SIGMA.

quantlet-reactives

Fast and simple framework for reactive programming.
A declarative paradigm that allows the definition of what has to be done through reactive relationships, letting the computational representation automatically take care of when to do it and which results are produced, similar to cells in an electronic spreadsheet representing values and a formula.

v = [R(i) for _ in range(10000)]
c = sum(*v)
for i in v:
    i.v = normal()
print(c.v)
>> 0.0035

This is the reactives facet defined as part of the financial language SIGMA.

quantlet-big-reactives

Support for reactive use cases that must rely on very large data: infinite reactive graphs (infinite spreadsheets) associated to non-structured repositories. Reactives are organized in distributed nodes, allowing for automatic persistence and in-memory allocation beyond the limits of one single computer.

quantlet-timeseries

Fast timeseries functions and transformations. Large store and retrievals of sequential datasets in fastparquet through tsstore.

quantlet-agents

Synchronous and asynchronous agents for discrete-event simulation. This is related to the distribution and simulation facets defined as part of the financial language SIGMA.

quantlet-strats

Financial strategies and analytics. Elements of numeric processing, data analysis, plotting and tabular transformations.
Basically, strats are classified in bands, filters, financial engineering, seeding, and stats.

Bands

Define higher and lower limits around an ongoing signal, e.g., for Bollinger and fixed bands:

# Bollinger bands
a = (simple_dataframe
     >> std(price_tag='price')
     >> bollinger(ma_tag='price'))
assert round(a.upper.mean(), 2) == 1.94
assert round(a.lower.mean(), 2) == -2.02

# Fixed bands
a = (simple_dataframe >> fixed(ma_tag='price'))
assert round(a.upper.mean(), 2) == -0.05
assert round(a.lower.mean(), 2) == -0.03

Filters

Derive a new sequence based on an original signal, e.g.:

# RMA, recursive moving average
assert list(map(lambda x: dict(y=x), [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]) >> rma(m=3)) == [
    {'y': 1.0, 'rma': 1.0}, {'y': 2.0, 'rma': 1.5}, {'y': 3.0, 'rma': 2.0},
    {'y': 4.0, 'rma': 3.0}, {'y': 5.0, 'rma': 4.0}, {'y': 6.0, 'rma': 5.0}]

# EWMA, exponentially weighted moving average
assert list(list(map(lambda x: dict(y=x), [1.0, 2.0, 3.0, 4.0, 5.0, 6.0])) >> ewma(input_tag='y')) == [
    {'y': 1.0, 'ewma': 1.0}, {'y': 2.0, 'ewma': 1.1}, {'y': 3.0, 'ewma': 1.29},
    {'y': 4.0, 'ewma': 1.561}, {'y': 5.0, 'ewma': 1.9049}, {'y': 6.0, 'ewma': 2.31441}]

Financial engineering

Common financial calculation QLets.

Returns and cash flow streams: absolute, single and multiple periods. Continuous and discrete compounding.
Options: binomial lattice, single and multiple period binomial reactive option pricing. Black-Scholes model. Put-call parity pricing. Greeks.
Hedging: delta hedging. Stop price hedging.

Seeding

Generators of financial sequences.

Timeseries seeding: random walk and brownian motions. Random uniform seeding.

Stats

Statistical transformations.

Uniform distribution
Autocorrelation metrics
Inflection points

quantlet-ml

Operations related to machine learning transformations: feature engineering, interpolations, incremental and batch learning.
This article is an example of [nowcasting](https://en.wikipedia.org/wiki/Nowcasting_(economics)) of trading signals by a robot trader using incremental learning in quantlet-ml:

(retrieve('XXXX', start='2013-01-01', end='2017-12-31')[['Adj.Close', 'Adj.Volume']]
 >> apply(adjust_columns)
 >> scale(['adj_price', 'adj_volume'], scalers=[price_scaler, volume_scaler])
 >> one_hot(["dow", "dom", "month"])
 >> window_shift(['adj_price', 'adj_volume'], 5, separator='-')
 >> online_fit_predict(model, 'predicted_adj_price', error_type='squared',
                       response_variable_tag='adj_price', ignore_tags=['Date'])
 >> ewma('error', alpha=.2, output_tag='ewme')
 >> unscale(['adj_price', 'predicted_adj_price', 'adj_price-1', 'adj_price-2',
             'adj_price-3', 'adj_price-4', 'adj_price-5'],
            scalers=[price_scaler]*7, index_column='Date'))

It uses QLets for basic operations of window shifting, scaling, one-hot encoding, and online fit-and-predict in one step for streams.

quantlet-dl

Extension of quantlet-ml to support deep-learning libraries and algorithms. Currently Keras and TensorFlow.

quantlet-scratchpad

Support for interactive use and visualization of resources in Jupyter notebooks.

Final Notes

QuantLET is an open source project that I put together and have been using for a very long time to test ideas, hold discussions with fellow practitioners, and extend my doctorate research in scientific crowds and the theory of enablers. The doctorate thesis was finished many years ago, in 2018, and is available online if you are curious and want to learn more about the subject.

Bear in mind that the materialization of QuantLET was a result of volunteering my time in one of my many passions: investigations in technology, engineering, humans, and the incentives that make humans do what they do.
Nevertheless, unless I feel a compelling reason for a change, QuantLET is basically unsupported.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for more details. The license file is also shipped as part of the source code.

Last, but not least, it is important to note that QuantLET was the entry point to a number of successful commercial frameworks, such as Platform and Hydra. If you have an idea on how to leverage these frameworks, or to extend QuantLET with the power of large-scale computing, AI, and crowds, feel free to get in touch.
Content Creation/Process Automation
Business and Financial Operations/Computer and Mathematical
null
null
null
null
null
null
news
eWEEK EDITORS
Four Key Steps to Simplify Enterprise AI Success
IT plays a critical role in setting up companies for success in artificial intelligence. Learning from early adopters’ best practices can help enterprises sidestep common pitfalls when starting new AI projects. A few predictable issues are often at play when new AI initiatives stall out. Some of the most common challenges are hitting snags that […]The post Four Key Steps to Simplify Enterprise AI Success appeared first on eWEEK.
https://www.eweek.com/big-data-and-analytics/four-key-steps-to-simplify-enterprise-ai-success/
https://www.eweek.com/wp…oding-scaled.jpg
2021-08-24T15:11:45Z
IT plays a critical role in setting up companies for success in artificial intelligence. Learning from early adopters' best practices can help enterprises sidestep common pitfalls when starting new AI projects.

A few predictable issues are often at play when new AI initiatives stall out. Some of the most common challenges are hitting snags that delay projects from getting started, not having the right AI infrastructure and tools, workflow bottlenecks that stifle data scientist productivity, and failing to control costs.

Companies seeing the most value from AI have implemented a number of best practices across systems, software, and trusted advisors. These lessons can speed AI deployments across a broad range of use cases, such as computer vision to enhance safety and manufacturing uptime with predictive maintenance, recommender systems to help grow sales, and conversational AI services to boost customer satisfaction.

Here are four things innovators are doing to succeed and boost the bottom-line impact of AI.

1) Don't reinvent the wheel: Use proven tools to save developer cycles and kickstart projects

AI model prototyping, development, and testing can be very time- and resource-intensive. Starting from scratch when building a new model can add months to the project timeline. Leveraging proven tools can enhance productivity while speeding ROI.

Ready-made AI software, including pretrained models and scripts for popular use cases such as speech recognition, computer vision, and recommender systems, reduces the amount of software engineering required, so projects can be ready for production faster.

Additionally, purpose-built AI infrastructure ensures that IT can offer the resources needed for AI development, supporting the unique demands of AI workloads.
Unlike legacy infrastructure, purpose-built AI infrastructure achieves the optimal design balance of compute, networking, and storage to speed AI model training, while ensuring data scientists don't waste valuable cycles moonlighting as systems integrators, software engineers, and tech support.

2) Tap proven expertise and platforms that can grow with you

AI systems are built by training increasingly complex models on large datasets that tend to grow exponentially. This means that enterprise AI requires powerful infrastructure to deliver the fastest model training and real-time inference once AI is running in production applications. To ensure AI-infused businesses can grow, IT needs a path to scale – and expert assistance along the way.

While AI development requires advanced computing infrastructure, not every organization has access to an AI-ready data center or the facilities to support scaled infrastructure. There are now many options that help enterprises test projects before making a big commitment, as well as partners who can offer permanent infrastructure hosting to power your enterprise.

Colocation providers who are certified in running AI infrastructure are ideal for those who don't have an AI-ready data center of their own. Some even provide infrastructure on a rental basis to help companies experience high-performance AI development infrastructure before making a big investment.

Expertise is also essential, especially as questions arise related to use cases, models, frameworks, libraries, and more. Having direct access to experts in full-stack AI can ensure the fastest path to getting answers that keep your project moving forward.

Qualified AI companies and solution delivery partners can help enterprises right-size their system requirements to help them get started.
Look for vendors who work with other trusted technology providers to make sure your needs will be met across the entire spectrum of high-performance computing, networking, and storage.

3) Own the base, rent the spike to avoid blowing the budget

Given that AI is powered by data, it's critical to consider where that data is stored when developing your platform and infrastructure strategy. Not only is having large amounts of data the fuel for AI model development, the process of model training and retraining never truly ends, since production applications can drift and lose accuracy over time. Therefore, IT teams need to consider the data pipeline and the amount of time and effort that is continually spent moving large datasets from where they're created to where compute resources reside.

Data gravity (data's ability to attract additional applications and services) comes into play here. As models become more complex and data scientists iterate more on their models, enterprises hit an inflection point where moving data around starts to significantly drive up costs. This is especially true if the organization is cloud-first or cloud-only in its approach. Organizations can keep costs in check by training where their data lives to achieve the lowest cost per training run.

When the need arises, such as when the development cycle moves from productive experimentation into scaled, ongoing training, a hybrid model that can straddle both cloud and on-premises resources may make sense.
In hybrid architectures, an organization will size its own on-prem infrastructure according to the steady-state demand from the business, and additionally procure cloud resources to support temporary demands that exceed that capacity.

This "own the base, rent the spike" approach offers the best of both worlds: lowest fixed-cost infrastructure for day-to-day demands, and on-demand scalability in the cloud for temporary or seasonal surges.

4) Build an AI center of excellence, and make AI a team sport

AI is a rapidly growing field, but it can still be tough to source professionals who already have deep domain expertise. In fact, a recent Deloitte study found that 68 percent of surveyed executives described their organization's skills gap as moderate to extreme, with 27 percent rating it as major or extreme.

The reality is, the experts who can build your best AI applications are already working for you. They're inside your business units, and they know your problems and data better than anyone. Many of them want to evolve into data scientists, but need mentoring and an environment where they can learn valuable skills while shadowing other experts in your organization.

Establishing an AI center of excellence creates an environment in which your organization can consolidate people, processes, and platforms, enabling you to groom and scale data science expertise from within, saving a lot of money compared to bringing in new hires.

Organizations that have successfully adopted AI are distinguished by their ability to de-risk their AI projects with the right partners, tools, software, and AI infrastructure from the start. With this solid foundation in place, companies can make their data scientists and AI developers productive immediately, enabling them to innovate without worrying about costs or resource availability.

Adopting these four best practices will help IT lead their companies to uncover insights faster and speed the success of their AI initiatives.

About the Author: Tony Paikeday, senior director of AI Systems at NVIDIA
Unknown
Computer and Mathematical/Business and Financial Operations
null
null
null
null
null
null
news
Kyle Wiggers
Foundation models risk exacerbating ML’s ethical challenges
Foundation models, or models capable of generating a range of media, pose risks, according to a new report from Stanford.
https://venturebeat.com/2021/08/18/foundation-models-risk-exacerbating-mls-ethical-challenges/
https://venturebeat.com/…w=1200&strip=all
2021-08-18T15:00:07Z
Machine learning is undergoing a paradigm shift with the rise of models trained at massive scale, including Google's BERT, OpenAI's DALL-E, and AI21 Labs' Jurassic-1 Jumbo. Their capabilities and dramatic performance improvements are leading to a new status quo: a single model trained on raw datasets that can be adapted for a wide range of applications. Indeed, OpenAI is reportedly developing a multimodal system trained on images, text, and other data using massive computational resources, which the company's leadership believes is the most promising path toward AGI, AI that can learn any task a human can.

But while the emergence of these foundational models presents opportunities, it also poses risks, according to a new study released by the Stanford Human-Centered Artificial Intelligence's (HAI) Center for Research on Foundation Models (CRFM). CRFM, a new initiative made up of an interdisciplinary team of roughly 160 students, faculty, and researchers, today published a deep dive into the legal ramifications, environmental and economic impact, and ethical issues surrounding foundation models. The report, whose coauthors include HAI codirector and former Google Cloud AI chief Fei-Fei Li, examines existing challenges built into foundation models, the need for interdisciplinary collaboration, and why the industry should feel a grave sense of urgency.

"Foundation models are an emerging paradigm for building AI systems that lead to an unprecedented level of homogenization: a single model serving as the basis for a wide range of downstream applications," Percy Liang, Stanford HAI faculty and computer science professor, told VentureBeat via email.
"This homogenization generates enormous leverage for many new applications, but they also pose clear risks such as the exacerbation of historical inequities and centralization of power."

Foundation models

CRFM's report defines foundation models as models adaptable to applications that are trained in a task-agnostic way on raw data. Theoretically, foundation models can process different modalities (e.g., language and vision) to affect the physical world and perform reasoning, or even interact with humans.

The word "foundation" specifies the role these models play: "A foundation model is itself unfinished but serves as the common basis from which many task-specific models are built via adaptation," the report reads. "We also chose the term 'foundation' deliberately to communicate the gravity of these models: Poorly constructed foundations are a recipe for disaster, and well-executed foundations are reliable bedrock for future applications."

From a technical point of view, foundation models aren't new. They're based on deep neural networks and self-supervised learning, both of which have existed for decades. Semi-supervised learning accepts data that's partially labeled or where the majority of the data lacks labels. An algorithm determines the correlations between data points, using a small amount of labeled data to mark points and train based on the newly applied labels.

The sheer scope of foundation models over the last few years stretches the boundaries of what's possible, however. For example, OpenAI's GPT-3 can do a passable, and occasionally exceptional, job on challenging natural language tasks that it hasn't seen before. At the same time, existing foundation models have the potential to inflict harm, and their characteristics are, in general, poorly understood.

These models, which are trained at scale, result in emergent capabilities, making it difficult to understand what their biases and failure modes are.
Yet the commercial incentives are for this technology to be deployed to society at large, Liang said.

Impacts

Foundation models are academically interesting, due to their stellar performance on popular benchmarks, but what makes them critical to study is the fact that they're being deployed with far-reaching consequences. For example, Google Search, which has 4 billion users, relies heavily on BERT. And GPT-3 is now being used in over 300 apps by tens of thousands of developers and producing 4.5 billion words per day.

As AI systems become deeply embedded in society, there have been growing concerns about their potential negative effects. Machine learning can perpetuate inequality as the trained models amplify biases in datasets. (Last year, an algorithm the U.K. government had adopted downgraded hundreds of thousands of students' grades, disproportionately impacting those from tuition-free schools.) Another concern is foundation models' ability to generate realistic text, images, and videos, which has the potential to scale disinformation in already polluted social media networks.

Foundation models could have other negative impacts, particularly from an environmental standpoint, the report's coauthors point out. The effects of model training on the environment have been brought into relief in recent years. In June 2020, researchers at the University of Massachusetts at Amherst released a study estimating that the amount of power required for training and searching a certain model involves the emissions of roughly 626,000 pounds of carbon dioxide, equivalent to nearly 5 times the lifetime emissions of the average U.S. car. OpenAI itself has conceded that models like GPT-3 require significant amounts of compute, on the order of hundreds of petaflops per day, which contributes to carbon emissions.

Foundation models are also likely to have substantial labor market impacts and rest on tenuous legal footing.
By 2022, an estimated 5 million jobs worldwide will be lost to automation technologies, with 47% of U.S. jobs at risk of being automated. Moreover, how the law bears on the development and deployment of foundational models remains unclear in the absence of unifying legal and regulatory frameworks.

It should be noted that preliminary work to address the liability questions is underway. Amsterdam and Helsinki have launched AI registries to detail how each city uses algorithms to deliver services. And the EU recently released tough draft rules on the use of AI, including a ban on most surveillance and strict safeguards for algorithms employed in recruitment, critical infrastructure, credit scoring, migration, and law enforcement.

Research ecosystem

Beyond the societal implications, foundation models introduce new hurdles in research and development, owing to the strong economic incentives companies have to deploy models developed for science. As an example, the coauthors cite GPT-3, which began as a research vehicle for OpenAI but later became a product widely used by software developers.

At the research community's peril, the distinction between theory and deployment is sometimes lost. Research models are under construction in the sense that they're often not extensively tested. Unfortunately, companies don't always place warning labels indicating this on their prototypes. To ensure safety, many more precautions should be taken when in-development models are made available commercially, the coauthors argue.

Taking the 10,000-foot view, the coauthors note that while trained models may be available, the actual training of foundation models is impossible for the vast majority of AI researchers, due to their high computational cost and engineering requirements. This lack of accessibility, and thus reproducibility, risks hindering innovation and impacting the health of AI as a scientific field.
It could also lead to a centralization of power among wealthier organizations, the coauthors say, aside from community efforts like EleutherAI and Hugging Face's BigScience project.

"While some meaningful research can still be done with training smaller models or studying preexisting large models, neither will be sufficient to make adequate progress on the difficult sociotechnical challenges," the report reads. "Due to the emergent nature of these models, some functionalities like in-context learning have only been demonstrated in models of sufficient size. Having access to existing models can be useful for powering downstream applications or to identify problems (e.g., bias), but this will not help us design better architectures or training objectives for foundation models."

As an antidote to the many problematic aspects of foundation models, CRFM's report suggests building infrastructure for public AI projects, like the Hubble Space Telescope and Large Hadron Collider. The coauthors point to the National Research Cloud, a nonprofit initiative to provide researchers with compute power and government datasets for education, as a step in the right direction. But they say that much more investment will be needed to fulfill the vision of an open, community-based effort that shapes the future of foundation models.

"Much still remains unclear in spite of our efforts, and we reiterate that this is just the beginning of a paradigm shift: Foundation models have only just begun to transform the way AI systems are built and deployed in the world," the report's coauthors wrote. "To ensure the responsible development and deployment of these models on durable foundations, we envision collaboration between different sectors, institutions, and disciplines from the onset to be especially critical."

Liang added: We're very much in the early days, so the professional norms are underdeveloped.
It's therefore imperative that we, as a community, act now to ensure that this technology is developed and deployed in an ethically and socially responsible fashion.
Content Creation/Content Synthesis/Detection and Monitoring/Prediction/Recommendation/Process Automation
Unknown
null
null
null
null
null
null
news
Symphony IndustrialAI Appoints Barry Johnson President of Digital Manufacturing
WOBURN, Mass., Aug. 11, 2021 /PRNewswire/ -- Symphony IndustrialAI announced today the appointment of Barry Johnson as president of Digital Manufacturing. The move supports Symphony IndustrialAI's rapid growth and its expansion into enterprise AI solutions for plant operations,...
https://www.prnewswire.com/news-releases/symphony-industrialai-appoints-barry-johnson-president-of-digital-manufacturing-301352860.html
2021-08-11T10:00:00Z
WOBURN, Mass., Aug. 11, 2021 /PRNewswire/ -- Symphony IndustrialAI announced today the appointment of Barry Johnson as president of Digital Manufacturing. The move supports Symphony IndustrialAI's rapid growth and its expansion into enterprise AI solutions for plant operations, visibility, and performance. The announcement comes on the heels of two additional key hires for Symphony IndustrialAI's digital manufacturing division: Vice President of Products Prashant Jagarlapudi and Chief Revenue Officer Ron Posey.

Johnson is an experienced senior executive with more than 25 years of demonstrated success in the industrial software sector, driving revenue growth and improving business performance internationally. Johnson previously served in multiple executive roles at Rockwell Automation, including global vice president of software sales. Before Rockwell, Barry held numerous software roles at GE, driving growth organically and inorganically.

"Enterprise AI in industrial applications has reached an inflection point, and Barry and his team are on the leading edge," said Symphony IndustrialAI Chief Executive Officer Dominic Gallello. "As a transformational change leader, Barry plays a key role in fueling growth through close work with sales, product management, engineering, and professional services teams. Prashant and Ron bring added power to our work in digital manufacturing and Enterprise AI."

"With the launch of the EurekaAI industrial platform and Symphony IndustrialAI's digital manufacturing solutions, we are working with leaders across industrial applications to make strides in enterprise AI adoption," said Johnson. "2021 is a year of immense transformation as we lay the digital foundation for the success of tomorrow's industrial and manufacturing champions.
There's no team better suited to accelerate this evolution than Symphony IndustrialAI."

These appointments follow Symphony IndustrialAI's introduction of the end-to-end EurekaAI platform for manufacturing and the acquisition of Savigent.

About Symphony IndustrialAI

Symphony IndustrialAI is an innovator in industrial insight, accelerating autonomous plant operations. The industry-leading EurekaAI/IoT platform and industrial optimization solutions connect tens of thousands of assets and workflows in manufacturing plants globally and process billions of data points daily, pushing new plateaus in operational intelligence. Symphony IndustrialAI digital manufacturing solutions connect devices, processes, people, and systems, enabling harmonized plant automation and control. Symphony IndustrialAI plant performance applications span asset predictive maintenance and process health and optimization, maintaining high availability of equipment, extending the life of capital assets, and reducing process variability. Symphony IndustrialAI solutions provide high value to their users by driving variability out of processes and optimizing operations for throughput, yield, energy efficiency, and sustainability.

About SymphonyAI

SymphonyAI is building the leading enterprise AI company for digital transformation across the most important and resilient growth verticals, including life sciences, healthcare, retail, consumer packaged goods, financial services, manufacturing, and media. In each of these verticals, SAI businesses have many of the leading enterprises as clients. SAI is backed by a $1 billion commitment from Dr. Romesh Wadhwani, a successful entrepreneur and philanthropist. Since its founding in 2017, SymphonyAI has grown rapidly to a combined revenue run rate of more than $300 million and over 2,200 talented leaders, data scientists, and other professionals.

PR contact: Tylor Fenhaus, [email protected]

SOURCE Symphony IndustrialAI
Unknown
Management/Computer and Mathematical
null
null
null
null
null
null
news
Ying Zeng
abstract,data,ying zeng,paper,problem,deep neural networks,communication,applications,deep learning models,adversarial examples,spiking neural network,analysis,wireless communications,existing,performance
https://arxiv.org/search/cs?searchtype=author&query=Zeng%2C+Y
2021-09-07T04:17:00Z
Hybrid Contrastive Learning of Tri-Modal Representation for Multimodal Sentiment Analysis

Authors: Sijie Mai, Ying Zeng, Shuangjia Zheng, Haifeng Hu

Abstract: The wide application of smart devices enables the availability of multimodal data, which can be utilized in many tasks. In the field of multimodal sentiment analysis (MSA), most previous works focus on exploring intra- and inter-modal interactions. However, training a network with cross-modal information (language, visual, audio) is still challenging due to the modality gap, and existing methods still cannot ensure to sufficiently learn intra-/inter-modal dynamics. Besides, while learning dynamics within each sample draws great attention, the learning of inter-class relationships is neglected. Moreover, the size of datasets limits the generalization ability of existing methods. To address the aforementioned issues, we propose a novel framework, HyCon, for hybrid contrastive learning of tri-modal representation. Specifically, we simultaneously perform intra-/inter-modal contrastive learning and semi-contrastive learning (that is why we call it hybrid contrastive learning), with which the model can fully explore cross-modal interactions, preserve inter-class relationships, and reduce the modality gap. Besides, a refinement term is devised to prevent the model from falling into a sub-optimal solution. Moreover, HyCon can naturally generate a large amount of training pairs for better generalization and reduce the negative effect of limited datasets.
Extensive experiments on public datasets demonstrate that our proposed method outperforms existing works.

Submitted 4 September, 2021; originally announced September 2021. Comments: Under Review.

Channel Knowledge Map for Environment-Aware Communications: EM Algorithm for Map Construction

Authors: Kun Li, Peiming Li, Yong Zeng, Jie Xu

Abstract: Channel knowledge map (CKM) is an emerging technique to enable environment-aware wireless communications, in which databases with location-specific channel knowledge are used to facilitate or even obviate real-time channel state information acquisition. One fundamental problem for CKM-enabled communication is how to efficiently construct the CKM based on finite measurement data points at limited user locations. Towards this end, this paper proposes a novel map construction method based on the expectation maximization (EM) algorithm, by utilizing the available measurement data jointly with the expert knowledge of well-established statistical channel models. The key idea is to partition the available data points into different groups, where each group shares the same modelling parameter values to be determined. We show that determining the modelling parameter values can be formulated as a maximum likelihood estimation problem with latent variables, which is then efficiently solved by the classic EM algorithm. Compared to pure data-driven methods such as nearest-neighbor-based interpolation, the proposed method is more efficient since only a small number of modelling parameters need to be determined and stored.
Furthermore, the proposed method is extended for constructing a specific type of CKM, namely the channel gain map (CGM), where closed-form expressions are derived for the E-step and M-step of the EM algorithm. Numerical results are provided to show the effectiveness of the proposed map construction method as compared to the benchmark curve-fitting method with one single model.

Submitted 16 August, 2021; originally announced August 2021. Comments: 7 pages, 7 figures.

Asteria: Deep Learning-based AST-Encoding for Cross-platform Binary Code Similarity Detection

Authors: Shouguo Yang, Long Cheng, Yicheng Zeng, Zhe Lang, Hongsong Zhu, Zhiqiang Shi

Abstract: Binary code similarity detection is a fundamental technique for many security applications such as vulnerability search, patch analysis, and malware detection. There is an increasing need to detect similar code for vulnerability search across architectures with the increase of critical vulnerabilities in IoT devices. The variety of IoT hardware architectures and software platforms requires capturing semantic equivalence of code fragments in the similarity detection. However, existing approaches are insufficient in capturing the semantic similarity. We notice that the abstract syntax tree (AST) of a function contains rich semantic information. Inspired by successful applications of natural language processing technologies in sentence semantic understanding, we propose a deep learning-based AST-encoding method, named ASTERIA, to measure the semantic equivalence of functions in different platforms.
Our method leverages the Tree-LSTM network to learn the semantic representation of a function from its AST. Then the similarity detection can be conducted efficiently and accurately by measuring the similarity between two representation vectors. We have implemented an open-source prototype of ASTERIA. The Tree-LSTM model is trained on a dataset with 1,022,616 function pairs and evaluated on a dataset with 95,078 function pairs. Evaluation results show that our method outperforms the AST-based tool Diaphora and the state-of-the-art method Gemini by large margins with respect to binary similarity detection. And our method is several orders of magnitude faster than Diaphora and Gemini for the similarity calculation. In the application of vulnerability search, our tool successfully identified 75 vulnerable functions in 5,979 IoT firmware images.

Submitted 13 August, 2021; originally announced August 2021. Journal ref: 2021 51st Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN).

Domain Adaptation for Autoencoder-Based End-to-End Communication Over Wireless Channels

Authors: Jayaram Raghuram, Yijing Zeng, Dolores García Martí, Somesh Jha, Suman Banerjee, Joerg Widmer, Rafael Ruiz Ortiz

Abstract: The problem of domain adaptation conventionally considers the setting where a source domain has plenty of labeled data, and a target domain (with a different data distribution) has plenty of unlabeled data but none or very limited labeled data.
In this paper, we address the setting where the target domain has only limited labeled data from a distribution that is expected to change frequently. We first propose a fast and light-weight method for adapting a Gaussian mixture density network (MDN) using only a small set of target domain samples. This method is well-suited for the setting where the distribution of target data changes rapidly (e.g., a wireless channel), making it challenging to collect a large number of samples and retrain. We then apply the proposed MDN adaptation method to the problem of end-to-end learning of a wireless communication autoencoder. A communication autoencoder models the encoder, decoder, and the channel using neural networks, and learns them jointly to minimize the overall decoding error rate. However, the error rate of an autoencoder trained on a particular (source) channel distribution can degrade as the channel distribution changes frequently, not allowing enough time for data collection and retraining of the autoencoder to the target channel distribution. We propose a method for adapting the autoencoder without modifying the encoder and decoder neural networks, adapting only the MDN model of the channel. The method utilizes feature transformations at the decoder to compensate for changes in the channel distribution, and effectively presents to the decoder samples close to the source distribution. Experimental evaluation on simulated datasets and real mmWave wireless channels demonstrates that the proposed methods can quickly adapt the MDN model, and improve or maintain the error rate of the autoencoder under changing channel conditions.

Submitted 2 August, 2021; originally announced August 2021. Comments: Under Review.
22 pages, 8 figures.

Complexity-Free Generalization via Distributionally Robust Optimization. Authors: Henry Lam, Yibo Zeng. Abstract: Established approaches to obtaining generalization bounds in data-driven optimization and machine learning mostly build on solutions from empirical risk minimization (ERM), which depend crucially on the functional complexity of the hypothesis class. In this paper, we present an alternate route to obtaining these bounds on the solution from distributionally robust optimization (DRO), a recent data-driven optimization framework based on worst-case analysis and the notion of an ambiguity set to capture statistical uncertainty. In contrast to the hypothesis class complexity in ERM, our DRO bounds depend on the ambiguity set geometry and its compatibility with the true loss function. Notably, when using maximum mean discrepancy as a DRO distance metric, our analysis implies, to the best of our knowledge, the first generalization bound in the literature that depends solely on the true loss function, entirely free of any complexity measures or bounds on the hypothesis class. Submitted 21 June, 2021; originally announced June 2021.

Multi-User Communication with Extremely Large-Scale MIMO. Authors: Haiquan Lu, Yong Zeng. Abstract: Extremely large-scale multiple-input multiple-output (XL-MIMO) communication aims to boost the antenna size significantly beyond that of current massive MIMO systems, for which the conventional far-field assumption with the uniform plane wave (UPW) model may become invalid.
This paper studies the modelling and performance analysis of multi-user XL-MIMO communication. With spherical wavefront phase modelling, and by taking into account the variations of signal amplitude and projected aperture across array elements, the performance of three typical beamforming schemes is analyzed, namely maximal-ratio combining (MRC), zero-forcing (ZF), and minimum mean-square error (MMSE) beamforming. For the special case of two users, we analytically show that the signal-to-interference-plus-noise ratio (SINR) of all three beamforming schemes increases as the channels' correlation coefficient decreases. Furthermore, compared to the existing UPW model, where inter-user interference (IUI) can only be suppressed in the angular domain, XL-MIMO enables a new degree of freedom (DoF) for IUI suppression by distance separation, even for users along the same direction. Simulation results are provided to validate the modelling and performance analysis of multi-user XL-MIMO communications. Submitted 12 June, 2021; originally announced June 2021. Comments: 5 pages, 8 figures.

Cell-Free Symbiotic Radio: Channel Estimation Method and Achievable Rate Analysis. Authors: Zhuoyin Dai, Ruoguang Li, Jingran Xu, Yong Zeng, Shi Jin. Abstract: Cell-free massive MIMO and symbiotic radio are a promising beyond-5G (B5G) networking architecture and transmission technology, respectively.
This paper studies cell-free symbiotic radio systems, where a number of distributed access points (APs) cooperatively send primary information to a receiver and simultaneously support the backscattering communication of a secondary backscatter device (BD). An efficient two-phase uplink-training-based channel estimation method is proposed to estimate the direct-link channel and the cascaded backscatter channel, and the achievable primary and secondary communication rates are derived, taking into account the channel estimation errors. Furthermore, to achieve a flexible trade-off between the primary and secondary communication rates, we propose a low-complexity weighted-maximal-ratio-transmission (weighted-MRT) beamforming scheme, which only requires local processing at each AP without having to exchange the estimated channel state information. Simulation results are provided to show the impact of the channel training lengths on the performance of cell-free symbiotic radio systems. Submitted 10 June, 2021; originally announced June 2021. Comments: 6 pages, 3 figures, conference.

Wireless Communication with Extremely Large-Scale Intelligent Reflecting Surface. Authors: Chao Feng, Haiquan Lu, Yong Zeng, Shi Jin, Rui Zhang. Abstract: Intelligent reflecting surface (IRS) is a promising technology for wireless communications, thanks to its potential capability to engineer the radio environment.
However, in practice, such an envisaged benefit is attainable only when the passive IRS is of a sufficiently large size, for which the conventional uniform plane wave (UPW)-based channel model may become inaccurate. In this paper, we pursue a new channel modelling and performance analysis for wireless communications with extremely large-scale IRS (XL-IRS). By taking into account the variations in signal amplitude and projected aperture across different reflecting elements, we derive both lower and upper bounds of the received signal-to-noise ratio (SNR) for the general uniform planar array (UPA)-based XL-IRS. Our results reveal that, instead of scaling quadratically with the number of reflecting elements M as in the conventional UPW model, the SNR under the more practically applicable non-UPW model increases with M only with a diminishing return and eventually saturates. To gain more insights, we further study the special case of a uniform linear array (ULA)-based XL-IRS, for which a closed-form SNR expression in terms of the IRS size and the transmitter/receiver location is derived. This result shows that the SNR mainly depends on the two geometric angles formed by the transmitter/receiver locations with the IRS, as well as the boundary points of the IRS. Numerical results validate our analysis and demonstrate the importance of proper channel modelling for wireless communications aided by XL-IRS.
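To make the quadratic-versus-saturating SNR scaling concrete, here is a toy numerical sketch. It is not the paper's exact model: the geometry (transmitter and receiver both at distance D from the centre of a ULA-based IRS), the 30 GHz carrier, and the free-space amplitude law are all illustrative assumptions. Under the far-field UPW assumption every element contributes equally, so the optimally phase-aligned gain grows like M^2; under a simple spherical-wave model each element's cascaded amplitude decays with its actual distance and projected aperture, so the gain saturates.

```python
import math

# Illustrative constants (assumptions, not from the paper)
WAVELENGTH = 0.01            # 30 GHz carrier, metres
SPACING = WAVELENGTH / 2     # half-wavelength element spacing
D = 5.0                      # Tx/Rx distance to the IRS centre, metres

def snr_gain(m, spherical=True):
    """Relative SNR gain of an m-element ULA IRS with optimal phase shifts."""
    if not spherical:
        return float(m) ** 2  # UPW: identical element amplitudes, quadratic scaling
    total = 0.0
    for k in range(m):
        x = (k - (m - 1) / 2) * SPACING   # element coordinate along the array axis
        r = math.hypot(D, x)              # distance element <-> Tx (= Rx here)
        aperture = D / r                  # projected-aperture factor cos(theta)
        total += aperture / (r * r)       # cascaded Tx -> element -> Rx amplitude
    return total ** 2                     # coherent (phase-aligned) combining

for m in (100, 10_000, 1_000_000):
    print(m, snr_gain(m, spherical=False), round(snr_gain(m), 3))
```

Under these assumptions the UPW gain keeps growing by a factor of 10^4 per hundredfold increase in M, while the spherical-wave gain barely moves once the array is much larger than the link distance, mirroring the diminishing-return behaviour described in the abstract.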
Submitted 10 June, 2021; originally announced June 2021. Comments: 6 pages, 5 figures, conference.

A Unified Framework for Task-Driven Data Quality Management. Authors: Tianhao Wang, Yi Zeng, Ming Jin, Ruoxi Jia. Abstract: High-quality data is critical to train performant Machine Learning (ML) models, highlighting the importance of Data Quality Management (DQM). Existing DQM schemes often cannot satisfactorily improve ML performance because, by design, they are oblivious to downstream ML tasks. Besides, they cannot handle various data quality issues (especially those caused by adversarial attacks) and have limited applications to only certain types of ML models. Recently, data valuation approaches (e.g., based on the Shapley value) have been leveraged to perform DQM; yet, empirical studies have observed that their performance varies considerably based on the underlying data and training process. In this paper, we propose a task-driven, multi-purpose, model-agnostic DQM framework, DataSifter, which is optimized towards a given downstream ML task, capable of effectively removing data points with various defects, and applicable to diverse models. Specifically, we formulate DQM as an optimization problem and devise a scalable algorithm to solve it. Furthermore, we propose a theoretical framework for comparing the worst-case performance of different DQM strategies. Remarkably, our results show that the popular strategy based on the Shapley value may end up choosing the worst data subset in certain practical scenarios.
Our evaluation shows that DataSifter matches, and most often significantly improves on, state-of-the-art performance over a wide range of DQM tasks, including backdoor, poison, and noisy/mislabeled data detection, data summarization, and data debiasing. Submitted 9 June, 2021; originally announced June 2021.

BackEISNN: A Deep Spiking Neural Network with Adaptive Self-Feedback and Balanced Excitatory-Inhibitory Neurons. Authors: Dongcheng Zhao, Yi Zeng, Yang Li. Abstract: Spiking neural networks (SNNs) transmit information through discrete spikes, which makes them well suited to processing spatial-temporal information. Due to their non-differentiable characteristics, it remains difficult to design well-performing SNNs. Recently, SNNs trained with backpropagation have shown superior performance thanks to gradient approximation. However, their performance on complex tasks is still far from that of deep neural networks. Taking inspiration from the autapse in the brain, which connects spiking neurons with a self-feedback connection, we apply an adaptive time-delayed self-feedback on the membrane potential to regulate spike precision. In addition, we apply a balanced excitatory and inhibitory neuron mechanism to dynamically control the spiking neurons' output. Combining the two mechanisms, we propose a deep spiking neural network with adaptive self-feedback and balanced excitatory and inhibitory neurons (BackEISNN).
Experimental results on several standard datasets show that the two modules not only accelerate the convergence of the network but also improve its accuracy. Our model achieves state-of-the-art performance on the MNIST, FashionMNIST, and N-MNIST datasets. On the CIFAR10 dataset, BackEISNN also achieves remarkable performance with a relatively light structure that competes against state-of-the-art SNNs. Submitted 27 May, 2021; originally announced May 2021.

BSNN: Towards Faster and Better Conversion of Artificial Neural Networks to Spiking Neural Networks with Bistable Neurons. Authors: Yang Li, Yi Zeng, Dongcheng Zhao. Abstract: The spiking neural network (SNN) computes and communicates information through discrete binary events. It is considered more biologically plausible and more energy-efficient than artificial neural networks (ANNs) on emerging neuromorphic hardware. However, due to its discontinuous and non-differentiable characteristics, training an SNN is a relatively challenging task. Recent work has achieved substantial progress by converting ANNs to SNNs. Due to the difference in information processing, however, the converted deep SNN usually suffers serious performance loss and large time delay. In this paper, we analyze the reasons for the performance loss and propose a novel bistable spiking neural network (BSNN) that addresses the problem of spikes of inactivated neurons (SIN) caused by phase lead and phase lag.
Also, when ResNet-structured ANNs are converted, the information of the output neurons is incomplete due to the rapid transmission along the shortcut path. We design synchronous neurons (SN) to efficiently improve performance. Experimental results show that the proposed method needs only 1/4 to 1/10 of the time steps of previous work to achieve nearly lossless conversion. We demonstrate state-of-the-art ANN-SNN conversion for VGG16, ResNet20, and ResNet34 on challenging datasets including CIFAR-10 (95.16% top-1), CIFAR-100 (78.12% top-1), and ImageNet (72.64% top-1). Submitted 26 May, 2021; originally announced May 2021.

CARLS: Cross-platform Asynchronous Representation Learning System. Authors: Chun-Ta Lu, Yun Zeng, Da-Cheng Juan, Yicheng Fan, Zhe Li, Jan Dlabal, Yi-Ting Chen, Arjun Gopalan, Allan Heydon, Chun-Sung Ferng, Reah Miyara, Ariel Fuxman, Futang Peng, Zhen Li, Tom Duerig, Andrew Tomkins. Abstract: In this work, we propose CARLS, a novel framework for augmenting the capacity of existing deep learning frameworks by enabling multiple components -- model trainers, knowledge makers, and knowledge banks -- to work together concertedly in an asynchronous fashion across hardware platforms. CARLS is particularly suitable for learning paradigms where model training benefits from additional knowledge inferred or discovered during training, such as node embeddings for graph neural networks or reliable pseudo-labels from model predictions.
We also describe three learning paradigms -- semi-supervised learning, curriculum learning, and multimodal learning -- as examples that can be scaled up efficiently by CARLS. One version of CARLS has been open-sourced and is available for download at: https://github.com/tensorflow/neural-structured-learning/tree/master/research/carls Submitted 26 May, 2021; originally announced May 2021.

Communicating with Extremely Large-Scale Array/Surface: Unified Modelling and Performance Analysis. Authors: Haiquan Lu, Yong Zeng. Abstract: Wireless communications with extremely large-scale arrays (XL-arrays) correspond to systems whose antenna sizes are so large that conventional modelling assumptions, such as uniform plane wave (UPW) impingement, are no longer valid. This paper studies the mathematical modelling and performance analysis of XL-array communications. By deviating from the conventional modelling approach that treats the array elements as sizeless points, we explicitly model their physical area/aperture, which enables a unified modelling of classical discrete antenna arrays and emerging continuous surfaces. As such, a generic array/surface model is proposed that accurately takes into account the variations of signal phase, amplitude, and projected aperture across array elements. Based on the proposed model, a closed-form expression of the resulting SNR with the optimal single-user MRC/MRT beamforming is derived.
The expression reveals that, instead of scaling linearly with the antenna number M as in conventional UPW modelling, the SNR under the more generic model increases with M with a diminishing return, governed by the collective properties of the array, such as the array occupation ratio and the physical sizes of the array along each dimension, irrespective of the properties of the individual array elements. Additionally, we derive an alternative insightful expression for the optimal SNR in terms of the vertical and horizontal angular spans. Furthermore, we show that our derived results include far-field UPW modelling as a special case. One important finding during the study of the far-field approximation is the necessity of a new distance criterion to complement the classical Rayleigh distance, termed the uniform-power distance (UPD), which concerns the signal amplitude/power variations across array elements rather than the phase variations addressed by the Rayleigh distance. Submitted 27 April, 2021; originally announced April 2021. Comments: 15 pages, 13 figures.

Rethinking the Backdoor Attacks' Triggers: A Frequency Perspective. Authors: Yi Zeng, Won Park, Z. Morley Mao, Ruoxi Jia. Abstract: Backdoor attacks have been considered a severe security threat to deep learning. Such attacks can make models perform abnormally on inputs with predefined triggers while still retaining state-of-the-art performance on clean data.
While backdoor attacks have been thoroughly investigated in the image domain from both the attackers' and defenders' sides, an analysis in the frequency domain has been missing thus far. This paper first revisits existing backdoor triggers from a frequency perspective and performs a comprehensive analysis. Our results show that many current backdoor attacks exhibit severe high-frequency artifacts, which persist across different datasets and resolutions. We further demonstrate that these high-frequency artifacts enable a simple way to detect existing backdoor triggers, at a detection rate of 98.50%, without prior knowledge of the attack details or the target model. Acknowledging previous attacks' weaknesses, we propose a practical way to create smooth backdoor triggers without high-frequency artifacts and study their detectability. We show that existing defense works can benefit from incorporating these smooth triggers into their design considerations. Moreover, we show that a detector tuned on stronger smooth triggers can generalize well to unseen weak smooth triggers. In short, our work emphasizes the importance of frequency analysis when designing both backdoor attacks and defenses in deep learning. Submitted 9 April, 2021; v1 submitted 7 April, 2021; originally announced April 2021.

3D Human Body Reshaping with Anthropometric Modeling. Authors: Yanhong Zeng, Jianlong Fu, Hongyang Chao. Abstract: Reshaping accurate and realistic 3D human bodies from anthropometric parameters (e.g., height, chest size, etc.)
poses a fundamental challenge for person identification, online shopping, and virtual reality. Existing approaches for creating such 3D shapes often rely on complex measurement by range cameras or high-end scanners, which either involves heavy expense or results in low quality. Moreover, this high-quality equipment limits existing approaches in real applications, because it is not easily accessible to common users. In this paper, we design a 3D human body reshaping system built on a novel feature-selection-based local mapping technique, which enables automatic anthropometric parameter modeling for each body facet. Notably, the proposed approach can take a limited set of anthropometric parameters (i.e., 3-5 measurements) as input, which avoids complex measurement and thus yields a more user-friendly experience in real scenarios. Specifically, the proposed reshaping model consists of three steps. First, we calculate full-body anthropometric parameters from the limited user inputs by an imputation technique, so that the essential anthropometric parameters for 3D body reshaping can be obtained. Second, we select the most relevant anthropometric parameters for each facet by adopting relevance masks, which are learned offline by the proposed local mapping technique. Third, we generate the 3D body meshes by mapping matrices, which are learned by linear regression from the selected parameters to a mesh-based body representation. We conduct experiments with an anthropomorphic evaluation and a user study of 68 volunteers. Experiments show the superior results of the proposed system in terms of mean reconstruction error against state-of-the-art approaches. Submitted 5 April, 2021; originally announced April 2021. Comments: ICIMCS 2017 (oral). The final publication is available at Springer via https://doi.org/10.1007/978-981-10-8530-7_10 Journal ref: In International Conference on Internet Multimedia Computing and Service (pp. 96-107).
Springer, Singapore (2017).

Aggregated Contextual Transformations for High-Resolution Image Inpainting. Authors: Yanhong Zeng, Jianlong Fu, Hongyang Chao, Baining Guo. Abstract: State-of-the-art image inpainting approaches can suffer from generating distorted structures and blurry textures in high-resolution images (e.g., 512x512). The challenges mainly derive from (1) image content reasoning from distant contexts, and (2) fine-grained texture synthesis for a large missing region. To overcome these two challenges, we propose an enhanced GAN-based model, named Aggregated CO…
Content Synthesis/Decision Making
Unknown
news
Dom Nicastro
Verint Acquires Conversocial, Salesforce Wants to Be Netflix for B2B & More CX News
Verint acquires Conversocial, NICE acquires GoMoxie, and more from the world of customer experience and digital marketing news.
https://www.cmswire.com/customer-experience/verint-acquires-conversocial-salesforce-wants-to-be-netflix-for-b2b-more-cx-news/
https://www.cmswire.com/-/media/9f97b64dc9a94e6f8ecc51583b98e08a.ashx
2021-08-13T15:24:41Z
Verint, which provides customer engagement software, has acquired Conversocial for $50 million. Verint's support for digital customer engagement will be boosted by the acquisition, with connections to messaging channels including Apple Business Chat, Facebook Messenger, Twitter and WhatsApp. The Verint Cloud Platform features:

- Conversational channels
- Conversational AI that automates personalized communications on the customer's channel of choice
- Orchestration of customer journeys across channels of choice
- Connections to AI-powered knowledge management across all channels
- Capturing conversation, interaction and experience data from all channels and applying advanced analytics

Conversocial has approximately 80 employees with offices in New York and London. The acquisition is expected to close in Verint's third fiscal quarter. In other customer experience and digital marketing software news...

NICE Acquires GoMoxie
NICE, a provider of digital customer experience software, has announced the acquisition of GoMoxie, which offers digital assistance tools. With the addition of GoMoxie, NICE is expanding beyond the contact center and into smart conversational self-service. This move further extends NICE's set of digital CX assets, including CXone Expert, an AI-powered knowledge management solution for digital self-service, CXone SmartReach, a conversational AI solution, and CXone Omnichannel Routing, supporting experiences over 35 digital channels. All are offered as part of CXone, a digital customer engagement platform powered by Enlighten AI, NICE's self-learning AI engine.

Salesforce Offers Video Streaming Service
Netflix, Hulu and... Salesforce? Wait, what? Salesforce has announced Salesforce+, a streaming service with live and on-demand content. Salesforce+ includes live experiences, original series, podcasts and other programming.
Salesforce officials say it will inspire users of its software to learn new skills, pursue new career opportunities and "drive change in the world." "Just as brands like Disney, Netflix and Peloton have done with streaming services for consumers, Salesforce+ is providing an always-on, business media platform that builds trusted relationships with customers and a sense of belonging for the business community," Sarah Franklin, president and chief marketing officer for Salesforce, said in a statement. The current Salesforce+ lineup features:

- Leading Through Change, launched in March 2020 as a weekly program focusing on how business leaders were dealing with the global pandemic.
- Connections, showcasing marketers from companies like IBM, Levi's, and GoFundMe.
- The Inflection Point, featuring CEOs from brands such as Coca-Cola, PayPal, Honeywell and Workday sharing how their personal backstories, professional influences and values inform their leadership.

Salesforce+ will be available to a global audience just as Dreamforce arrives in September.

Influitive Releases Multilingual Capabilities
Influitive Corporation, which provides customer advocacy, community and engagement software, has announced the full release of its multilingual capabilities. Companies can have customer-facing touchpoints delivered in the language that best suits a company's global audience, from the invitation email and sign-up page to the homepage, content and rewards. Influitive supports eight languages out of the box:

- French
- German
- Spanish
- Portuguese
- Italian
- Chinese (Simplified)
- Japanese
- Korean

Leveraging Influitive's Profile Fields, program managers can edit and invite members in their preferred language or have members select a language during their gamified onboarding experience.
It also includes previews of the member's experience in their preferred language through the Influitive feature Lenses. Influitive's multilingual features are available immediately.

Shutterstock Announces Integration With OpenText
Shutterstock has announced an API integration with OpenText. The integration will offer Shutterstock Enterprise and OpenText customers direct access to 380 million-plus Shutterstock images via OpenText Media Management. OpenText Media Management is a digital asset management (DAM) solution for brands and publishers.

Aprimo Adds DAM Feature
Aprimo, a provider of digital asset management and work management solutions, has announced the addition of Content Return on Effort (ROE) to its SaaS content operations platform. Content ROE showcases how assets perform across campaigns in the context of the effort to create and distribute them. Content Return on Effort gives content and creative teams a more complete picture of performance than ROI does on its own, according to Aprimo officials. Content Return on Effort is calculated for assets stored in Aprimo Digital Asset Management, natively capturing impressions that can be analyzed and viewed by source, medium or other tracking parameters.

LogMeIn Names Bill Robinson as Chief Revenue Officer
LogMeIn, a provider of cloud-based SaaS solutions such as GoToConnect, GoToMeeting, LastPass and Rescue, has appointed software sales veteran Bill Robinson to its newly created Chief Revenue Officer (CRO) role. At LogMeIn, Robinson will lead Global Sales, Customer Experience and Business Operations. Robinson joins LogMeIn from Contact Center as a Service (CCaaS) company NICE, where he served as executive vice president of sales; that role included a strategic alliance with LogMeIn's Unified Communications as a Service (UCaaS) product GoToConnect.
Digital Assistance/Content Synthesis/Process Automation
Management/Business and Financial Operations
news
Kerry O'Shea Gorgone
How to Implement Artificial Intelligence in Marketing: Rajkumar Venkatesan on Marketing Smarts [Podcast]
Rajkumar Venkatesan, co-author of "The AI Marketing Canvas: A Five-Stage Road Map to Implementing Artificial Intelligence in Marketing," gives us a sneak peek at the road map and offers insight into how marketers can get started using AI. Read the full article at MarketingProfs
https://www.marketingprofs.com/podcasts/2021/45523/artificial-intelligence-raj-venkatesan-marketing-smarts
https://i.marketingprofs…enkatesan-lg.jpg
2021-08-19T14:00:00Z
Artificial intelligence (AI) and machine learning (ML) have quickly grown beyond a few major tech companies and hardcore academic researchers. Every marketing organization can tap into the power of AI to streamline operations and grow the business. The new book The AI Marketing Canvas: A Five-Stage Road Map to Implementing Artificial Intelligence in Marketing provides a growth framework for business and marketing leaders to implement AI using a five-stage model called the "AI Marketing Canvas." On this episode of Marketing Smarts, I speak with co-author Rajkumar Venkatesan about how he and his co-writer developed those stages by studying leading global brands. We cover examples of brands, including Google, Lyft and Coca-Cola, that have successfully woven AI into their marketing strategies. This is not a conversation about coding for AI models. Raj and I talk about how marketing leaders can go from "zero to hero" with AI in marketing, and what that means for your team and your company culture. Listen to the entire show now from the link above, or download the mp3 and listen at your convenience. Of course, you can also subscribe to the Marketing Smarts podcast in iTunes or via RSS and never miss an episode. This episode is brought to you by PayPal. PayPal makes financial services and commerce more convenient, affordable, and secure. "Marketing Smarts" theme music composed by Juanito Pascual of Signature Tones.

Kerry O'Shea Gorgone: Welcome to the Marketing Smarts Podcast. I'm here with Raj Venkatesan, co-author of the book The AI Marketing Canvas: A Five-Stage Road Map to Implementing Artificial Intelligence in Marketing. He's the Ronald Trzcinski Professor of Business Administration in the Darden Graduate School of Business Administration in Virginia. His writing has appeared in the Journal of Marketing and the Harvard Business Review. Let's just say he's very smart and knows a lot of things about a lot of things.
Kerry: How did you realize, Raj, that AI was going to be powerful for marketing specifically? When you learned it, did you learn it in the context of marketing, or did you learn it in some other context first?

Rajkumar Venkatesan: Thank you, Kerry, for having me. It's a pleasure to be here. Thank you for your kind introduction. The story of how I started working on AI in marketing (it's a really interesting question you ask about how I learned about AI first) goes back to the mid-90s, when I was doing my undergrad in computer engineering. In my final year, I learned about neural networks, and my final-year project was on genetic algorithms. Those are some of the things that are in the AI toolkit now. I started there. When I did my PhD, my advisor recognized that computers and technology were going to shape marketing in the future and asked if I wanted to work on that. I didn't know much about how things were going to turn out, and I said I'd do it. That's how I started. So, I came into this as a computer engineer, but as I started taking classes in economics, marketing, and consumer behavior, I found it fascinating to hear about all of these new subjects, and it also gave me a sense of where the tools I was learning could be applied and how they could actually be valuable. I persisted, and I started teaching; I came to UVA in 2006 after teaching at the University of Connecticut. I started teaching marketing analytics. Darden is a special case in that we value practice and connection with practicing managers, and we use the case method, which really teaches me about marketing and about how managers are using data and what challenges they're solving every day. As I was teaching my course, writing my case studies, and doing my research, I started seeing that more and more you now have data and technology influencing marketing.
That's how the worlds kind of came together for me, and I started working on this book.

Kerry: I think a lot of us as marketers probably see the real-world results of AI at work. Like, Starbucks knows I always get this certain thing, and it will show me the thing every time I open the app, and it will suggest other things based on my history, and that kind of thing. I can see that in the B-to-C context really clearly. What does AI look like for B-to-B marketers?

Raj: Great question. In fact, when I started working on data in marketing and did my work on customer lifetime value, it was in the context of B-to-B. I think one real advantage in B-to-B is that firms have a direct relationship with their customers most of the time. Of course, they can sell through resellers, but to a large extent B-to-B firms know their customers. With Salesforce and other software today, there is a lot of information about how salespeople are connecting with their customers and what the customers are talking about. Of course, there are challenges, like getting salespeople to actually put the data in, and how much coverage there is, and all of that. I think where it can be really useful is that you know who your customers are and what they are buying, and you know a lot more detail about how they're using your product because of your client relationship specialists and managers. You're really embedded with your customers, which gives you information about how they are using your products. With cloud and IoT, there's a lot more information coming these days about usage.

Kerry: And lack of usage.

Raj: Or lack of usage, yes. I think the potential is there.
A lot of potential is there for B-to-B: in sales call planning, whys of customer, coming up with new product updates, enabling a robust customer community through educational videos and FAQs, or even understanding whether a customer has come to a webinar, asked questions in the webinar, or browsed your website. Information like that is really useful for salespeople; when they talk to the customer in a follow-up call after that, they can start becoming relevant and personal. I think B-to-B is definitely transforming. There's a lot more interaction, like web 2.0 and whitepapers and content marketing, and all of that is rich with data. These are ripe, fertile areas for B-to-B marketers to use data to improve the effectiveness of their marketing activities.

Kerry: So, you could create a B-to-B content hub and then analyze the heck out of people's interaction with it?

Raj: Yes, absolutely. Think about if you have a product. Let's say you're Siemens and you're making parts, maybe engines for trains or aircraft, as an example. There are many different parts that go into it, and you're also supplying the parts. Knowing what kind of whitepapers your customer is downloading can tell you whether they're interested in maintenance, or in customizing your product, or in understanding how your part fits within their project. It really helps your salespeople hone in on that when they make that follow-up call. Also, in the emails you send, you can start personalizing how you present your company to the customers.

Kerry: Can you talk to me about some of the research that went into the book?

Raj: With my co-author Jim Lecinski, I think we each came in with our own perspectives. He worked at Google for a while before he joined Northwestern.
I was teaching marketing analytics, and I started teaching digital marketing in a course called Marketing Technology Products, where I would spend a week in San Francisco with my students talking, listening, and visiting companies. It gave me a look into how the world is evolving and what new technologies are coming. When we put all of this together, we started out with a dump of what we thought we knew, and then we started talking to brands who were really on this journey. All of the brands are featured in our book, from the Washington Post, Coca-Cola, Unilever, Ancestry.com, CarMax, and several others. The in-depth interviews with managers who are really trying to bring data into their marketing started giving us a picture, and a consistent pattern, of how they go about building this AI capability in their functions. That's what led us to come up with this canvas of where to begin and what the steps are to gain this transformative capability for their organizations.

Kerry: Not to give away the store, because we want everybody to buy the book, obviously, but can you talk about the five-stage roadmap at a high level?

Raj: Absolutely. I'm happy to. The five stages are foundation, experimentation, expansion, transformation, and monetization. What we mean by that is, for AI to work, the raw material is data. But it's not just any data; it is data that is focused on the customers. There's an inventory database, there's a finance database, there's a procurement database, but all of that needs to connect, and that's a challenge. For marketers, what that really means is: Customer A, how are they interacting with the firm? When did they first start buying? What are the installations? Which salespeople did they talk to? Having a foundation of that kind of first-party, customer-focused data is important. Then we talk about letting a thousand flowers bloom.
Trying different things, but looking at one aspect of customer engagement: acquisition, retention, growth, or advocacy. You're really trying different things, looking at ROI, and seeing where the biggest bang for your buck is. Once you learn what is working for you, then you slowly expand into other aspects of customer engagement. Eventually you reach a place where, having so far used vendors and off-the-shelf products, you have to really invest: a board-level decision to invest in your own data science team, build it in-house, or buy an AI company. The last stage is very fascinating. Companies that reach it have started taking all of these capabilities they've built in-house, turned around, and built them into a platform, selling AI as a service to other companies and creating a new revenue stream. It was really interesting to see how every company that reached this transformation has turned around and done a services platform.

Kerry: What industries, then, lend themselves best to this? Which can benefit the most? It definitely seems like data can benefit just about any industry, but when I hear you talk about services, that seems like an area really ripe to benefit from AI.

Raj: Absolutely. Services, definitely. I think data services especially. In the book, we talk about the Washington Post and Starbucks in detail in terms of what they did. I think Caterpillar is definitely getting into that world with its ability to fit tractors with cameras that can detect which crops need pesticide and target the spraying. You can see it as a big savings for the farmers, but it's also less pesticide on crops, and that's a good thing.
Then you're building other capabilities on top of that: how you can operate better, algorithms for big farms, how to optimize the way you use your spraying capability. These things can become services that they can then provide to big industrial farms. In another example, there's a company with an IoT device, a microphone, that people use on oil rigs and in really hazardous situations. Where it is really helpful, first, is that it was designed so somebody can know where they are and they can communicate with each other. But what this company found is that, based on how these microphones pick up people's movements, you can see whether some issue is really a dangerous situation or not, and whether you should react, to improve the safety of your employees on the oil rigs. Once they found that this data could be used by their clients, they began building a platform that provides a data service on top of the microphones for improving worker safety.

Kerry: It's possible there are some people who have heard of the Internet of Things but don't know what it is. Internet-connected things that are not computers, basically. Right?

Raj: Yes.

Kerry: How far does that go, and how weirded out do people get? I would be a little embarrassed if I was flipping the wrong switch on a thing 167 times. I guess it's valuable data for a product developer to know that I can't figure it out.

Raj: I think what you're saying is absolutely true. That is important information from the perspective of product developers. It's not just one Kerry doing it; if there are a lot of customers doing that, then that is certainly something your product developers need to pay attention to. That's the beauty of what I call extending the purchase funnel. Before the internet and before data and technology, we only looked at awareness, interest, desire, and action.
Basically, the purchase funnel stops at action, when the customers buy the product. Then you don't know how they're using it. You just wait for them to come back and buy the next product. Once you know how consumption is happening, how people are using the product, then it's an ongoing relationship: you know what needs to be fixed, what new things people want, where they spend the most time, what they find valuable. All of this insight is what is really allowing new innovations to happen.

Kerry: Where are the missed opportunities? Are there areas now that are still relatively untapped, that people could get into and be the early adopters and really benefit from that first-mover advantage?

Raj: Great question. Right now, I feel like what differentiates the companies doing this better than the others is the culture and the process of the company. I think data science as a service is getting commoditized, but the differentiation is how much you've invested in knowing about your customers, how much you've invested in collecting the data, and how much your process of marketing decision-making and marketing strategy is attuned to getting these customer insights and implementing personalized marketing actions based on them.

Kerry: If you were to talk to someone who is really just starting out, with a brand-new business, who can build their systems and their technology stack any way they want, what would you tell them? To take the best advantage of AI, what should they do?

Raj: Great question. The first thing I would say is: good for you, because you don't have legacy issues. You are starting in a place where you can build it as you want, and you can really think customer-first. I think that is number one. You can really start thinking about omnichannel. That's the first thing I would begin with.
If you are building your systems from scratch, think about serving your customers across channels and being able to see them across those channels, because customers will engage with you in all those places and you need to be able to see them. You want a technology system that lets you track them whether they're online, in the store, or talking to your salesperson; even at a tradeshow, there are good systems for doing all of that. The second thing I would say is that almost everybody is a web 2.0, web 3.0 company now. Everybody has a website; everybody has a LinkedIn page or a forum. Tradeshows are all virtual now, and maybe hybrid tradeshows will happen in the future. Even if you're a startup, you are tracking all of your interactions with your customers. You will have Google Analytics, or you may have HubSpot or Adobe on the back end, giving you data on how your customers are engaging with you. The same vendors will also be happy to give you reports on any kind of basic analytics you want. That's where I would begin. Once I had collected the data, I would begin by customizing an email campaign, or customizing your sales call plan, or customizing how your salesperson approaches your customers, arming your salesforce with information about your customers, and see where that goes.

Kerry: You mentioned a couple of times that data is the raw fuel for AI. How can people keep their data clean? I know that's one of the biggest problems. Sales isn't paid to input data; they're paid to make sales, so they're just getting it in as fast as they can. How do you keep your data clean?

Raj: We talk about how 80% of the work is data and 20% is actually the estimation of the models, and that is true. Digitization helps, but there are always places where there is human input, and that's where there are challenges. But human input is necessary, because it also gives you qualitative, nuanced information that can be a differentiator.
It is something you have to struggle with and cannot ignore. I wouldn't say use only data that comes entirely from digital sources, because the really rich information is actually in people's minds, and that's what you want to tap into.

Kerry: How deep an understanding of the technology does a marketer need to have? I think they're a little scared. We can get a little scared.

Raj: Absolutely. That's a great question, and it is so relevant. I say in my classes, when I teach marketing analytics and digital marketing, that I want you to be smart consumers of analytics. What I mean by that is it's not like you need to go and program and learn Python or R or whatever new thing comes up. You need to understand what data can do and what is possible. I think the biggest skill marketers can have is being open-minded about and understanding the power of data. And I think marketers are there; a lot of marketers understand it. The ones I see succeed are collaborators, people who are able to work across different functions. Marketers who are really successful form good relationships with the technology folks, the data science folks, the operations folks, the finance folks. The way I think about marketing, marketers need to think of themselves as the chief advocates for customers within an organization. They are the ones who are talking about and really focused on doing what is right for the customers across the organization. If an organization recognizes the marketer's role like that, and if marketers themselves recognize that's their role, they will see that to be good at their job they need help across all divisions. That's what we also talk about in the book, in the expansion stage, when we discuss the marketing AI champion. You're really looking for the skills of a person who is a connector and a collaborator.
Being able to put together teams, and to understand and harness the knowledge of a team, is, I think, the most important skill marketers need, more than coding or analytics.

Kerry: Raj, where can people learn more, and where can they get their copy of The AI Marketing Canvas?

Raj: The AI Marketing Canvas is on Amazon. We would love for you to go check it out, give us your feedback, and send us your reviews. Jim and I are on LinkedIn. We'd love to hear from you. We continue the conversation there about new topics, and this is an ongoing effort. This is just the beginning for marketing, so there are going to be more and more fabulous stories about how brands have used AI.

Kerry: And there's a website for the book as well?

Raj: There is a website, thank you, at AIMCbook.com.

Kerry: Great. Thank you so much for joining today. I learned a lot. I hope everybody buys the book through your site so that you can get all that rich data on what they do with it.

Raj: Absolutely. That's right. Thank you, Kerry. Thank you for having me. It has been a pleasure.

Kerry: Thank you for listening here to the very end. This has been the Marketing Smarts Podcast. Talk with you again soon.
Content Synthesis/Decision Making/Recommendation
Management/Business and Financial Operations
null
null
null
null
null
null
news
Quintin Pope
New GPT-3 competitor
Published on August 12, 2021 7:05 AM GMT

AI21 has trained a new language model, Jurassic-1, whose largest version has 178 billion parameters (GPT-3 had 175 billion). This paper gives limited technical details.

There already were several models that used far more parameters than GPT-3, but they were either mixture-of-experts models or only word embeddings. They required much less compute to train/use, but were less powerful than a dense transformer like GPT-3 or the new Jurassic-1. The interesting thing about Jurassic-1 is that it really doesn't go much beyond GPT-3. It has a larger vocabulary and a slightly optimized architecture. Jurassic-1 only has a bit more parameters than GPT-3, whereas prior trends indicated that any GPT-3 successor would use at least an order of magnitude more parameters. Since GPT-3, much work has gone towards improving transformer architecture (e.g., linear-time self-attention and neural architecture search), but little of that is visible in Jurassic-1. Maybe companies don't think it's economically viable to scale beyond GPT-3 or run many experiments with different architectures at that scale?

Also, Jurassic-1 is a unidirectional model, like GPT-3 (meaning it's forced to process text from left to right). This means GPT-3 can only process a given word using the context provided by the previous words. This causes unidirectional models problems for most tasks other than text generation. For example, other than GPT-3, all the top models in the SuperGLUE benchmark leaderboard are bidirectional models. It's interesting AI21 chose to compete with OpenAI using a model that provides the same class of service (text generation) as GPT-3, rather than specialize in, e.g., text classification, where a bidirectional model would be better.
https://www.lesswrong.com/posts/2BCpdyHzzw4BZeodR/new-gpt-3-competitor
https://res.cloudinary.c…ijqgsop7xwa0.jpg
2021-08-12T07:05:49Z
AI21 has trained a new language model, Jurassic-1, whose largest version has 178 billion parameters (GPT-3 had 175 billion). This paper gives limited technical details. There already were several models that used far more parameters than GPT-3, but they were either mixture of expert models or only word embeddings. They required much less compute to train/use, but were less powerful than a dense transformer like GPT-3 or the new Jurassic-1. The interesting thing about Jurassic-1 is that it really doesn't go much beyond GPT-3. It has a larger vocabulary and slightly optimized architecture. Jurassic-1 only has a bit more parameters than GPT-3, whereas prior trends indicated that any GPT-3 successor would use at least an order of magnitude more parameters. Since GPT-3, much work has gone towards improving transformer architecture (e.g., linear time self attention and neural architecture search), but little of that is visible in Jurassic-1. Maybe companies don't think it's economically viable to scale beyond GPT-3 or run many experiments with different architectures at that scale? Also, Jurassic-1 is a unidirectional model, like GPT-3 (meaning it's forced to process text from left-to-right). This means GPT-3 can only process a given word using the context provided by the previous words. This causes unidirectional models problems for most tasks other than text generation. For example, other than GPT-3, all the top models in the SuperGLUE benchmark leaderboard are bidirectional models. It's interesting AI21 chose to compete with OpenAI using a model that provides the same class of service (text generation) as GPT-3, rather than specialize in, e.g., text classification, where a bidirectional model would be better.
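The unidirectional/bidirectional distinction the post describes comes down to the attention mask. The sketch below is illustrative only (not taken from the Jurassic-1 paper): a causal model like GPT-3 or Jurassic-1 lets token i attend only to positions 0..i, while a bidirectional model lets every token see every position.

```python
import numpy as np

n = 4  # toy sequence length

# Unidirectional (causal) mask, as in GPT-3/Jurassic-1: lower-triangular,
# so token i can only attend to positions 0..i (its left context).
causal = np.tril(np.ones((n, n), dtype=int))

# Bidirectional mask, as in BERT-style models: every position sees all others.
bidirectional = np.ones((n, n), dtype=int)

# Row 2 of the causal mask: token 2 sees tokens 0-2 but not token 3.
assert causal[2].tolist() == [1, 1, 1, 0]
assert bidirectional[2].tolist() == [1, 1, 1, 1]
```

In a real transformer these 0/1 matrices are applied to the attention logits (the masked-out positions get negative infinity before the softmax); here they only show which positions are visible, which is why left-to-right models struggle on classification-style tasks that benefit from right context.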
Unknown
Unknown
null
null
null
null
null
null
news
Joanna Ossinger
Mark Cuban-Backed Firm Alethea Is Creating 'Intelligent' NFTs - Bloomberg
Non-fungible tokens are ready to get more interactive, according to one firm that’s working to create “intelligent” versions.
https://www.bloomberg.com/news/articles/2021-08-24/mark-cuban-backed-firm-alethea-is-creating-intelligent-nfts
https://assets.bwbx.io/i…/v0/1200x671.jpg
2021-08-24T12:00:00Z
Non-fungible tokens are ready to get more interactive, according to one firm that's working to create "intelligent" versions. Alethea AI, which allows users to embed AI animation, interaction, and voice synthesis capabilities into NFTs, is looking to expand in a space with a lot of competition. But the firm has already had some commercial success, selling one intelligent NFT, or iNFT, for $478,000 via Sotheby's in June. Alethea has gotten some big names interested. It has closed a $16 million strategic private and restricted token sale where the lead purchasers were Metapurse -- whose chief financier paid $69.3 million for Beeple's "Everydays: the First 5,000 Days" earlier this year -- and Crypto.com Capital. Other strategic purchasers included Mark Cuban, Multicoin, Alameda, Dapper Labs, Galaxy Interactive and Gemini Frontier Fund, according to a statement from the company. "While NFTs have continued to be exciting for collectors, I always try to invest in what is coming next," Cuban said. "Alethea AI has managed to uniquely combine AI-powered avatars that are secured on-chain as NFTs. The result is not only fun and entertaining but the foundation for a level of interactivity that is going to advance quickly using Alethea's technologies." Alethea is planning to use the proceeds from the sale to maintain and upgrade the current services and protocol launching in the public domain, CEO Arif Khan said in emailed comments. Potential projects with its technology could include giving a CryptoPunk the ability to participate in a digital rap battle, creating interactive gaming characters, or formulating interactive real-time chatbot applications. NFTs have surged in popularity this year along with cryptocurrencies, with creators attracted to a format that allows direct access to potential buyers around the globe, and customers finding appeal in owning works they might like or seek to collect. The rolling seven-day total of money spent on completed sales was $164.5 million on Aug.
18, compared with about $2.2 million at the end of last year, according to data from Nonfungible.com.

Read More: $500,000 for a Rock NFT Tells Where the Cycle Is
Read More: Visa Buys NFT of Digital Avatar With Mohawk for $150,000
Content Creation/Content Synthesis/Digital Assistance
Unknown
null
null
null
null
null
null

REALM: REAL-World Application of Large Language Models

Dataset Description

Dataset Summary

Large Language Models (LLMs), such as GPT-like models, have transformed industries and everyday life, creating significant societal impact. To better understand their real-world applications, we created the REALM Dataset, a collection of over 73k use cases sourced from Reddit posts and news articles, spanning 2020-06 (when GPT-3 was first released) to 2024-06. REALM focuses on two key aspects:

  1. How LLMs are being used: Categorizing the wide range of applications, following the AI Use Taxonomy: A Human-Centered Approach.

  2. Who is using them: Extracting the occupation attributes of current or potential end-users, categorized based on the O*NET classification system.
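As a sketch of how rows following this schema might be post-processed, the snippet below splits the slash-delimited application and occupation labels (as seen in the example rows above) into lists. The `split_labels` helper and the field names used here are illustrative assumptions, not part of any released loader.

```python
# Hypothetical helper for REALM-style rows (illustrative, not a released API).
# Fields such as `category_nist` hold slash-delimited application labels,
# and `category` holds O*NET-style occupation groups; "Unknown" or None
# marks an unlabeled row.

def split_labels(value):
    """Split a slash-delimited label field into a list of labels."""
    if not value or value == "Unknown":
        return []
    return [v.strip() for v in value.split("/")]

row = {
    "source": "news",
    "category_nist": "Content Synthesis/Decision Making/Recommendation",
    "category": "Management/Business and Financial Operations",
}

assert split_labels(row["category_nist"]) == [
    "Content Synthesis", "Decision Making", "Recommendation"]
assert split_labels("Unknown") == []  # unlabeled rows yield no categories
```

Grouping rows by these exploded labels is one way to tally which applications and occupations dominate the 73k use cases.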

Updates

2024-12-18: Content Update. Paper submitted to WWW resource track.

Languages

English

Data Fields

  • `` (string):

Citation Information

Please consider citing our paper if you find this dataset useful:

@inproceedings{
  \\\
}