🌐 The Stanford Institute for Human-Centered AI (https://aiindex.stanford.edu/vibrancy/) has released its 2024 Global AI Vibrancy Tool, a way to explore and compare AI progress across 36 countries.
📊 It measures progress across 8 broad pillars: R&D, Responsible AI, Economy, Education, Diversity, Policy and Governance, Public Opinion, and Infrastructure. (Each of these pillars has a number of sub-indices.)
📈 As a whole it is not surprising that the USA was at the top in overall score as of 2023 (AI investment activity, for example, is a large part of the Economy pillar and accounts for much of the USA's ranking). But drilling into more STRATEGIC macro pillars like Education, Infrastructure, or R&D reveals interesting growth patterns in Asia (particularly China) and Western Europe that I suspect the 2024 metrics will bear out.
🤖 Hopefully the 2024 Global Vibrancy ranking will break out AI and ML verticals such as Computer Vision, NLP, or the AI Agent space, as that could also give global, macro-level indications of what is to come for AI in 2025.
🤖💻 Function Calling is a key component of Agent workflows. To call functions, an LLM needs a way to interact with other systems and run code. This usually means connecting it to a runtime environment that can handle function calls, data, and security.
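For illustration, here is a minimal sketch of a single function-calling round trip, assuming an OpenAI-compatible chat completions API; the get_weather tool, model name, and client setup are illustrative placeholders rather than any particular leaderboard model:

```python
# Minimal sketch of a function-calling round trip (illustrative only).
import json
from openai import OpenAI

client = OpenAI()  # works with any OpenAI-compatible endpoint via base_url

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool the model may choose to call
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; swap in any model that supports function calling
    messages=[{"role": "user", "content": "What's the weather in Lisbon?"}],
    tools=tools,
)

# If the model decided to call the tool, the runtime (our code) executes it
# and would normally send the result back to the model in a follow-up message.
for call in response.choices[0].message.tool_calls or []:
    args = json.loads(call.function.arguments)
    print(f"Model requested {call.function.name}({args})")
```

The same loop applies whether the endpoint serves a closed or an open source model: the runtime inspects the returned tool calls, executes the function, and feeds the result back to the model.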
Per the Berkeley Function-Calling Leaderboard, only 2 of the top 20 models with built-in function calling are fully open source as of 17 Nov 2024 (the other 2 non-closed-source models in the top 20 carry cc-by-nc-4.0 licenses). https://gorilla.cs.berkeley.edu/leaderboard.html
The 2 fully open source models in the top 20 that currently support function calling are:
This is both a huge disadvantage AND an opportunity for the Open Source community as Enterprises, Small Businesses, Government Agencies, etc. quickly adopt Agents and Agent workflows over the next few months. Open Source will have a lot of catching up to do: Enterprises that initially build their Agent workflows on closed source models in the coming months will be hesitant to switch to an open source alternative later.
Hopefully more open source models will support function calling in the near future.
The Mystery Bot 🕵️‍♂️ saga I posted about earlier this week has been solved...🤗
Cohere for AI has just announced its open source Aya Expanse multilingual model. The initial release supports 23 languages, with more on the way soon.🌌 🌍
You can also try Aya Expanse by text message on your mobile phone using the global WhatsApp number or one of the initial set of country-specific numbers listed below.⬇️
🌍 WhatsApp (global): +14313028498
* Germany: (+49) 1771786365
* USA: +18332746219
* United Kingdom: (+44) 7418373332
* Canada: (+1) 2044107115
* Netherlands: (+31) 97006520757
* Brazil: (+55) 11950110169
* Portugal: (+351) 923249773
* Italy: (+39) 3399950813
* Poland: (+48) 459050281
I'm excited to share that Gradio 5 will launch in October with improvements across security, performance, SEO, design (see the screenshot for Gradio 4 vs. Gradio 5), and user experience, making Gradio a mature framework for web-based ML applications.
Gradio 5 is currently in beta, so if you'd like to try it out early, please refer to the instructions below:
---------- Installation -------------
Gradio 5 requires Python 3.10 or higher, so if you are running Gradio locally, please ensure that you have a compatible Python version installed, or download one here: https://www.python.org/downloads/
* Locally: If you are running Gradio locally, simply install the release candidate with pip install gradio --pre
* Spaces: If you would like to update an existing Gradio Space to use Gradio 5, simply update the sdk_version to 5.0.0b3 in the README.md file on Spaces.
In most cases, that’s all you have to do to run Gradio 5.0. If you start your Gradio application, you should see your Gradio app running, with a fresh new UI.
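If you want a quick sanity check, a minimal app like the sketch below (the echo handler is just a placeholder) is enough to see the refreshed UI:

```python
# Minimal sketch of a Gradio app for checking the Gradio 5 beta.
import gradio as gr

def echo(message: str) -> str:
    """Trivial handler so there is something to render in the new UI."""
    return f"You said: {message}"

demo = gr.Interface(fn=echo, inputs="text", outputs="text", title="Gradio 5 beta check")

if __name__ == "__main__":
    demo.launch()  # open the printed local URL to see the fresh Gradio 5 UI
```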
💡Andrew Ng recently gave a strong defense of Open Source AI models and the need to slow down legislative efforts in the US and the EU to restrict innovation in Open Source AI at Stanford GSB.
Hello everyone! Today I have been working on a project, Blane187/rvc-demo, a demo of RVC using pip. This project is still just a demo though (I don't have a beta tester lol).
Researchers from Auburn University and the University of Alberta have explored the limitations of Vision Language Models (VLMs) in their recently published paper "Vision language models are blind" (arXiv: 2407.06581).
Key Findings: 🔍 VLMs, including GPT-4o, Gemini-1.5 Pro, Claude-3 Sonnet, and Claude-3.5 Sonnet, struggle with basic visual tasks. Tasks such as identifying where lines intersect or counting basic shapes are challenging for these models. The authors noted, "The shockingly poor performance of four state-of-the-art VLMs suggests their vision is, at best, like of a person with myopia seeing fine details as blurry, and at worst, like an intelligent person that is blind making educated guesses" (Vision Language Models Are Blind; 2024).
Human-like Myopia? 👓 VLMs may have a blind spot similar to human myopia, making it difficult for them to perceive fine details and suggesting a potential parallel between human and machine vision limitations.
Technical Details: 🔧 The researchers created a new benchmark called BlindTest, which consists of simple visual tasks designed to evaluate VLMs' low-level vision capabilities. Four VLMs were assessed using BlindTest, revealing many shortcomings in the models' ability to process basic visual information.
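To make the kind of task concrete, here is a toy sketch (not the authors' actual BlindTest code) that draws two random line segments with PIL and records the ground-truth answer to "do these lines intersect?", the sort of low-level stimulus the paper reports VLMs struggling with:

```python
# Toy BlindTest-style stimulus generator (illustrative only).
import random
from PIL import Image, ImageDraw

def segments_intersect(a, b, c, d):
    """Standard orientation test for proper segment intersection (ignores collinear edge cases)."""
    def cross(o, p, q):
        return (p[0] - o[0]) * (q[1] - o[1]) - (p[1] - o[1]) * (q[0] - o[0])
    d1, d2 = cross(c, d, a), cross(c, d, b)
    d3, d4 = cross(a, b, c), cross(a, b, d)
    return (d1 * d2 < 0) and (d3 * d4 < 0)

def make_line_pair(size=512):
    """Draw two random line segments on a white canvas and return the image plus the ground truth."""
    img = Image.new("RGB", (size, size), "white")
    draw = ImageDraw.Draw(img)
    p = [(random.randint(20, size - 20), random.randint(20, size - 20)) for _ in range(4)]
    draw.line([p[0], p[1]], fill="red", width=4)
    draw.line([p[2], p[3]], fill="blue", width=4)
    return img, segments_intersect(p[0], p[1], p[2], p[3])

img, answer = make_line_pair()
img.save("line_pair.png")
print("Ground truth - lines intersect:", answer)
```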