SIGGRAPH 2022

non-profit

AI & ML interests

None defined yet.

Recent Activity

SIGGRAPH2022's activity

Abhaykoul 
posted an update 5 days ago
🔥 BIG ANNOUNCEMENT: THE HELPINGAI API IS LIVE! 🔥

Yo, the moment you’ve all been waiting for is here! 🚀 The HelpingAI API is now LIVE and ready to level up your projects! 🔥 We’re bringing that next-level AI goodness straight to your fingertips. 💯

No more waiting. It's time to build something epic! 🙌

From now on, you can integrate our cutting-edge AI models into your own applications, workflows, and everything in between. Whether you’re a developer, a creator, or just someone looking to make some serious moves, this is your chance to unlock the full potential of emotional intelligence and adaptive AI.

Check out the docs 🔥 and let’s get to work! 🚀

👉 Check out the docs and start building (https://helpingai.co/docs)
👉 Visit the HelpingAI website (https://helpingai.co/)
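
If you want to start wiring this up, here is a minimal sketch of what an integration might look like, assuming an OpenAI-style chat-completions endpoint. The endpoint URL, model id, and payload fields are illustrative guesses, not taken from the docs, so verify everything against https://helpingai.co/docs.

```python
# Minimal sketch of a HelpingAI API call. The URL, model id, and
# payload shape below are assumptions for illustration only; the
# real contract is defined at https://helpingai.co/docs.
import os
import requests

API_KEY = os.environ["HELPINGAI_API_KEY"]  # hypothetical env var

resp = requests.post(
    "https://api.helpingai.co/v1/chat/completions",  # assumed endpoint
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "helpingai2-9b",  # assumed model id
        "messages": [{"role": "user", "content": "I had a rough day. Any advice?"}],
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```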
akhaliq 
posted an update 5 days ago
Google drops Gemini 2.0 Flash Thinking

A new experimental model that unlocks stronger reasoning capabilities and shows its thoughts. The model plans with its thoughts visible, solves complex problems at Flash speeds, and more.

Now available in anychat; try it out: akhaliq/anychat
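
If you would rather call the model directly than go through anychat, here is a minimal sketch using the google-generativeai Python SDK. The experimental model id is the one used around release and may have changed since, so treat it as an assumption.

```python
# Minimal sketch: query Gemini 2.0 Flash Thinking via the
# google-generativeai SDK. The model id is assumed from the
# release-time experimental name and may differ today.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

model = genai.GenerativeModel("gemini-2.0-flash-thinking-exp")  # assumed id
response = model.generate_content(
    "A bat and a ball cost $1.10 together. The bat costs $1.00 more "
    "than the ball. How much does the ball cost?"
)
print(response.text)
```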
AtAndDev 
posted an update 7 days ago
@s3nh Hey man, check your Discord! Got some news.
akhaliq 
posted an update 27 days ago
QwQ-32B-Preview is now available in anychat

A reasoning model that is competitive with OpenAI o1-mini and o1-preview

try it out: akhaliq/anychat
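
If you would rather run the model locally than in anychat, here is a minimal sketch with transformers; the repo id is the public Hugging Face one, the prompt is just an example, and a 32B model needs substantial GPU memory (or a quantized variant).

```python
# Minimal sketch: run QwQ-32B-Preview locally with transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/QwQ-32B-Preview"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", torch_dtype="auto"
)

messages = [{"role": "user", "content": "How many r's are in 'strawberry'?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```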
akhaliq 
posted an update 27 days ago
New model drop in anychat

allenai/Llama-3.1-Tulu-3-8B is now available

try it here: akhaliq/anychat
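
For a quicker local test than the full model/tokenizer setup, here is a minimal sketch with the transformers pipeline API (chat-style input to text-generation pipelines needs a reasonably recent transformers release):

```python
# Minimal sketch: chat with Tulu 3 via the transformers pipeline API.
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="allenai/Llama-3.1-Tulu-3-8B",
    device_map="auto",
)

messages = [{"role": "user", "content": "Summarize what SFT and DPO do, in one sentence each."}]
out = pipe(messages, max_new_tokens=128)
# The pipeline returns the whole conversation; the last turn is the reply.
print(out[0]["generated_text"][-1]["content"])
```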
akhaliq 
posted an update about 1 month ago
anychat

Supports ChatGPT, Gemini, Perplexity, Claude, Meta Llama, and Grok, all in one app.

Try it out here: akhaliq/anychat
Abhaykoul 
posted an update 4 months ago
Introducing HelpingAI2-9B, an emotionally intelligent LLM.
Model link: https://huggingface.co/OEvortex/HelpingAI2-9B
Demo link: Abhaykoul/HelpingAI2

This model is part of the HelpingAI series and stands out for its ability to engage users with emotional understanding.

Key Features:
-----------------

* It scores 95.89 on EQ-Bench, higher than top-tier LLMs, reflecting advanced emotional recognition.
* It responds in an empathetic and supportive manner.

Try our demo: Abhaykoul/HelpingAI2
kbrodt 
updated a Space 6 months ago
akhaliq 
posted an update 7 months ago
Phased Consistency Model

Phased Consistency Model (2405.18407)

The consistency model (CM) has recently made significant progress in accelerating the generation of diffusion models. However, its application to high-resolution, text-conditioned image generation in the latent space (a.k.a., LCM) remains unsatisfactory. In this paper, we identify three key flaws in the current design of LCM. We investigate the reasons behind these limitations and propose the Phased Consistency Model (PCM), which generalizes the design space and addresses all identified limitations. Our evaluations demonstrate that PCM significantly outperforms LCM across 1--16 step generation settings. While PCM is specifically designed for multi-step refinement, its 1-step generation results are superior or comparable to those of previous state-of-the-art methods designed specifically for 1-step generation. Furthermore, we show that PCM's methodology is versatile and applicable to video generation, enabling us to train a state-of-the-art few-step text-to-video generator.
Abhaykoul 
posted an update 7 months ago
# HelpingAI 9B: Cutting-Edge Emotionally Intelligent AI

If you have ever felt that AI does not understand your emotions, or that talking to it lacks a human-like feel, then this blog is for you!
In this blog post we explore [HelpingAI 9B](https://huggingface.co/spaces/Abhaykoul/HelpingAI-9B), a highly emotionally intelligent AI that beat top models such as GPT-4o, GPT-4, and Claude 3 Opus on EQ-Bench.

## What is HelpingAI 9B?
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6612aedf09f16e7347dfa7e1/FrNhr3WhMhvvD-dNplHxZ.png)
HelpingAI-9B is a fine-tuned Llama 2 model crafted for emotionally intelligent conversations. This model excels at empathetic engagement, offering understanding and support through dialogue spanning various topics and situations. Its goal is to serve as a supportive AI companion, adept at resonating with users' emotions and communication needs.

## Method
We gathered a large volume of high-quality human chat data, which was then filtered and refined to create three types of datasets:
1. DPO - First, we trained the AI on a substantial DPO dataset so it could grasp human conversation patterns and learn which kinds of output to generate and which to avoid (see the sketch after this list).
2. Alpaca - Next, we trained it on an Alpaca-style dataset to enhance its human-like responses.
3. SFT - Finally, once the AI fully comprehended human interactions, we trained it on an SFT dataset to broaden its knowledge base.
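
As a concrete illustration of stage 1, here is a minimal sketch of a DPO run with the TRL library. The base model, dataset name, and hyperparameters are placeholders, not the authors' actual setup, and the trainer's keyword names vary somewhat across TRL versions.

```python
# Minimal sketch of a DPO stage with TRL. The dataset and hyperparameters
# are hypothetical placeholders, not the HelpingAI training setup.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_id = "meta-llama/Llama-2-7b-hf"  # stand-in base model
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# DPO expects rows with "prompt", "chosen", and "rejected" columns.
dataset = load_dataset("your-org/emotional-dpo-pairs", split="train")  # hypothetical

trainer = DPOTrainer(
    model=model,
    args=DPOConfig(output_dir="helpingai-dpo", beta=0.1),
    train_dataset=dataset,
    processing_class=tokenizer,  # older TRL versions use tokenizer=
)
trainer.train()
```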

## Evaluation

![EQ-Bench benchmark results (by KingNish)](https://cdn-uploads.huggingface.co/production/uploads/6612aedf09f16e7347dfa7e1/xvS57q5kU9f3AX-O-al0k.png)

## Conclusion
HelpingAI is a large step toward understanding human emotions and responding like a human. It can help a lot in making AI better at natural conversation.
Thanks!

Model link: https://huggingface.co/OEvortex/HelpingAI-9B

Demo link: https://huggingface.co/spaces/Abhaykoul/HelpingAI-9B
akhaliq 
posted an update 7 months ago
Chameleon

Mixed-Modal Early-Fusion Foundation Models

Chameleon: Mixed-Modal Early-Fusion Foundation Models (2405.09818)

We present Chameleon, a family of early-fusion token-based mixed-modal models capable of understanding and generating images and text in any arbitrary sequence. We outline a stable training approach from inception, an alignment recipe, and an architectural parameterization tailored for the early-fusion, token-based, mixed-modal setting. The models are evaluated on a comprehensive range of tasks, including visual question answering, image captioning, text generation, image generation, and long-form mixed-modal generation. Chameleon demonstrates broad and general capabilities, including state-of-the-art performance on image captioning tasks; it outperforms Llama-2 on text-only tasks while being competitive with models such as Mixtral 8x7B and Gemini-Pro, and performs non-trivial image generation, all in a single model. It also matches or exceeds the performance of much larger models, including Gemini Pro and GPT-4V, according to human judgments on a new long-form mixed-modal generation evaluation, where either the prompt or the outputs contain mixed sequences of both images and text. Chameleon marks a significant step forward in unified modeling of full multimodal documents.
akhaliq 
posted an update 8 months ago
A Careful Examination of Large Language Model Performance on Grade School Arithmetic

A Careful Examination of Large Language Model Performance on Grade School Arithmetic (2405.00332)

Large language models (LLMs) have achieved impressive success on many benchmarks for mathematical reasoning. However, there is growing concern that some of this performance actually reflects dataset contamination, where data closely resembling benchmark questions leaks into the training data, instead of true reasoning ability. To investigate this claim rigorously, we commission Grade School Math 1000 (GSM1k). GSM1k is designed to mirror the style and complexity of the established GSM8k benchmark, the gold standard for measuring elementary mathematical reasoning. We ensure that the two benchmarks are comparable across important metrics such as human solve rates, number of steps in solution, answer magnitude, and more. When evaluating leading open- and closed-source LLMs on GSM1k, we observe accuracy drops of up to 13%, with several families of models (e.g., Phi and Mistral) showing evidence of systematic overfitting across almost all model sizes. At the same time, many models, especially those on the frontier, (e.g., Gemini/GPT/Claude) show minimal signs of overfitting. Further analysis suggests a positive relationship (Spearman's r^2=0.32) between a model's probability of generating an example from GSM8k and its performance gap between GSM8k and GSM1k, suggesting that many models may have partially memorized GSM8k.
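
To make the correlation statistic concrete, here is a toy reproduction of that analysis with scipy; all numbers are made up for illustration and are not the paper's data.

```python
# Toy illustration of the paper's contamination analysis: rank-correlate
# each model's probability of generating GSM8k examples with its
# GSM8k-vs-GSM1k accuracy gap. All values below are fabricated examples.
from scipy.stats import spearmanr

gen_prob = [0.02, 0.10, 0.35, 0.50, 0.75]  # hypothetical per-model values
acc_gap = [0.01, 0.03, 0.06, 0.09, 0.13]   # hypothetical GSM8k - GSM1k gaps

r, p = spearmanr(gen_prob, acc_gap)
print(f"Spearman r = {r:.2f}, r^2 = {r * r:.2f}, p = {p:.3f}")
```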
akhaliq 
posted an update 8 months ago
Octopus v4

Graph of language models

Octopus v4: Graph of language models (2404.19296)

Language models have been effective in a wide range of applications, yet the most sophisticated models are often proprietary. For example, GPT-4 by OpenAI and various models by Anthropic are expensive and consume substantial energy. In contrast, the open-source community has produced competitive models, like Llama3. Furthermore, niche-specific smaller language models, such as those tailored for legal, medical, or financial tasks, have outperformed their proprietary counterparts. This paper introduces a novel approach that employs functional tokens to integrate multiple open-source models, each optimized for particular tasks. Our newly developed Octopus v4 model leverages functional tokens to intelligently direct user queries to the most appropriate vertical model and to reformat the query for best performance. Octopus v4, an evolution of the Octopus v1, v2, and v3 models, excels in selection, parameter understanding, and reformatting. Additionally, we explore the use of graphs as a versatile data structure that effectively coordinates multiple open-source models by harnessing the capabilities of the Octopus model and functional tokens.
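
To illustrate the routing idea in miniature, here is a sketch where a functional token selects a vertical model and dispatches a reformatted query; the token names and the registry are invented for illustration and are not the paper's actual vocabulary.

```python
# Minimal sketch of functional-token routing in the spirit of Octopus v4.
# Token names and the worker registry are illustrative inventions.
from typing import Callable, Dict

# Each "vertical model" is stubbed as a function from query to answer.
VERTICALS: Dict[str, Callable[[str], str]] = {
    "<legal>": lambda q: f"[legal model answers] {q}",
    "<medical>": lambda q: f"[medical model answers] {q}",
    "<finance>": lambda q: f"[finance model answers] {q}",
}

def route(functional_token: str, reformatted_query: str) -> str:
    """Dispatch a query to the vertical model named by the functional token."""
    worker = VERTICALS.get(functional_token)
    if worker is None:
        raise KeyError(f"no vertical registered for {functional_token}")
    return worker(reformatted_query)

# In Octopus v4 a router model emits the token and the reformatted
# query; here both are hard-coded for the sake of the sketch.
print(route("<medical>", "What are common drug interactions with warfarin?"))
```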