Alibaba_Speech_Lab_SG PRO

alibabasglab

AI & ML interests

speech enhancement, separation, and codec

Recent Activity

Organizations

Alibaba-PAI

alibabasglab's activity

posted an update about 1 month ago
posted an update about 1 month ago
New activity in alibabasglab/MossFormer2_SR_48K about 1 month ago

Update README.md

#1 opened about 2 months ago by Pur1zumu
New activity in alibabasglab/LJSpeech-1.1-48kHz about 1 month ago

Add task category, link to paper

#2 opened about 1 month ago by nielsr
reacted to their post with 🤗❤️🚀🔥 about 2 months ago
posted an update about 2 months ago
reacted to their post with 🤝❤️🔥 about 2 months ago
reacted to prithivMLmods's post with 🔥 about 2 months ago
ChemQwen-vL [ Qwen for Chem Vision ] 🧑🏻‍🔬

🧪 Model: prithivMLmods/ChemQwen-vL

๐Ÿ“ChemQwen-vL is a vision-language model fine-tuned based on the Qwen2VL-2B Instruct model. It has been trained using the International Chemical Identifier (InChI) format for chemical compounds and is optimized for chemical compound identification. The model excels at generating the InChI and providing descriptions of chemical compounds based on their images. Its architecture operates within a multi-modal framework, combining image-text-text capabilities. It has been fine-tuned using datasets from: https://iupac.org/projects/

📒 Colab Demo: https://tinyurl.com/2pn8x6u7, Collection: https://tinyurl.com/2mt5bjju
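
For anyone skimming, here is a minimal inference sketch. It assumes ChemQwen-vL keeps the standard Qwen2-VL interface in the transformers library (Qwen2VLForConditionalGeneration + AutoProcessor); the image file name and prompt are placeholders, not official usage.

```python
# Hedged sketch: assumes the standard Qwen2-VL transformers interface.
from transformers import Qwen2VLForConditionalGeneration, AutoProcessor
from PIL import Image

model_id = "prithivMLmods/ChemQwen-vL"
model = Qwen2VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"  # device_map needs accelerate
)
processor = AutoProcessor.from_pretrained(model_id)

image = Image.open("compound.png")  # placeholder: an image of a compound structure
messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "Give the InChI for this compound and describe it."},
]}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=[prompt], images=[image], return_tensors="pt").to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
answer = processor.batch_decode(
    output_ids[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True
)[0]
print(answer)
```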

Documenting the inference output (e.g., as a PDF report) is possible with the help of the ReportLab library. https://pypi.org/project/reportlab/
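
A minimal sketch of that idea, writing a prediction into a one-page PDF with ReportLab; the output file name and the InChI string are made up for illustration.

```python
# Minimal ReportLab sketch: render a model prediction as a PDF report.
from reportlab.lib.pagesizes import A4
from reportlab.lib.styles import getSampleStyleSheet
from reportlab.platypus import SimpleDocTemplate, Paragraph

prediction = "InChI=1S/..."  # placeholder for ChemQwen-vL's generated InChI
styles = getSampleStyleSheet()
doc = SimpleDocTemplate("chemqwen_report.pdf", pagesize=A4)
doc.build([
    Paragraph("ChemQwen-vL inference report", styles["Title"]),
    Paragraph(prediction, styles["Code"]),
])
```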

🤗: @prithivMLmods
  • 1 reply
replied to prithivMLmods's post about 2 months ago
reacted to m-ric's post with 👀 about 2 months ago
๐— ๐—ถ๐—ป๐—ถ๐— ๐—ฎ๐˜…'๐˜€ ๐—ป๐—ฒ๐˜„ ๐— ๐—ผ๐—˜ ๐—Ÿ๐—Ÿ๐—  ๐—ฟ๐—ฒ๐—ฎ๐—ฐ๐—ต๐—ฒ๐˜€ ๐—–๐—น๐—ฎ๐˜‚๐—ฑ๐—ฒ-๐—ฆ๐—ผ๐—ป๐—ป๐—ฒ๐˜ ๐—น๐—ฒ๐˜ƒ๐—ฒ๐—น ๐˜„๐—ถ๐˜๐—ต ๐Ÿฐ๐—  ๐˜๐—ผ๐—ธ๐—ฒ๐—ป๐˜€ ๐—ฐ๐—ผ๐—ป๐˜๐—ฒ๐˜…๐˜ ๐—น๐—ฒ๐—ป๐—ด๐˜๐—ต ๐Ÿ’ฅ

This work from Chinese startup @MiniMax-AI introduces a novel architecture that achieves state-of-the-art performance while handling context windows up to 4 million tokens - roughly 20x longer than current models. The key was combining lightning attention, mixture of experts (MoE), and a careful hybrid approach.

๐—ž๐—ฒ๐˜† ๐—ถ๐—ป๐˜€๐—ถ๐—ด๐—ต๐˜๐˜€:

๐Ÿ—๏ธ MoE with novel hybrid attention:
โ€ฃ Mixture of Experts with 456B total parameters (45.9B activated per token)
โ€ฃ Combines Lightning attention (linear complexity) for most layers and traditional softmax attention every 8 layers

๐Ÿ† Outperforms leading models across benchmarks while offering vastly longer context:
โ€ฃ Competitive with GPT-4/Claude-3.5-Sonnet on most tasks
โ€ฃ Can efficiently handle 4M token contexts (vs 256K for most other LLMs)

🔬 Technical innovations enable efficient scaling:
‣ Novel expert parallel and tensor parallel strategies cut communication overhead in half
‣ Improved linear attention sequence parallelism, multi-level padding, and other optimizations achieve 75% GPU utilization (that's really high; utilization is generally around 50%)

🎯 Thorough training strategy:
‣ Careful data curation and quality control by using a smaller preliminary version of their LLM as a judge!
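
To make the hybrid layer pattern concrete, here is a toy sketch of the schedule. The every-8-layers periodicity comes from the post; the layer count is made up and this is not MiniMax's implementation.

```python
# Toy sketch: linear-time "lightning" attention on most layers,
# full softmax attention on every 8th layer.
def hybrid_schedule(num_layers: int, softmax_every: int = 8) -> list[str]:
    return [
        "softmax" if layer % softmax_every == 0 else "lightning"
        for layer in range(1, num_layers + 1)
    ]

print(hybrid_schedule(16))
# ['lightning', 'lightning', ..., 'softmax', 'lightning', ..., 'softmax']
```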

Overall, not only is the model impressive, but the technical paper is also really interesting! 📝
It has lots of insights, including a great comparison showing how a MoE with 2B activated parameters (24B total) far outperforms a 7B dense model for the same amount of FLOPs.
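
A rough sanity check on why that comparison holds: per-token training cost scales with activated, not total, parameters. A back-of-envelope sketch using the common ~6·N·D FLOPs approximation (the token budget here is arbitrary):

```python
# Back-of-envelope: training FLOPs ~ 6 * activated_params * tokens.
def train_flops(activated_params: float, tokens: float) -> float:
    return 6.0 * activated_params * tokens

TOKENS = 1e12                     # arbitrary token budget for comparison
moe = train_flops(2e9, TOKENS)    # MoE: 2B activated (24B total capacity)
dense = train_flops(7e9, TOKENS)  # dense 7B: every parameter active
print(f"MoE / dense cost per token: {moe / dense:.2f}")  # -> 0.29
```

At a fixed FLOPs budget, the MoE can therefore train on roughly 3.5x more tokens than the 7B dense model while still carrying 24B parameters of capacity.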

Read it in full here 👉 MiniMax-01: Scaling Foundation Models with Lightning Attention (2501.08313)
Model here, commercial use allowed below 100M monthly users 👉 MiniMaxAI/MiniMax-Text-01
reacted to Tonic's post with 🔥 about 2 months ago
🙋🏻‍♂️ Hey there folks,

Facebook AI just released JASCO models that generate music stems.

You can try it out here: Tonic/audiocraft

Hope you like it!