Mohamed Rashad PRO

MohamedRashad

AI & ML interests

Computer Vision, Robotics, Natural Language Processing

Recent Activity

new activity about 19 hours ago
MohamedRashad/FinePersonas-Lite: Update README.md
liked a model 8 days ago
OmarSamir/EGTTS-V0.1

Organizations

Navid AI

MohamedRashad's activity

replied to Keltezaa's post about 8 hours ago

I am considering canceling my Pro subscription because I just discovered that I am limited to hosting only 10 ZeroGPU Spaces on my account. This number should be way higher.

New activity in MohamedRashad/FinePersonas-Lite about 19 hours ago

Update README.md

#2 opened about 22 hours ago by Ah00m00sm
New activity in TencentARC/PhotoMaker 9 days ago
reacted to their post with 🚀 28 days ago
The winners of the Best Paper Award at NeurIPS 2024 (FoundationVision), Visual Autoregressive Modeling: Scalable Image Generation via Next-Scale Prediction (2404.02905), have just released a new paper called Infinity:
Infinity: Scaling Bitwise AutoRegressive Modeling for High-Resolution Image Synthesis (2412.04431)

And I managed to build a Space for it so anyone can try it out: MohamedRashad/Infinity

The idea of a text-to-image model using an autoregressive architecture is quite interesting in my opinion.
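
For anyone who wants to call the Space from code rather than the web UI, here is a minimal sketch using the gradio_client library. The endpoint name and arguments below are assumptions about the Space's API, not its confirmed interface; client.view_api() will show the real signature.

```python
# Sketch only: connects to the public Space with the official gradio_client
# library. The api_name and arguments are assumptions; check view_api() first.
from gradio_client import Client

client = Client("MohamedRashad/Infinity")  # connect to the hosted Space
print(client.view_api())  # inspect the actual endpoints and their parameters

# Hypothetical call with a text prompt; the real Space may expect more args.
result = client.predict(
    "a watercolor painting of a lighthouse at dawn",
    api_name="/predict",
)
print(result)  # usually a local file path to the generated image
```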
reacted to alielfilali01's post with 👍 28 days ago
The 3C3H AraGen Leaderboard today welcomes deepseek-ai/DeepSeek-V3 and 12 other models (including the late gpt-3.5 💀) to the ranking of the best LLMs in Arabic!


Observations:
- DeepSeek-V3 ranked 3rd and is the only open model among the top 5!

- A 14B open model (Qwen/Qwen2.5-14B-Instruct) outperforms gpt-3.5-turbo-0125 (from last year). This shows how far we have come in advancing and supporting the Arabic presence within the LLM ecosystem!

- Contrary to what is observed in likelihood-accuracy leaderboards (like OALL/Open-Arabic-LLM-Leaderboard), further-finetuned models like maldv/Qwentile2.5-32B-Instruct actually decreased in performance compared to the original model Qwen/Qwen2.5-32B-Instruct.
It's worth noting that the decrease is statistically insignificant, which implies that, at best, out-of-domain finetuning does not really hurt the model's original capabilities acquired during pretraining (a rough sketch of this kind of significance check follows after this post).
Previous work has addressed this (finetuning vs. pretraining), but more investigation is required (any PhDs here? This could be your question ...)


Check out the latest rankings: inceptionai/AraGen-Leaderboard
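
As a rough illustration of the significance point above, here is a small sketch that bootstraps the mean score difference between a base and a finetuned model. The per-example scores are synthetic stand-ins, not real AraGen data.

```python
# Illustrative only: synthetic per-example scores stand in for real AraGen
# results, to show one way to test whether a score gap is significant.
import numpy as np

rng = np.random.default_rng(0)
n = 500  # hypothetical number of evaluation examples

base = rng.normal(0.62, 0.15, n).clip(0, 1)     # base model scores
finetuned = base + rng.normal(-0.005, 0.05, n)  # slightly lower on average

# Bootstrap a 95% confidence interval for the mean score difference.
deltas = finetuned - base
boot = [rng.choice(deltas, size=n, replace=True).mean() for _ in range(10_000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"95% CI for mean difference: [{lo:.4f}, {hi:.4f}]")
# If the interval contains 0, the observed drop is statistically insignificant.
```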
liked a Space 28 days ago
reacted to their post with ❤️ 29 days ago
posted an update 29 days ago
updated a Space 29 days ago