emin temiz (etemiz)

Post
Having bad LLMs is OK; they can be put to good use. They can help us find ideas that work faster.

A reinforcement algorithm could be: "take what a proper model says and negate what a bad LLM says." Or, in a mixture-of-agents setup, we could refute the bad LLM's output and combine it with the good LLM's output.
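
One way to read the "negate what a bad LLM says" part is as preference-pair construction: treat the good model's answer as the chosen response and the bad model's answer as the rejected one, DPO-style. A minimal sketch, assuming you already have parallel lists of questions and answers from both models (all names below are placeholders, not anything from the original post):

```python
# Sketch: build DPO-style preference pairs where the good model's answer is
# "chosen" and the bad model's answer is "rejected".
# `questions`, `good_answers`, `bad_answers` are hypothetical inputs.
def build_preference_pairs(questions, good_answers, bad_answers):
    pairs = []
    for prompt, good, bad in zip(questions, good_answers, bad_answers):
        pairs.append({"prompt": prompt, "chosen": good, "rejected": bad})
    return pairs
```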

This could mean having two (or more) wings in the search for "ideas that work for most people most of the time."
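
For the mixture-of-agents version, the refute-and-combine step could look like the sketch below. This is only one possible shape for it: it assumes an OpenAI-compatible chat client, the model names are placeholders, and the good model doubles as the aggregator.

```python
from openai import OpenAI

client = OpenAI()  # assumption: any OpenAI-compatible endpoint

GOOD_MODEL = "good-llm"  # placeholder name for a model you trust
BAD_MODEL = "bad-llm"    # placeholder name for a weak model

def ask(model: str, prompt: str) -> str:
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def refute_and_combine(question: str) -> str:
    good = ask(GOOD_MODEL, question)
    bad = ask(BAD_MODEL, question)
    # Refute the bad LLM's output and combine it with the good LLM's output.
    prompt = (
        f"Question: {question}\n\n"
        f"Answer A (trusted):\n{good}\n\n"
        f"Answer B (from a weak model):\n{bad}\n\n"
        "Point out where Answer B goes wrong, then give a final answer that "
        "keeps the substance of Answer A and avoids B's mistakes."
    )
    return ask(GOOD_MODEL, prompt)
```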
