Alignment Issues

#37
by deleted - opened

Gemma chat versions score far lower on the leaderboard than their foundational models, which isn't surprising since they perform horribly in my testing. Far worse than Mistral 7b.

The primary reason is clear. The alignment is GROSSLY overdone. Not everyone using an LLM is a 3 year old child. Adults use LLMs too, and ask basic questions about a wide variety of things that aren't the least bit illegal or amoral. By training your LLM with thousands of 'As an AI model I can't answer that' responses, you not only cripple the LLM in millions of legitimate use cases; the excessive alignment also misfires and bleeds into completely unrelated areas, making the LLM perform much worse in every context.

Even your foundational models are stripped of basic knowledge that isn't remotely illegal or amoral, which most adults already know, and which can be found in any basic encyclopedia. What's the point of that? Why would anyone turn to your AI models for reference when countless perfectly legal and moral bits of information are stripped from them? Info that can be found on Wikipedia or in the first Google result. Even Gemini Pro/Ultra have astonishing blind spots, making them very unreliable sources of help and information despite their otherwise impressive abilities, which are sometimes superior to GPT-4's.

Again, not everyone who uses AI is a 3 year old child. Please reconsider your obsession with extreme censorship/alignment/moralizing/... Frankly, it's embarrassing.

Well, you're expecting a model from Google not to be head deep in DEI alignment nonsense. Even Google search is biased lol.
