Richard A Aragon

TuringsSolutions

AI & ML interests

None yet

Posts (15)

Post
ChatGPT does better at math if you prompt it to think like Captain Picard from Star Trek. Scientifically proven fact lol. That got me thinking: LLMs probably 'think' about the world in weird ways, far different from how we do. It sent me down a rabbit hole of reimagining familiar concepts from an LLM's point of view, and somewhere along the way Python Chemistry was born. To an LLM, there is a strong connection between Python and chemistry: it is easier for the model to understand exactly how Python works if you frame it in terms of chemistry.

Don't believe me? Ask Python-Chemistry-GPT yourself: https://chatgpt.com/g/g-dzjYhJp4U-python-chemistry-gpt

Want to train your own Python-GPT and prove this concept actually works? Here is the dataset: https://huggingface.co/.../TuringsSolu.../PythonChemistry400
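
Here's a minimal fine-tuning sketch to get you started; it is not the recipe behind Python-Chemistry-GPT. It assumes the dataset lives at TuringsSolutions/PythonChemistry400 (the link above is truncated, so the repo id is a reconstruction) and that it exposes a plain-text column. The base model (gpt2) and the column name ("text") are placeholders to adjust after inspecting the data.

```python
# Minimal sketch, assuming the dataset repo id and a plain-text column.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

dataset = load_dataset("TuringsSolutions/PythonChemistry400", split="train")
print(dataset.column_names)  # inspect which field holds the chemistry-framed text

model_name = "gpt2"  # any small causal LM works for a first experiment
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

def tokenize(batch):
    # Replace "text" with the actual column name printed above.
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True,
                        remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="python-chemistry-gpt",
                           num_train_epochs=3,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```
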
Post
The word 'lead' has three definitions, but when an LLM tokenizes it, it is always the same token. Now imagine being able to put any particular embedding, at any particular moment, into a 'quantum state'. While an embedding is in a quantum state, the token can hold up to three different meanings (x1, x2, x3). The quantum state then collapses based on the context surrounding the word: 'Jill lead Joy to the store' would collapse to x1, while 'Jill and Joy stumbled upon a pile of lead' would collapse to x3. Very simple, right? This method produces OFF THE CHARTS results:


https://www.youtube.com/watch?v=tuQI6A-EOqE
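
Here's a toy, numpy-only illustration of that 'quantum state' collapse, just to make the idea concrete; it is not the implementation from the video. The sense vectors and the context encoder are random stand-ins (a real system would use learned sense embeddings and the LLM's own hidden states): the token 'lead' keeps three candidate sense vectors, and the surrounding context collapses them into a weighted mixture.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8

# Three candidate senses for the token "lead":
# x1 = verb (to guide), x2 = noun (being in front), x3 = noun (the metal).
sense_vectors = rng.normal(size=(3, dim))  # shape: (num_senses, dim)

def encode_context(words):
    """Stand-in context encoder: average of random per-word vectors.
    A real model would use the LLM's own hidden states here."""
    return np.mean([rng.normal(size=dim) for _ in words], axis=0)

def collapse(context_vec, senses, temperature=1.0):
    """'Collapse' the superposition: softmax over sense/context similarity,
    then mix the sense vectors by those weights."""
    scores = senses @ context_vec / temperature
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights, weights @ senses

ctx = encode_context(["Jill", "and", "Joy", "stumbled", "upon", "a", "pile", "of"])
weights, embedding = collapse(ctx, sense_vectors)
print("sense weights:", weights)  # with learned vectors this should peak on x3 (the metal)
print("collapsed embedding shape:", embedding.shape)
```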