Harrison Kinsley

Sentdex


Sentdex's activity

posted an update 6 months ago
Okay, first pass over KAN: Kolmogorov–Arnold Networks, it looks very interesting!

Interpretability of KAN model:
Interpretability is mostly framed as a safety issue these days, but as this paper argues, it can also serve as a form of interaction between the user and the model, and I think that's a valid point. With an MLP, we only interact with the outputs, but KAN is an entirely different paradigm, and I find it compelling.
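To make the paradigm difference concrete, here's a toy sketch of a KAN-style layer (my own simplification using a sine basis instead of the paper's B-splines): every (input, output) edge carries its own small learned function rather than a single scalar weight.

```python
import torch
import torch.nn as nn

class TinyKANLayer(nn.Module):
    """Toy KAN-style layer: each edge applies its own learned univariate
    function, built here from a small sine basis purely for illustration
    (the paper uses B-splines)."""
    def __init__(self, in_dim: int, out_dim: int, n_basis: int = 8):
        super().__init__()
        # One coefficient vector per edge: shape (out, in, basis).
        self.coef = nn.Parameter(0.1 * torch.randn(out_dim, in_dim, n_basis))
        self.register_buffer("freqs", torch.arange(1, n_basis + 1).float())

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, in_dim)
        # Evaluate the basis on every input feature -> (batch, in_dim, n_basis)
        basis = torch.sin(x.unsqueeze(-1) * self.freqs)
        # phi_{ij}(x_i) = sum_k coef[j, i, k] * basis_k(x_i), then sum over i.
        return torch.einsum("bik,oik->bo", basis, self.coef)

layer = TinyKANLayer(in_dim=3, out_dim=2)
print(layer(torch.randn(4, 3)).shape)  # -> torch.Size([4, 2])
```

The interpretability hook is that each of those per-edge functions can be plotted and inspected directly, which is where the "interacting with the model" angle comes from.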

Scalability:
KAN shows better parameter efficiency than MLP. This likely also translates to needing less data. We're already at the point with frontier LLMs where essentially all the data available from the internet is used, plus more is made synthetically... so we kind of need something better.

Continual learning:
KAN can handle new input information w/o catastrophic forgetting, which helps to keep a model up to date without relying on some database or retraining.

Sequential data:
This is probably what most people are curious about right now. KANs haven't been shown to work with sequential data yet, and it's unclear what the best approach might be to make that work well, both in training and regarding the interpretability aspect. That said, there's a long, rich history of handling sequential data in a variety of ways, so I don't think getting the ball rolling here would be too challenging.

Mostly, I just love a new paradigm and I want to see more!

KAN: Kolmogorov-Arnold Networks (2404.19756)
posted an update 7 months ago
Benchmarks!

I have lately been diving deep into the main benchmarks we all use to evaluate and compare models.

If you've never actually looked under the hood for how benchmarks work, check out the LM eval harness from EleutherAI: https://github.com/EleutherAI/lm-evaluation-harness

+ check out the benchmark datasets. You can find the ones for the LLM leaderboard on the about tab here: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard, then click a dataset and actually peek at the data that comprises these benchmarks.
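For example (a quick sketch, assuming the HellaSwag dataset id on the Hub hasn't moved and that you have the `datasets` library installed):

```python
from datasets import load_dataset

# Grab one of the leaderboard tasks (HellaSwag here) and look at the raw rows.
ds = load_dataset("Rowan/hellaswag", split="validation")

print(ds.features)  # schema: context, candidate endings, gold label, ...
print(ds[0])        # one sample, exactly the kind of thing models get scored on
```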

It feels to me like benchmarks only represent a tiny portion of what we actually use and want LLMs for, and I doubt I'm alone in that sentiment.

Beyond this, the actual evaluation of model responses is extremely strict and often relies on fairly rudimentary NLP techniques when, at this point, we have LLMs that are more than capable of evaluating and scoring responses themselves.
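To sketch what I mean (just an illustration, not a proposal for a standard; the client, model name, and 1-10 rubric here are all placeholder choices):

```python
# Rough sketch of LLM-based scoring, using an OpenAI-compatible client purely
# as an example; the model name and the 1-10 rubric are placeholders.
from openai import OpenAI

client = OpenAI()

def judge(question: str, answer: str) -> int:
    """Ask a judge model for a 1-10 score instead of exact string matching."""
    prompt = (
        f"Question:\n{question}\n\n"
        f"Candidate answer:\n{answer}\n\n"
        "Rate the answer's correctness and helpfulness from 1 to 10. "
        "Reply with the number only."
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder judge model
        messages=[{"role": "user", "content": prompt}],
    )
    # Naive parse; a real harness would validate the judge's output.
    return int(resp.choices[0].message.content.strip())
```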

It feels like we've made great strides in the quality of LLMs themselves, but almost no change in the quality of how we benchmark.

If you have any ideas for how benchmarks could be a better assessment of an LLM, or know of good research papers that tackle this challenge, please share!
posted an update 9 months ago
Working through the Reddit dataset, one thing that occurs to me is that we pretty much always train LLMs on conversations between 2 parties, like Bot/Human or Instruction/Response.

With internet data it seems far more common to have multi-speaker/group discussions with a dynamic number of speakers. This also seems more realistic to the real world and requires a bit more understanding to model.
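Roughly the kind of structure I'm picturing (just a sketch; the speaker tags and serialization are made up, not an existing format):

```python
# Made-up structure for a dynamic-speaker thread; the tags and serialization
# format are only illustrative, not an existing standard.
thread = [
    {"speaker": "user_17", "text": "Anyone benchmarked this on long context?"},
    {"speaker": "user_02", "text": "Yes, quality drops off past a certain length."},
    {"speaker": "user_51", "text": "Same here, though fine-tuning helped a bit."},
]

def serialize(turns):
    """Flatten a thread into tagged training text, one line per speaker turn."""
    return "\n".join(f"<{t['speaker']}> {t['text']}" for t in turns)

print(serialize(thread))
```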

Is there some research into this? I have some ideas of how I'd like to implement it, but I wonder if some work has already been done here?
replied to their post 9 months ago

I actually came to the realization that not only could this dataset cover multi-turn conversations, it could also handle multiple speakers.

So far we only have instruct pairs like bot/human, but instead we could have 3, 5, 10, etc. entities in the discussion.

replied to their post 9 months ago

You can also try the torrent above. I've never seen that particular one; thank you Rasmus for sharing, it gets a little closer to current than what I have. I have historically found torrents for large datasets like this to be kind of a nightmare, however. Worth trying it ...and seeding, though.

I am using the BigQuery version of it here: https://bigquery.cloud.google.com/table/fh-bigquery:reddit_posts.full_corpus_201509

There are other locations mixed around like https://archive.org/download/2015_reddit_comments_corpus/reddit_data/ which has 2007 to 2015...etc.

There are small chunks of this dataset everywhere but it'd be awesome to get it all on HF in some form.

posted an update 9 months ago
Hi, welcome to my first post here!

I am slowly wrangling about 5 years of Reddit comments (2015-2020). It's billions of samples in total that can be turned into comment-reply pairs or chains of discussion, and filtered by subreddit, up/down votes, controversy, sentiment, and more.
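For a rough idea of what the comment-reply pairing looks like (a simplified sketch assuming the newline-delimited JSON comment dumps and their usual fields; the score threshold is arbitrary):

```python
# Simplified sketch of extracting comment-reply pairs from a newline-delimited
# JSON Reddit comment dump (fields per the public dumps: id, parent_id, body,
# score). A real pass over terabytes would stream/shard instead of loading
# everything into memory.
import json

def reply_pairs(path: str, min_score: int = 2):
    by_id = {}
    with open(path) as f:
        for line in f:
            c = json.loads(line)
            by_id[c["id"]] = c
    for c in by_id.values():
        parent_id = c["parent_id"]
        # parent_id is prefixed "t1_" when the parent is another comment
        # (as opposed to "t3_" for the submission itself).
        if parent_id.startswith("t1_") and c["score"] >= min_score:
            parent = by_id.get(parent_id[3:])
            if parent:
                yield parent["body"], c["body"]
```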

Any requests or ideas for curated datasets from here? I'll also tinker with uploading the entire dataset, potentially in chunks or something, but it's quite a few terabytes in total, so I'll need to break it up still. I have some ideas for datasets I personally want, but I'm curious if anyone has something they'd really like to see that sounds interesting.
reacted to merve's post with 🤯🤗👍 9 months ago
Migrated all my GPU consuming Spaces to ZERO, it was super easy to do so (add three lines of code and voila!) and the start-up time decreased dramatically as well 💜