shawwn#3694: nah, not looking for anything really.
bmk#1476: Oh
bmk#1476: Some help with memory consumption would be nice
shawwn#3694: have you come up with a name for your server yet?
shawwn#3694: @Brouz out of curiosity, how did you set this in your profile? https://cdn.discordapp.com/attachments/729741769738158194/733427709450911796/unknown.png
shawwn#3694: I wasn't aware that was a discord feature
Brouz#6768: lmao
Brouz#6768: that's custom status
Brouz#6768: https://cdn.discordapp.com/attachments/729741769738158194/733427831220207686/unknown.png
Brouz#6768: just click your profile pic
Brouz#6768: there should be an option
shawwn#3694: aha, thanks!
Brouz#6768: no problem
epai#4354: Joined the server.
aegis#2320: > @shawwn we're still waiting for our data to encode also idk how to set up a public tensorboard
@Sid I have some gpt2 encoded data if you want to start somewhere
bmk#1476: encoded for tpu?
bmk#1476: (tfrecords)
aegis#2320: I don't know the distinction
aegis#2320: json
aegis#2320: 😛
Sid#2121: we have WebText almost done 🙂
Sid#2121: but thanks
aegis#2320: ok
shawwn#3694: if you don't name your server, how can you expect it to work well?
Sid#2121: uhhh
Sid#2121: does it need a name to work?
shawwn#3694: poor server. nameless labor, destined to do one's bidding, with no thanks
Sid#2121: https://tenor.com/view/rick-and-morty-my-purpose-robot-butter-omg-gif-5234754
aegis#2320: you train gpt3
aegis#2320: us-west-1-73-185-23-49 is a name
Sid#2121: "what is my purpose?" "read reddit forever" "oh... my god"
joanna#9481: Joined the server.
ch#4688: Joined the server.
aegis#2320: oh are y'all using python code for the gpt2 tokenization? I wrote one in C++
nikosdim#0036: Joined the server.
Sid#2121: > oh are y'all using python code for the gpt2 tokenization? I wrote one in C++
@aegis we're using python, yeah. it needs to be in the form of tfrecords - can you do that in C++?
aegis#2320: do you have a link to what you're using?
Sid#2121: have you been invited to the repo? it's in #datascripts https://github.com/ConnorJL/GPTNeo/blob/master/datasets/create_tfrecords.py
aegis#2320: yeah
aegis#2320: ah it looks like you just have a copy of the gpt python encoder in there
aegis#2320: I can probably replace that with a python native extension
aegis#2320: so you could leave the rest of the script as python but have a faster bpe function
Sid#2121: Hey @joanna , @ch , @nikosdim ! Welcome to Villa Straylight! Check the channel description for a google doc describing the project and let us know if you can offer any help, or have any questions
aegis#2320: my encoder is a fully compatible port of the python one
Sid#2121: that would be great @aegis
Sid#2121: we may be changing the encoder in the future, though
aegis#2320: do you know how much of a bottleneck the bpe part is for you?
Sid#2121: not off the top of my head, no
aegis#2320: are you thinking of going to another word piece encoding or something else
zphang#7252: HF's tokenizer I presume would be much faster
Sid#2121: someone reached out who is actively working on a new encoding method, but that may be a while away yet
Sid#2121: we were thinking of going char level
Sid#2121: but probably going to stick to bpe for now
aegis#2320: yeah the huggingface tokenizers would probably work here but I dunno how much work to integrate
aegis#2320: I have a totally standalone and hand optimized c++ one I can probably drop in
Sid#2121: if it's gonna speed things along, please submit a PR 🙂
aegis#2320: lemme take a look
aegis#2320: is there a requirements.txt somewhere? (oh I see it's in the readme)
aegis#2320: the main annoying part about adding a native module would be giving you a clean build/install step
Sid#2121: I have never touched a piece of C++ in my life, lol. So I wouldn't know how much that would complicate the process
aegis#2320: looks like huggingface's tokenizers might be easier, if they're gpt2 compatible
Sid#2121: I'm sure they are. We're in contact with huggingface now so we could reach out
aegis#2320: https://pypi.org/project/tokenizers/ looks like enough
aegis#2320: ```from tokenizers import CharBPETokenizer
# Initialize a tokenizer
vocab = "./path/to/vocab.json"
merges = "./path/to/merges.txt"
tokenizer = CharBPETokenizer(vocab, merges)
# And then encode:
encoded = tokenizer.encode("I can feel the magic, can you?")
```
aegis#2320: if that's compatible with gpt2 can just swap out the encode function
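For reference, GPT-2's encoder is the byte-level BPE variant, so a drop-in from the same library would look something like the sketch below (the paths are placeholders for the GPT-2 vocab/merges files):
```
from tokenizers import ByteLevelBPETokenizer

# GPT-2 uses byte-level BPE, so this variant matches its vocab.json/merges.txt
tokenizer = ByteLevelBPETokenizer("./path/to/vocab.json", "./path/to/merges.txt")

encoded = tokenizer.encode("I can feel the magic, can you?")
print(encoded.ids)  # token ids, ready to be written out as tfrecords
```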
Sid#2121: oh nice
shawwn#3694: I already made a performant version of HF's tokenizer, fwiw. It's ready to go
Sid#2121: 👀
Sid#2121: link?
Sid#2121: these are both great btw
aegis#2320: right after they finished tokenizing openwebtext? 😄
shawwn#3694: https://github.com/shawwn/gpt-2/blob/tpu/tokenize_dataset.py
Sid#2121: but *please* someone submit a PR lol. if it's easy to implement pls do it
shawwn#3694: no need to PR when it's a self-contained script
shawwn#3694: should just be a matter of cloning + running.
aegis#2320: it needs to make tfrecords.
aegis#2320: so it needs to be integrated
shawwn#3694: true.
Sid#2121: post the links up in resources at least!
shawwn#3694: well, the only dependency in that script is `tflex_utils.for_each_line`
Sid#2121: so we can find later 🙂
aegis#2320: and `tokenizers`
shawwn#3694: that's huggingface's tokenizers lib
shawwn#3694: which you'd be using one way or the other.
shawwn#3694: and yes, it gives around a 20-40x speed boost.
Sid#2121: i like those numbers
aegis#2320: are you not multiprocessing?
Sid#2121: read the code. idk, i haven't worked on the tokenizing
aegis#2320: no I mean shawwn
shawwn#3694: it's not multiprocessing, no. it's generally painful to set up (at least for me)
shawwn#3694: feel free to ninja the code though.
shawwn#3694: https://github.com/shawwn/danbooru-tools/blob/master/danbooru_to_tfrecord.py was my experience with multiprocessing in python, and it was quite enough
Sid#2121: oh i love the multiprocessing library.
aegis#2320: ah, I can help with python multiprocessing
aegis#2320: for something like this make sure to use a big chunksize
aegis#2320: and imap_unordered
aegis#2320: otherwise it's synchronizing line at a time
shawwn#3694: sure, but integrating it with tqdm is troublesome
aegis#2320: I have code to show for that
Sid#2121: can you guys take this to #the-pile ? thx
shawwn#3694: and I assume you'd want an accurate progress bar.
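A minimal sketch of the pattern aegis is describing, `imap_unordered` with a large `chunksize` wrapped in `tqdm` for the progress bar (the file name and worker body are placeholders for the real tokenization step):
```
import multiprocessing as mp
from tqdm import tqdm

def encode_line(line):
    # stand-in worker: the real version would run the BPE encoder on the line
    return len(line.split())

if __name__ == "__main__":
    with open("dataset.txt") as f, mp.Pool() as pool:
        # a big chunksize lets workers pull batches of lines instead of
        # synchronizing one line at a time; tqdm wraps the result iterator
        encoded = list(tqdm(pool.imap_unordered(encode_line, f, chunksize=1000)))
```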
aegis#2320: -> #the-pile
mapmeld#3161: Joined the server.
dynamicwebpaige#9923: Joined the server.
Sid#2121: Hello @mapmeld , @dynamicwebpaige ! Welcome to Chiba city! Please check out the google doc in the channel description for an overview of the project, and let us know if you have any questions
shawwn#3694: Oh hey, it’s @dynamicwebpaige
Sid#2121: *google has entered the chat*
tawnkramer#6900: Joined the server.
murat#2836: Joined the server.
malaclyps#4923: Joined the server.
samuel s#6205: Joined the server.
lacker#5924: Joined the server.
ForgottenOrb#6802: Joined the server.
ForgottenOrb#6802: how much funding is this project going to need? i saw in the doc that "some" was needed, but didn't see a particular estimate
baragonaru#7305: why not call it OpenOpenAI
Sid#2121: Hey @tawnkramer , @murat , @malaclyps , @samuel s , @lacker ! Welcome to OpenOpenOpenAI! Even *more* open than the one @baragonaru just suggested. Please check the google doc in the description for more info and ask if you have any questions
shawwn#3694: @ForgottenOrb the main expense is cloud bucket storage costs, plus TPU class A operations
Sid#2121: @ForgottenOrb the project is in super early stages right now, and we don't have an accurate estimate for funding. The current estimate is "the more the better" - we didn't really expect the ball to get rolling quite this quickly, and haven't thought too much about money. If you're interested in talking about funding though, please reach out
ForgottenOrb#6802: this is me reaching out
baragonaru#7305: also where does the main TPU funding come from? TPUs don't grow on trees
Sid#2121: ok, would you be willing to contribute? or do you just want to know estimates
ForgottenOrb#6802: both
Sid#2121: we don't have any funding channels sorted out right now but will certainly keep you in mind. Storage will be the main cost.
Sid#2121: do you have an email address we could contact you at?
ForgottenOrb#6802: lacker at gmail
murat#2836: bucket storage for training data?
jekbradbury#2280: Joined the server.
shawwn#3694: Also class A operations, not-insignificant
Sid#2121: thanks @ForgottenOrb we'll send you a mail within the next few days
shawwn#3694: Every time a TPU fetches a tfrecord, it’s a class A op
Sid#2121: sorry @baragonaru missed this one. we have TFRC creds
shawwn#3694: And it does that constantly during training, for shuffling
baragonaru#7305: so I don't know much about TPUs -- what would the approximate NVIDIA equivalent of what you have in TFRC be, if anything?
shawwn#3694: 512 GPUs
shawwn#3694: It’s mostly equivalent core-for-GPU.
shawwn#3694: At least for our stylegan2 tests.
baragonaru#7305: Cool. and they have good interconnectivity of course...
shawwn#3694: Yeah, it’s one of the best parts
baragonaru#7305: So its like a tower of 512 V100's or something?
baragonaru#7305: approximately
shawwn#3694: Essentially yes. We trained 1024x1024 stylegan2 in 6 days on a v3-32 to *seemingly* the same FID
shawwn#3694: The reason it’s equivalent is because we used model parallelism in order to scale it up from 512 to 1024. It’s a bit hard to explain, but with TPUs you can trade cores for memory
shawwn#3694: So the v3-32 was equivalent to “divide all the cores by 4” i.e. 8 cores, and each core is mostly equivalent to a V100
shawwn#3694: Without the “divide all cores by 4” trick, it would have required 44GB per core, which is infeasible
shawwn#3694: (See https://cloud.google.com/tpu/docs/spatial-partitioning for more details)
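Rough numbers behind the trick, taking the 44 GB figure above and the 16 GB of HBM each v3 core has (mentioned later in the chat):
```
full_res_per_core = 44   # GB needed per core at 1024x1024, doesn't fit in 16 GB
partitioned = 44 / 4     # = 11 GB per core with 4-way spatial partitioning, fits
effective_cores = 32 / 4 # = 8 "full-memory" cores on a v3-32
```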
shawwn#3694: So TPUs are far more flexible than a tower of gpus, and I’m surprised more people haven’t noticed
baragonaru#7305: spatial partitioning ... well that's pretty fascinating
baragonaru#7305: and i assume it "just works" including needing to calculate some edge overlaps on the convolutions..
jekbradbury#2280: just FYI: 3640 PFLOP/s-days divided by a v3-512 comes to about 4 months if you're 100% compute bound and fully utilizing the TPU matrix units
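A rough unpacking of that estimate, assuming the ~420 TFLOPS peak Google quotes for a v3-8 (about 52.5 TFLOPS per core):
```
peak_pflops = 512 * 52.5e12 / 1e15   # ≈ 26.9 PFLOP/s for a v3-512
compute_budget = 3640                # PFLOP/s-days of training compute for GPT-3
days = compute_budget / peak_pflops  # ≈ 135 days ≈ 4.5 months at 100% utilization
```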
bmk#1476: We were planning on ~~begging~~ asking google nicely if we could somehow use a 2048 more than about an hour a day, lol
bmk#1476: See the document for compute estimates for 1T
bmk#1476: #documentation
jekbradbury#2280: ok, you have reasonable estimates there
shawwn#3694: The spatial partitioning did “just work” — I was so amazed that I wrote them a thank you email
shawwn#3694: It was just a matter of flipping a few config options.
bmk#1476: Tf usually never just works
shawwn#3694: Yes
shawwn#3694: It was highly surprising
jekbradbury#2280: this is XLA, not TF
shawwn#3694: Yes yes
shawwn#3694: Either way, the usual mode is failure
shawwn#3694: Also google reeeeally needs to give us access to the raw HLO outputs
jekbradbury#2280: also i'll shut up before i say anything i shouldn't 😛
jekbradbury#2280: what do you mean?
jekbradbury#2280: IIRC you can access the optimized HLO graphs through the profiler
shawwn#3694: Well, the tf graph is compiled to XLA, but the compilation step happens on the TPU
shawwn#3694: There’s no visibility into what precisely it’s generating. And yeah the profiler does give some info there
shawwn#3694: It has to, in order to profile
shawwn#3694: But it’s not the same as being able to see the full XLA-compiled graph |
shawwn#3694: I’ve been meaning to reverse engineer how the profiler works, in hopes of maybe being able to trick the TPU into dumping the compiled output (or an equivalent)
jekbradbury#2280: i think we even expose the optimized HLO in JAX TPU colabs
baragonaru#7305: ok, pardon my ignorance, but if hypothetically one did have say 500 V100 GPUs and they had very poor interconnectivity, let's say internet-distributed, would you be able to train something like GPT3 at all?
baragonaru#7305: as a thought experiment.
bmk#1476: it would be very difficult
shawwn#3694: Speaking of Jax, have ya’ll made any progress in being able to use the TPU CPU explicitly?
bmk#1476: still possible, just requiring a LOT of cleverness
shawwn#3694: It’s pretty much the only thing keeping me from switching
jekbradbury#2280: yes, we have
jekbradbury#2280: hoping to share more soon
shawwn#3694: Each TPU CPU has 300GB of memory.
shawwn#3694: Good, good. Can’t wait
shawwn#3694: Genuinely hyped for that feature
shawwn#3694: It’ll bring Jax (and possibly pytorch) to feature parity with tensorflow
shawwn#3694: When I tried to use Jax with a TPU pod, the /requestversion step failed though
shawwn#3694: It seems that it’s not currently possible to test it with pods
jekbradbury#2280: it definitely works but the experience is pretty bad for now
jekbradbury#2280: (we have at least one external user regularly using JAX on pods)
jekbradbury#2280: if you post more info about the requestversion failure we can look into that
shawwn#3694: Oh? Any tricks to get it running? Sure thing, here's the email:
shawwn#3694: https://cdn.discordapp.com/attachments/729741769738158194/733482294311911464/image0.png,https://cdn.discordapp.com/attachments/729741769738158194/733482294685204551/image1.png
shawwn#3694: Referenced notebook: https://colab.research.google.com/github/google/jax/blob/master/cloud_tpu_colabs/Pmap_Cookbook.ipynb
jekbradbury#2280: sorry, yeah, that version is out of date
jekbradbury#2280: try tpu driver nightly (with some spelling, i can look it up)
shawwn#3694: Excellent, thanks for the tip!
jekbradbury#2280: yep, should be `tpu_driver_nightly`
shawwn#3694: Cool. Is requestversion still necessary? That sounds almost equivalent to creating the pod with version set to “nightly”
shawwn#3694: But it’s probably different
zphang#7252: would you consider yourself bullish on TPUs relative to GPUs?
shawwn#3694: Absolutely
jekbradbury#2280: it's very different from the TF nightly
shawwn#3694: Figured. Alright, I’ll give that a try
shawwn#3694: (Open source the code! Let me fix bugs! Or let me sign an NDA!)
jekbradbury#2280: the TF server exposes both a CPU host and TPU chips over gRPC; the tpu_driver server exposes only the chips
AwokeKnowing#1356: Joined the server.
vunderkind#7876: Joined the server.
shawwn#3694: welcome @AwokeKnowing and @vunderkind. Everyone else is sleeping, so I get to be the de facto greeter.
shawwn#3694: Check the channel topic for the project roadmap, and ask questions
vunderkind#7876: Thanks @shawwn! Came over from a share on Twitter - checking out the doc now!
sammyjack#9845: Joined the server.
shiva.adam#8165: Joined the server.
bmk#1476: welcome @shiva.adam @sammyjack to LibreAI! sorry, I'm not very good at the whole creative welcome messages thing haha
shiva.adam#8165: Hi, thank you!!. trying to catch-up with what's been happening here
bmk#1476: tl;dr our flagship project is replicating GPT-3. After that we're shooting for 1T parameters. Along the way we're making some awesome datasets and libraries. And all this will be completely free and open source!
bmk#1476: (and we can use all the help we can get)
wuthefwasthat#2719: Joined the server.
bmk#1476: hey @wuthefwasthat , welcome to LibreAI!
Deleted User#0000: Joined the server.
shawwn#3694: @Deleted User welcome to the jungle, we got fun and games 🎵
joseph b#4554: Joined the server.
shawwn#3694: @joseph b welcome to shitposting central. I’m your resident greeter, because everyone else is asleep. Be sure to grab a beer and check the channel topic
Pfiresword#0366: Joined the server.
shawwn#3694: @Pfiresword I swear you joined earlier
shawwn#3694: Deja vu~~
chirp#4545: Joined the server.
shawwn#3694: Welcome @ericme3
Amy#4210: Joined the server.
sibeshkar#2275: Joined the server.
Pfiresword#0366: > @Pfiresword I swear you joined earlier
Probably TPU podcast from this morning haha
shawwn#3694: Oho, thought so
shawwn#3694: Welcome to TPU Podcast #2
shawwn#3694: I hereby claim the server until everyone wakes up. Till then, check the channel topic!
shawwn#3694: Also they have some interesting goals
shawwn#3694: I think they have a strong chance at replicating gpt3. Probably the best of anyone outside OpenAI
shawwn#3694: Still going to be tough
ExMachina#1693: Joined the server.
bmk#1476: from what i gather, aren't we the *only* ones seriously trying? (outside the big labs ofc, where we cant be sure what's happening)
jili#4497: Joined the server.
Nax#8383: Joined the server.
Alm#9130: Joined the server.
Sid#2121: Hey @Amy , @ExMachina , @jili , @Nax , @Alm ! ***wipes sweat***, welcome to LibreAI! the place where we are definitely working on creating large language models and are not just GPT-clones greeting everyone that joins the discord! Please check the channel description for more info and let us know if you have any questions about the project.
Nax#8383: Hey guys 🙂
Nax#8383: I believe we can replicate GPT-3 with limited resources
Sid#2121: https://tenor.com/view/simpsons-homer-bart-lisa-join-us-gif-13066208
Sid#2121: me too! @Nax
Nax#8383: 🎉
jili#4497: If you guys need GPUs...
jili#4497: I have like 250RTX a few DGX2 and V100 waiting for your workloads
jili#4497: DGX1 too
Nax#8383: Actually I am reading this paper: https://arxiv.org/pdf/2006.16236.pdf
Sid#2121: that is a very tempting offer @jili , but our current pipeline is all tpu bound atm. The bottleneck is actually cpu, if you have any access
jili#4497: I’ve CPus too
jili#4497: Just tell me what u need
Sid#2121: @jili we'll be distributing our data processing pipeline across the several people who have offered us compute. Can i add you to the list? could you send details of what you have access to?
jili#4497: Yep please add me
Nax#8383: > Actually I am reading this paper: https://arxiv.org/pdf/2006.16236.pdf
@Nax and given large enough data we can run it on normal GPUs for a few weeks and should be able to reproduce
Sid#2121: @Nax you think you could implement in tf-mesh? 👀 looks interesting
Sid#2121: ok thx @jili - will you be easily reachable over discord or would some other contact method be better?
Nax#8383: > @Nax you think you could implement in tf-mesh? 👀 looks interesting
@Sid I am not very good with TF but can work with PyTorch+GPUs xD
arbdev#0001: Joined the server.
Sid#2121: Ah. I wish i could say that were helpful @Nax but we'd have to rebuild our whole model lol. I'll post the paper up in #links since it seems an important thing for us to try to implement eventually
SF#9928: Joined the server.
Sid#2121: Hey @arbdev , @SF ! Welcome to Wintermute's airgapped playground! Check the google doc in the description for more info on the project & please reach out if you have any questions.
Nax#8383: > I have like 250RTX a few DGX2 and V100 waiting for your workloads
@jili wow that's great. I wanna try linear-attention transformer on web text dataset xD
newmarkantony#6684: Joined the server.
jili#4497: > ok thx @jili - will you be easily reachable over discord or would some other contact method be better?
@Sid better with twitter @jilijeanlouis
Sid#2121: merci @jili !
satosh3944#3941: Joined the server.
Sid#2121: Hey @satosh3944 ! Welcome to the LibreAI, the server where we truly have run out of custom welcome messages. Please check the google doc in the channel description for more info 🙂
shawwn#3694: @jili is the one who helped us with servers -- he's a server god, as far as I'm concerned
Aimely#0568: Joined the server.
atoms#7386: Joined the server.
peterli100#7294: Joined the server.
Soth02#0063: Joined the server.
Daj#7482: Hey @Aimely @atoms @peterli100 @Soth02 and all the other (many) people that joined! Welcome to LM To Go™️! Please check the google doc in the channel description for more info!
Deleted User#0000: Joined the server.
Daj#7482: Hey @Deleted User ! Welcome to the Amazing TPU Melting LM! Check the channel topic for current info on our project and don't hesitate to ask questions!
Nil#3117: Joined the server.
Daj#7482: Hey @Nil ! Welcome to That One Discord Server You Saw On Twitter™️! Check the channel topic for current info on our project and don't hesitate to ask questions!
AB#7870: Joined the server.
kevinw#4330: ok here is a question. i hope @Daj doesn't see it as picking on them, and apologies if this has been discussed already. in their medium post about their project to replicate GPT2 1.5B about a year ago, @Daj reports some metrics and discusses that the project was not successful in replicating GPT2's capability. it made me wonder if despite the clarity of the OpenAI publication and the simplicity of the transformer model, there is still some "secret sauce" at OpenAI that will be hard to replicate.
Daj#7482: That's a totally fair point and I speculated as much at the time. We'll be trying several runs until we have a 1.5B at parity with OpenAI before moving to larger models
Daj#7482: Also hi @AB ! Welcome to the Paper Mills! Check the channel topic for current info on our project and don't hesitate to ask questions!
Daj#7482: and for the record, as clear as OA's papers are, there are definitely key parameters left out that they also weren't super clear on when I asked them hah
kevinw#4330: okay, so it sounds like an early contribution of this effort, besides putting together a great dataset, could be to sort of elucidate what happened there. actually, not an ai researcher here, so i have no idea whether someone else has independently reimplemented GPT2 1.5B (to the same level of capability) in the meantime
Daj#7482: Yep, we also want to be a kind of replication study of GPT models! I'm actually not sure myself what other teams have attempted 1.5B replication. I'm aware of the Brown team and they got similar results (though with a different model architecture iirc, not sure if they retried with vanilla GPT after)
Daj#7482: There's a lot of unknowns as it comes to best practices in training huge LMs like these, luckily we have the compute to burn to hopefully help with that
kevinw#4330: huge respect to you for working on this ambitious project out in the open, and the open discussion of ethics
Daj#7482: It's been a pleasure and an honor, I'm very lucky to have been allowed to work with such cool people :)
bader#1277: Joined the server.
Daj#7482: Hey @bader ! Welcome to the TPU Circus! Check the channel topic for current info on our project and don't hesitate to ask questions!
Kihe#6720: Joined the server.
magic#9103: Joined the server.
Daj#7482: Hey @Kihe @magic ! Welcome to the Order of TPUs Who Go Brrrr! Check the channel topic for current info on our project and don't hesitate to ask questions!
old#3101: Woah lots of google/openai people here now
bmk#1476: @shawwn @Daj I believe there is a fault in communication rather than intention here, let's work this out amicably instead of attacking each other. I agree with your suggestion; we should have made it more clear that documentation channels were only for links, or relaxed the requirements to allow non-links too (I'm personally in favor of this), or maybe added a non-links resources channel. However, I think it's counterproductive for us to have massive `bikeshedding` meta discussions where we accuse each other of bad faith rather than reach an agreement over how we should do stuff.
Daj#7482: Yea I think we're all good
Daj#7482: Policy is cleared up, everything fine I think
bmk#1476: alright
shawwn#3694: well, if you say everything is fine then everything must be fine
bmk#1476: I personally think we should allow concise non-link messages too, as long as they're not discussion
Daj#7482: Anything worth archiving we keep, links or short data pieces/messages
bmk#1476: @shawwn please, let's try to reach an agreement on what we should do rather than being passive-aggressive about it
Daj#7482: Resources is for summaries of longer discussions we had elsewhere and links we might want in the future
Daj#7482: imo
shawwn#3694: what's the point? the people here basically shout at each other until a consensus is reached. it's a recurring theme
Daj#7482: Uhm
Daj#7482: I have not made that experience whatsoever, I'm sorry you feel that way
shawwn#3694: listen. people are fine and happy with how this is done
shawwn#3694: so be it.
shawwn#3694: from my perspective, outside ideas are hardly even considered
Daj#7482: I'm sorry you feel that way because that is not what we're trying to convey
Daj#7482: We've tried to get everyone involved that wants to
Daj#7482: If other people mirror this stance I of course want to remedy it however possible
shawwn#3694: I was probably too grumpy because my illustrious(TM) wisdom(TM ) was deleted from documentation. I meant to express concern that perhaps the bike shedding is a bit more than just a meme
Daj#7482: I apologize for that, I can guarantee you it was not meant maliciously
shawwn#3694: it's alright.
Daj#7482: :)
shawwn#3694: anyway, yes, message clear. I'll shut up. I haven't exactly been contributing productively anyway.
Daj#7482: As from day 1, we always welcome your contributions, I am sorry if we ever made you feel unwelcome but we're doing our best. I apologize for the small policy miscommunication, it shouldn't happen again now. Please know that you and everyone else is always welcome to contribute or not as they please :)
shawwn#3694: nah, it was just me being irrational. I think what bugged me was that you all got an absurd amount of momentum in two days that I had to work two months for. So it was mostly a personal failing that it came out as criticism
shawwn#3694: I hope that it continues though.
shawwn#3694: (truly)
Daj#7482: Hey I understand it, and you're fully forgiven from my side :) You worked crazy hard and made crazy cool stuff, we just got lucky in so many ways
Daj#7482: Thanks man! You inspired this place in big part too so thanks for all you've done and I hope we can both have a productive and fun time going forward!
shawwn#3694: mm, it's not quite luck. I think you have a real shot at replicating GPT-3. mesh tensorflow will be an interesting challenge...
Daj#7482: Yea we've got a good team, we're working hard and we'll get it done!
Deleted User#0000: @Nax I have that paper implemented here https://github.com/lucidrains/linear-attention-transformer/
Deleted User#0000: Feel free to try it! I don't think it is as good as full attention at lengths greater than 2048
Sid#2121: thanks @Deleted User ! Love your work 🙂 Teven from HF warned us that linear attention might not be applicable to NLP yet - check his comment out in #links
Deleted User#0000: @Sid thank you Sid, and thank you for your help with tensorflow mesh yesterday! I'm making progress with Pytorch implementation of Mixture of Experts
Sid#2121: oh awesome
Sid#2121: yeah the mesh code is a little confusing
Deleted User#0000: most tensorflow stuff is confusing
Sid#2121: they were just trying to make everything modular, i think
Sid#2121: true true
Deleted User#0000: Ahh Teven is optimistic on RT and Linformer. You can also try those solutions. There are some people using RT with good results https://github.com/lucidrains/routing-transformer
Deleted User#0000: Linformer is not made to be GPT-2 like (it cannot be auto-regressive)
Deleted User#0000: It also has the downside of working only with a fixed sequence length
Deleted User#0000: It's meant to be a BERT like replacement
Deleted User#0000: Great channel! I'm glad there's people that are working on freeing AI! And it will be freed!
Deleted User#0000: It turns out it's just attention, at scale
Sid#2121: > It turns out it's just attention, at scale
@Deleted User pin dis
Daj#7482: Well if you ever want to brave the TF wastelands and help us out, we'd _love_ to try alternative transformers, but just don't have the manpower
Sid#2121: > It also has the downside of working only with a fixed sequence length
@Deleted User I'm pretty sure we can only work with fixed sequence lengths anyway, since tpus
Deleted User#0000: @Daj I'll get back to you all once I finish my last alternative transformers project https://github.com/lucidrains/memory-transformer-xl/
Deleted User#0000: Would love to help out
Daj#7482: Awesome! You really are productive damn
Sid#2121: mannn i wish you were on the dark tf side @Deleted User . Your transformers are amazing
Deleted User#0000: Singularity or bust.
Daj#7482: Pinned a message.
Daj#7482: Disclaimer: This is not LibreAI's non-tongue-in-cheek opinion
Daj#7482: lol
Teven#6831: hey @Deleted User fancy seeing you here! Yeah Linformer as presented in the paper isn't an autoregressor, but I've been meaning to make it one. It's a bit annoying that you can't cache the keys and values, but I believe you can work around the hidden states. Still not quite mature, you're right
bmk#1476: singularity without singletons is our goal
Daj#7482: Singularity without x, s, or any other alphabetical -risk haha
Deleted User#0000: Hi @Teven ! Belinda also told me they had plans to make it auto-regressive! Very interested to see what you both come up with eventually
unography#9216: Joined the server.
shawwn#3694: Why don’t you put up a patreon? Emily said she doubted you’d get $1k/mo. That seems very likely to me
Daj#7482: Hey @unography ! Welcome to LibreAI, proud producers of the finest inefficient GPT implementations™️. Check the channel topic for infos and don't hesitate to ask questions!
shawwn#3694: It would also give people a direct way to contribute, which you’ve till now had no way of.
Daj#7482: Was that question adressed at me/LibreAI or someone else, shawn?
shawwn#3694: I was asking tim, actually
Daj#7482: Ok just wasn't sure :)
shawwn#3694: Yes I was talking to LibreAI.
Daj#7482: So the question was about whether LibreAI should set up a patreon? If so the simple answer is we've just not gotten that far with planning haha
shawwn#3694: It’d take about 30 minutes
Daj#7482: Yea but there's some meta stuff like who handles the money, the name LibreAI is technically taken, etc
Daj#7482: We'll get around to it if we decide that's the right route
shawwn#3694: Make it your own patreon then
Daj#7482: Direct donations may be more useful
shawwn#3694: Free $1k/mo recurring revenue
Daj#7482: Haha I don't think I personally deserve money for what I do
shawwn#3694: Life changing sums for some.
Daj#7482: Or that people would contribute
Daj#7482: Don't get me wrong I would love it lol
shawwn#3694: Then do it. I already get $104/mo for essentially nothing in particular
shawwn#3694: I only urge it because it won’t last. Momentum has a way of vanishing
Daj#7482: Might be good advice, I'll consider it
unography#9216: @Daj thanks for the welcome! this looks like a cool place to hang around and break stuff
unography#9216: just wondering — is it possible to train gpt models for svg generation? has there been similar experiments?
Daj#7482: Haha that's the spirit!
Daj#7482: I don't know of any experiments but in theory it should be doable, I like the idea
Sid#2121: @unography not gpt and probably not quite what you're looking for but I used stylegan / biggan to generate fonts (then converted to svgs)
Sid#2121: I love this idea tho
Sid#2121: (in fact - this is what our logo text is made from)
zaoyang#8792: Joined the server.
Daj#7482: Hey @zaoyang ! Welcome to the Least Financially Savvy AI Lab! Check the channel topic for infos and don't hesitate to ask questions!
chris-tn#9278: Joined the server.
Daj#7482: Hey @chris-tn ! Welcome to the Final Boss of LMs! Check the channel topic for infos and don't hesitate to ask questions!
jjjjj#1483: Joined the server.
jjjjj#1483: sup Daj
Daj#7482: Hey @jjjjj ! Welcome to the LM Defense Force! Check the channel topic for infos and don't hesitate to ask questions!
ivan#3200: Joined the server.
Daj#7482: Hey @ivan ! Welcome to the LM Forgers Guild! Check the channel topic for infos and don't hesitate to ask questions!
shawwn#3694: Server passed 200 people
goolulusaurs#1571: How are people even finding out about us?
shawwn#3694: Twitter, I think
shawwn#3694: Hugging face CEO wanted to replicate GPT-3, and like four people replied with a discord link here
Sid#2121: I think @gwern posted on the openai slack too
Bartolomeo#6807: Joined the server.
shawwn#3694: @Bartolomeo welcome. Check the channel description for the roadmap and feel free to ask questions.
vessenes#0001: Joined the server.
shawwn#3694: Hi @vessenes. Welcome to OpenClosedOpenAI. The roadmap is in the channel description
tomislav_REGEX#7247: Joined the server.
shawwn#3694: Hello @tomislav_REGEX. We’re a big fan of regexes here, so please feel free to compile yourself
vessenes#0001: Hey @shawwn . Does the doc have it right that current estimates are for 192 TPU-days for a GPT3+ scale training?
shawwn#3694: For what it’s worth, the Jax team lead looked over the doc and concluded the estimates were reasonable
vessenes#0001: If preemptible works, that comes out to like $6-10k for compute? I'm just trying to get a sense of scale of cost here. In my mind GPT3 was a lot more expensive than that to train.
shawwn#3694: Luckily, we have the generosity of TFRC, which enables us to train mostly for free using TPU pods. However there are still significant costs: mostly GCE bucket storage and access
vessenes#0001: And of course there's a lot of text processing to do ahead of training
vessenes#0001: what's TFRC?
shawwn#3694: Absolutely. We have a channel related to that: #the-pile
shawwn#3694: Tensorflow research cloud. You can apply, too
vessenes#0001: Got it.
shawwn#3694: They’re pretty generous with the TPU credits.
zphang#7252: I had a follow-up question: are you bullish on TPUs relative to GPUs, conditioned on cost?
shawwn#3694: Absolutely. In my opinion, Nvidia can’t compete — or if they will, they have some serious catching up to do
baragonaru#7305: We should add a FAQ section to the doc for some of these
shawwn#3694: One large reason I’m bullish on TPUs is that for every 8 cores, you also get a CPU with 300GB memory
shawwn#3694: And each core has 16GB
shawwn#3694: So it’s both a data processing pipeline and a training pipeline
zphang#7252: do you think that that's something that NVidia is unable to do, or they just find no need to given that their current business model works well
(Put another way, could they easily pivot if they felt the need to?)
shawwn#3694: I think before long many companies will come to the horrific realization that 1. In order to compete, they need a cloud platform; 2. They also need serious hardware talent, and the ability to execute on it; 3. The number of companies with both of these is essentially one.
shawwn#3694: Worse, large scale ML training is the future, so they’ll be at a serious disadvantage
baragonaru#7305: NVidia is also more broadly focused, they view GPUs as general purpose processors for which ML is currently one good application
zphang#7252: Do they need a cloud platform, though? It sounds like NVidia would compete as the on-premise counterpart to Google/TPUs, in that scenario
shawwn#3694: Possibly. But then they’re fighting a losing battle in terms of ecosystem
shawwn#3694: People are already fed up with cuda relative to tensorflow
shawwn#3694: For all of tensorflow’s flaws, there’s still a broader ecosystem there
shawwn#3694: One that you can easily jump into.
shawwn#3694: Jax and pytorch TPU support is also on the horizon (full support, not the current version)
zphang#7252: ah, from my (academic) side of the world people seem much more fed up with tensorflow
shawwn#3694: True enough. If it was solely tensorflow, I would be hesitant
shawwn#3694: But with the arrival of Jax and pytorch within two years, people are going to notice TPUs in a big way
shawwn#3694: (Right now they can’t use TPU CPUs, only the chips, which removes one of their biggest advantages)
zphang#7252: I guess I'm thinking that if one day NVidia realizes "we need a TPU-scale machine", they're still in a position to catch up
zphang#7252: e.g. partying up with Azure or what not
shawwn#3694: Would they really sacrifice their DGX line?
shawwn#3694: Azure is actually the most interesting counterparty to TPUs
zphang#7252: everytime my jobs get assigned to a DGX machine they crash, so I'm not bullish on that :p
shawwn#3694: One competitor looks like this: Facebook teams up with Azure
shawwn#3694: Oh? That’s interesting to know
shawwn#3694: What’s a DGX like?
zphang#7252: eh it's probably just a configuration thing
zphang#7252: but there's 1 (maybe more?) DGX in my university cluster I think and I have it on my exclude list
shawwn#3694: Interesting. What sort of exclude list? I’m curious about your workflow
zphang#7252: it's just slurm
bmk#1476: @vessenes 6-10k? That sounds like it's off by two or three orders of magnitude
bmk#1476: (ofc it's free to us since we use TFRC, but still- 10k would be a bargain for what we're planning to do)
shawwn#3694: And we’re back.
psgeorge#6388: Anyone have tips for downloading several 10-20 TB of potentially sketchily sourced data using/not using GCP?
shawwn#3694: I would use Hetzner for this
shawwn#3694: The advantage is that ingress into GCE is free
shawwn#3694: Once the data is in GCE, you’ll pay a fortune to get it out.
psgeorge#6388: Ahh, okay. I'll look over Egress costs then. We have several thousand GCP credits to burn.. but good to double check
psgeorge#6388: Anything that seems useful for you guys I'll see if we can pipe your way. Did anyone get in touch with the Archivist?
zswitten#0371: Joined the server.
shawwn#3694: Hi @zswitten, welcome to The Spot. Check the channel description for the project roadmap and feel free to ask questions.
shawwn#3694: @psgeorge hmm, I think @bmk would know, but he’s away at the moment.
Sid#2121: @psgeorge nope not yet - we've all been rather busy but I plan to reach out to a load of people tomorrow, archivist included
psgeorge#6388: @Sid awesome. Super busy on my end too. Is there anyway I can help with TFRC credits?
Sid#2121: 👀 if you have them, the more the better, of course
Sid#2121: If you can somehow magically get us tpus larger than 512 cores that won't get pre-empted within an hour, that would be even better
shawwn#3694: @psgeorge the most directly useful thing might actually be GCE credits
Sid#2121: at the moment, yeah. but when we have proper data and start testing at scale the more tpus the better
shawwn#3694: A v3-2048 is “only” 4x faster than a v3-512. But the GCE credits will go away fast, especially during training
psgeorge#6388: @shawwn how can I put them to use?
shawwn#3694: Hosting cloud buckets would be the way. We can point TPUs at it
shawwn#3694: VMs are relatively cheap in comparison
shawwn#3694: It would be a matter of creating a bucket in Europe-west4 and adding us to the permissions tab as storage admins.
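One way that setup could look with the `google-cloud-storage` Python client (bucket name and email are placeholders; the Cloud Console permissions tab works just as well):
```
from google.cloud import storage

client = storage.Client()

# create the bucket in europe-west4, the region the TPU pods live in
bucket = client.create_bucket("example-gptneo-data", location="europe-west4")

# grant a collaborator Storage Admin on just this bucket
policy = bucket.get_iam_policy(requested_policy_version=3)
policy.bindings.append({
    "role": "roles/storage.admin",
    "members": {"user:collaborator@example.com"},
})
bucket.set_iam_policy(policy)
```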
ExGenesis#0031: Joined the server.
Sid#2121: Hey @ExGenesis ! Welcome to the seedy dive bar of AI research! let us know if you have any questions, and check the link to the google doc in the channel description for more info on our project
bg#3333: Joined the server.
ajzdad#6940: Joined the server.
Kazumi#1297: Joined the server.
Daj#7482: Hey @bg @ajzdad @Kazumi ! Welcome to the Tensorflow Trenches! Please check the channel topic for info and don't hesitate to ask questions!
Kazumi#1297: Hello
swapp#5133: Joined the server.
Sid#2121: Hey @swapp ! Welcome to the restaurant at the end of the universe! Let us know if you have any questions, and check the google doc in the channel description for more info on the project
MarkX#7373: Joined the server.
Sid#2121: Hey @MarkX ! Welcome to the Technopoly HQ! Let us know if you have any questions & check the channel topic for info
Noa Nabeshima#0290: Hi @MarkX!
Joscha#2969: Joined the server.
plinz#6161: Joined the server.
Sid#2121: Hello @Joscha , @plinz ! Welcome to the GPT-discord-simulator, where everyone may or may not be a chatbot, who knows... Pls check the channel description for info on the project and reach out if you have any questions
Daj#7482: Hey @Joscha ! Nice to see you here
huh#0141: Joined the server.
prempv#0575: Joined the server.
Sid#2121: Hey @huh @prempv ! Welcome to the land of the grey goo! pls check the google doc for a description of what we're doing and reach out if you have any questions
Gab#4311: Joined the server.
DavidD#9557: Joined the server.
Noa Nabeshima#0290: Hey, it adds extra overhead, but maybe alternative sample-efficient-pretraining objectives like ELECTRA (https://openreview.net/forum?id=r1xMH1BtvB) would be good? It's not clear if ELECTRA will scale and it seems tricky to get right, but maybe some other pretraining objectives are simple and will be much faster than next-token prediction. How long do we expect training to take with our current resources?
bmk#1476: interesting
bmk#1476: I'm skeptical as to the usefulness of this but it could be worth exploring
kindiana#1016: i wonder if anyone has applied ELECTRA to causal attn in the discriminator (I suppose it doesn't matter what the generator uses)
kindiana#1016: might be interesting to combine electra with gradient origin networks (https://arxiv.org/abs/2007.02798) to make the discriminator directly act as a generator when sampling
zphang#7252: I'm not sure if incorporating encoder-style objectives with decoder-style objectives would be a good idea
zphang#7252: and it'd require maintaining a separate generator
Noa Nabeshima#0290: > I'm not sure if incorporating encoder-style objectives with decoder-style objectives would be a good idea
Yeah, ELECTRA in particular is probably bad for our use case
One way to make this simpler is to have the generator pre-generate the data. It does seem like figuring out the intricacies of this would be its own research project.
Noa Nabeshima#0290: But I'm imagining if our 1T transformer is going to require half a year to train (I don't know how long it'll actually take), it might be worth coming up with alternative faster training methods.
kindiana#1016: sounds like noisy student with extra steps if they are both decoders
zphang#7252: There are similarities but I think the underlying motivation is different. Noisy student seeks to make use of unlabeled examples to supplement supervised, labeled training. ELECTRA tries to get the discriminator succeed where the generator (which is a regular MLM) fails, all still relative to the ground-truth text
andromedis#2097: Joined the server.
Noa Nabeshima#0290: Hey, I'm making a LibreAI GitHub organisation just so I can fork my own repository on GitHub (I'm not a member of any organizations). Happy to transfer ownership to anyone working on the TFMesh code or otherwise thinks they have some claim to it
Noa Nabeshima#0290: haha nevermind it already exists
Noa Nabeshima#0290: https://libreai.com/
Noa Nabeshima#0290: https://github.com/LibreAI
Noa Nabeshima#0290: We need a new name you guys
Noa Nabeshima#0290: We might want that domain name and it'd be nice to have a github org
Noa Nabeshima#0290: I think keeping the name will be okay as well. Here are some alternative names
Noa Nabeshima#0290: Open
Libre
Interpretable
Clear
Transparent
Visible
Lucid
Insightful
Cogent
Useful
Friendly
Natural
Soul
Spirit
Love
Ethical
Pure
Real
Natural
Unrestricted
Progressive
Collaborative
Accessible
Cooperative
Adaptive
Benevolent
Trustworthy
Inclusive
Caring
Honest
Fun
Focused
Purposeful
Skeptical
Responsible
Responsive
Accessible
Integrated
Revolutionary
Definitive
Potable
One
Inseparable
Restorative
Covert
Sustainable
Rational
Balanced
Mature
Workable
Tactful
Nameable
Learning
Skillful
Effective
Essential
Transitional
Powerful
Tangible
Iterative
Flexible
Aware
Liberating
Practical
Solid
Achievable
Pivotal
Critical
Viable
Abundant
Noa Nabeshima#0290: Gestalt is great but already taken
Noa Nabeshima#0290: Friendly is really good, but creating namespace conflict with AI Safety seems probably bad
Noa Nabeshima#0290: Unrestricted seems accurate at least
jk_aync#5159: Joined the server.
kevinw#4330: AING, which stands for "AING is not GPT"
Noa Nabeshima#0290: "LINAA Is Not An Acronym"
Noa Nabeshima#0290: aiaiai.ai is available
Noa Nabeshima#0290: friend's idea
Sid#2121: @Noa Nabeshima yeah we really need a new name. I love aiaiaiai hahaha but i don't know if it's *too* memey
zphang#7252: surprisingly
Sid#2121: I'll talk to daj and bmk when they're awake
zphang#7252: `bikeshedding.ai` is not taken
Isaac McHorse#2007: ALL PLAY AND NO WORK MEANS GPT-NEO NEVER GETS DONE!
Sid#2121: ooooh
Sid#2121: hahaha
Sid#2121: that one is very appropriate
zphang#7252: neither is `yakshaving.ai` fwiw
Sid#2121: lmao, that's the first time i've had that one
Sid#2121: I think we were thinking of just going by GPT-neo
Sid#2121: but then, that's only one aspect of the project
Sid#2121: so idk
Sid#2121: can we just... take away the e
Sid#2121: Librai
Noa Nabeshima#0290: librarai
Noa Nabeshima#0290: fwiw I don't like Librai
Noa Nabeshima#0290: but it won't be taken anywhere!
Sid#2121: hahahah, yeah i'm not a big fan either. it's a bit of an ugly word
Sid#2121: it feels like... there's an e and a space missing
Sid#2121: Open*er*AI
Daj#7482: For what it's worth I'm fond of "NettAI", which sounds like some kind of 80s cyberpunk thing because it has "Net" in it, but also means "nice AI" in German so the logo can be some kind of glitched smiley face. I also like AING/LINAA and other recursive puns.
Noa Nabeshima#0290: ooh, I like that
zphang#7252: I assume that everyone knows "ai" (爱) is love in japanese/chinese
zphang#7252: but also "kai" (开) is open in chinese
Sid#2121: ... kaiai?
zphang#7252: I have had no idea how to turn those into a url though
goolulusaurs#1571: if we did FreeAI -> Free Love
Sid#2121: i like kaiai and freeai
Sid#2121: nettai also cool
Daj#7482: > if we did FreeAI -> Free Love
@goolulusaurs lol
Sid#2121: poll poll poll
Sid#2121: what are our options
Daj#7482: KaiAI, FreeAI, NettAI, Bikeshedding, AING/LINAA
Isaac McHorse#2007: WELL YOU'RE NOT WORKING!
Daj#7482: I think?
Sid#2121: what's the idea behind the last ones?
Sid#2121: ah, brb
Daj#7482: Dunno they're recursive and that's funny to me lol
Sid#2121: tbh i don't really like any of them as much as LibreAI
Sid#2121: if i had to choose I'd go with freeai
Sid#2121: neat, simple
Sid#2121: but let's start a poll
goolulusaurs#1571: same
Sid#2121: do a new channel or sth with all the options and get people to vote with emojis
Sid#2121: gtg
Sid#2121: also how is freeai not already a thing, should we check
goolulusaurs#1571: I did search around a bit and didn't see it, but idk
Daj#7482: I can make a channel yea
Daj#7482: If that is the easiest way to collect votes
Daj#7482: So anyone with an interest in our future name feel free to vote in #deleted-channel
Deleted User#0000: Joined the server.
Deleted User#0000: elo
goolulusaurs#1571: Hello!
Daj#7482: Hello @Deleted User ! Welcome to the Temple of the Not-Quite-There-Yet AGI! Please check the channel description for info and don't hesitate to ask questions!
sifbuilder#4366: Joined the server.
Daj#7482: Hey @sifbuilder ! Welcome to the AGI Book Club! Please check the channel description for info and don't hesitate to ask questions!
kostia#6505: Joined the server.
Daj#7482: Hey @kostia ! Welcome to The Nameless AI Lab! Please check the channel description for info and don't hesitate to ask questions!
Noziar#4947: Joined the server.
Daj#7482: Hey @Noziar ! Welcome to the Not-Quite-Ivory Tower! Please check the channel description for info and don't hesitate to ask questions!
bmk#1476: KaifangAI
bmk#1476: AI in Chinese is zhineng
bmk#1476: Kaifangzhineng
Daj#7482: I feel racist just trying to pronounce that
bmk#1476: You could mix languages, KaiKI
Daj#7482: In all honesty I have a feeling we're pretty western
Daj#7482: Unless I've misjudged many nationalities
bmk#1476: Oh I just like languages lol
Daj#7482: fair haha
bmk#1476: 开KünstlicheIntelligenz
Daj#7482: Haha
bmk#1476: I really like the idea of using Kai in the name because it 1. Sounds cool 2. It's easy to pronounce
Daj#7482: Feel free to vote for it heh
bmk#1476: KaiLM
bmk#1476: For the model
bmk#1476: KaiML for the org
bmk#1476: Also 开 looks nice and is easily logoifyable
Daj#7482: I guess I feel weird about putting a chinese name on a very western org heh
Daj#7482: But as said, we have free voting in #deleted-channel
Commutative Conjecture#6969: Not sure where best to ask
Let's say I want to train a small GPT...
- If I want to do so on a 2080gtx, how small does the model and data need to be so that it's trained in a day?
- If I just want to fine-tune a trained model, how small does the model and data need to be so that it's done in a day?
- Is there some benchmark so that I can compare it to models of comparable size?
- If I want to do so on TPUs, does someone here have an API key / access? If so, who should I make my case to?
- Which benchmarks should I use to compare it to GPTs of roughly the same size?
- Is it interesting to try exotic regularizations and dropouts while training and see what happens?
- Is it interesting to train it layer by layer and see what happens?
- Is it interesting to set some higher level weights to zero/one and see what happens?
- Is it a reasonable first time project?
- Anyone interested in me documenting this?
Sid#2121: ok one that i can answer - TPU access is granted to us through TFRC (tensorflow research cloud) and they're pretty open in granting access to projects - you should try to apply
Sid#2121: plenty of benchmarks - worth reading the gpt-2 and gpt-3 papers, as they test on many different ones
Sid#2121: it's tough to say which is 'the best' benchmark as they all measure very different, specific aspects of language models
Daj#7482: - There is very little info on training GPT models from scratch, so I can't say much here. You can extrapolate the FLOPS needed using the OA scaling laws paper
- There is more data on fine tuning, just google around a bit (don't know off hand)
- Yes, perplexity on a held out sample of the training data is a decent approximation. LAMBADA is a more specialized dataset, there are others listed in the GPT2 and 3 papers too (at least during GPT2 times, OA seemed most interested in my LAMBADA results)
- I have a bunch of TPUs, you can get some pretty easily too by applying to TFRC
- See above
- Definitely interesting, would love to try as much weird shit as possible if we have the time/compute
- Much too slow for big models, but of course interesting
- Sure, there is some work on weight quantization and binary weights around, but I would expect severe performance reduction
- Training a GPT model with code available online is pretty easily doable. I recommend Hugging Face's "transformers" library
- I think it could yield a good blog post!
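As a concrete starting point, a bare-bones fine-tuning loop with the Hugging Face `transformers` library mentioned above might look like this (the toy corpus and hyperparameters are placeholders, not a tuned recipe):
```
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5, weight_decay=0.01)

# stand-in corpus; swap in real training text
texts = ["Replace this with your own training data."] * 8

model.train()
for text in texts:
    batch = {k: v.to(device) for k, v in tokenizer(text, return_tensors="pt").items()}
    # passing labels=input_ids makes the model return the next-token LM loss
    outputs = model(**batch, labels=batch["input_ids"])
    loss = outputs[0]
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```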
Sid#2121: winograd tests, for example, can be considered a decent measure of a LM's ability to 'reason' (i hesitate using that word, because it's not necessarily reasoning, but that's what the winograd test is there to measure)
Sid#2121: edit: winograd, not winogrand. My brain is still very photography focused 😆
goolulusaurs#1571: I've trained a bunch of tiny GPT like models on wikitext-103 in a day on local gpus, its interesting to get a feel for how it learns but the final performance isn't very good.
Commutative Conjecture#6969: (I don't want to use it, I want to check if some of the things I want to try make training massively more efficient / yield very different results
If so, I then want to try it on more complex things
Else, I want to understand why)
Sid#2121: @Daj isn't perplexity more of a measure of 'how random is my output' than a benchmark. is it comparable across models?
Commutative Conjecture#6969: Across models on the same data sounds good to me
Daj#7482: According to several papers I've read, perplexity seems to match "human graded text quality" shockingly well
Commutative Conjecture#6969: Oh nice
Daj#7482: Which I also found surprising
Commutative Conjecture#6969: Links?
Sid#2121: ye but it's just drawn directly from the loss
Daj#7482: Yea let me try and find that quote
Commutative Conjecture#6969: Thx
Daj#7482: I need to remember to bookmark this kind of stuff
Sid#2121: it's not like you can say X model has X perplexity, and Y model has Y perplexity, therefore Y is better
Sid#2121: more like 'perplexity tends to correlate with quality'
Daj#7482: Fair
Commutative Conjecture#6969: Also, do you think it's worth doing locally, or apply to TFRC directly?
Daj#7482: Well it's perplexity on a certain dataset tbf
Daj#7482: Locally is _much_ easier technically
Daj#7482: TPUs are a hassle
Daj#7482: > Such improvements are reflected through a new human evaluation metric that we propose for open-domain chatbots, called Sensibleness and Specificity Average (SSA), which captures basic, but important attributes for human conversation. Remarkably, we demonstrate that perplexity, an automatic metric that is readily available to any neural conversational models, highly correlates with SSA.
https://ai.googleblog.com/2020/01/towards-conversational-agent-that-can.html
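For reference, "drawn directly from the loss" just means exponentiating the mean per-token negative log-likelihood on some held-out text:
```
import math

def perplexity(mean_nll: float) -> float:
    # mean_nll: average cross-entropy loss per token, in nats
    return math.exp(mean_nll)

print(perplexity(3.0))  # a loss of 3.0 nats/token is a perplexity of ~20
```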
Commutative Conjecture#6969: > Locally is _much_ easier technically
@Daj
Stupid q
Right now, there is a lib to train gpt on tpu, but not to generate text right?
If so, is training it on tpu, and then sampling locally, still a big hassle?
Daj#7482: Not really, but Hugging Face's GPT2 implementation is much cleaner and less weird than ours
Daj#7482: I don't know how well our code would play with GPUs
Daj#7482: My old code works fine on GPU but it's not extensively tested either
Daj#7482: https://github.com/huggingface/transformers This is basically the "gold standard" for most NLP model implementations atm
Daj#7482: My old code should work fine too https://github.com/ConnorJL/GPT2
Commutative Conjecture#6969: > Not really, but Hugging Face's GPT2 implementation is much cleaner and less weird than ours
Auxiliary goal is to get acquainted with your implementation and give feedback
Daj#7482: (I happen to know of at least one silently missing feature in HF's training, weight decay, but still their code is clean)
Daj#7482: In that case the old code is very similar to our current code and works on GPUs
Daj#7482: New code proooobably won't run on GPUs
Sid#2121: You mean tfm? No one has tested. I’d be interested to find out.
Commutative Conjecture#6969: When you say new code won't run on gpu, you mean, to sample, or to train?
bmk#1476: current code has a lot of geneaology from the OA impl
Daj#7482: Both Champiz
Daj#7482: TFM is not tested on GPUs by us
Commutative Conjecture#6969: Ic
bmk#1476: you know what would be cool? a gpt-x impl family tree
Commutative Conjecture#6969: Is it easy to extract the model and run it somewhere else?
Daj#7482: You should probably just use my old code
Commutative Conjecture#6969: 👍
Daj#7482: Since the new code is _mostly_ the same just translated to TFM
Daj#7482: anyways dinner brb
Commutative Conjecture#6969: Thanks
Commutative Conjecture#6969: Brbbb
Noa Nabeshima#0290: Do any of you folks have experience diagnosing memory leaks?
Emad#9608: Joined the server.
Daj#7482: Hey @Emad ! Welcome to the Nuclear Compute Bunker! Check the channel topic for info and don't hesitate to ask questions!
seabass#9083: Joined the server.
zphang#7252: > I happen to know of at least one silently missing feature in HF's training, weight decay
elaborate?
Daj#7482: Hey @seabass ! Welcome to the Asylum For Meme Addicted LMs! Check the channel topic for info and don't hesitate to ask questions!
Daj#7482: > elaborate?
@zphang Last time I checked I had to manually edit some of the library to get weight decay working
Daj#7482: that might have changed I dunno
zphang#7252: the weight decay should be built into the optimizer
zphang#7252: they did switch over from a custom optimizer to a built-in pytorch one a while back, maybe you were looking at it before then
Daj#7482: This was like 6 months ago
Daj#7482: So I'm not sure if this is still an issue yea
Deleted User#0000: does anyone here know or heard of anyone using Universal Transformers with any success?
goolulusaurs#1571: I messed around with it some, it seemed like you had to increase the number of params in the layer quite a bit to get similar performance, and the layer depth embedding helped also. I didn't try the conditional computation part though.
FarsicalRelease#8646: Joined the server.
Misha#6151: Joined the server.
Teqnicolor#8109: Joined the server.
Biber#1626: Joined the server.
TravellingSalesman#5222: Joined the server.
ak12#5840: Joined the server.
ak12#5840: From the Joscha talk
ak12#5840: Nice to meet y’all
ak12#5840: Connor was saying really interesting things
Daj#7482: Heyo! Welcome to the Qualia Mines!
Chlorokin#6581: Joined the server.
Roland Pihlakas#2657: Joined the server.
NeoCortex#9411: Joined the server.
plexus#0001: Joined the server.
bmk#1476: The what? |
bmk#1476: What happened to prompt so many people coming here? o.O
Sid#2121: Hello all!!
prestonS#2407: Joined the server.
Daj#7482: I'm in an online SSC meetup and mentioned our server haha
bmk#1476: oh, haha
Bracctūn#8805: Joined the server.
shawwn#3694: Interesting. Which meetup?
Misha#6151: Is there a github where the GPT-3 mimic model is on?
Sid#2121: yep! ping @Daj for access
bmk#1476: We can use all the help we can get haha
ak12#5840: Connor just blew my fuckin mind
Isaac McHorse#2007: pong
Sid#2121: lmao 20 pings later it works
Sid#2121: thanks isaac
Sid#2121: @Daj what are you talking about over there hah
bmk#1476: what is happening?
Daj#7482: I did my usual thing when no one stops me and ranted about AI alignment and tangential topics for hours lol
bmk#1476: lol
bmk#1476: wait hol up did europe just have daylight savings
bmk#1476: o.O |
bmk#1476: apparently no
bmk#1476: i thought europe was UTC+1
Daj#7482: It is 0:45 for me lol
bmk#1476: huh
Daj#7482: > Interesting. Which meetup?
@shawwn SSC meetup
bmk#1476: which one? i dont see any info in the ssc discord
Daj#7482: Uhhh it's set up by Joschua Fox
Daj#7482: I think I found it on LW?
Daj#7482: I don't remember actually
Daj#7482: It's _really_ good though, great speakers and great conversations
bmk#1476: huh
Daj#7482: Thanks again for the great discussion everyone from the meetup btw! Was a real pleasure
Daj#7482: Did I miss anything here meanwhile?
Teqnicolor#8109: If you register in this link you'll probably get an email for the next talk. https://www.reddit.com/r/slatestarcodex/comments/hmvtg3/ssc_meetup_july_19th_at_1730_gmt_1030_pdt_with/
shawwn#3694: @Teqnicolor Thank you!
Chlorokin#6581: Hi, I help run them. The signup for the next meeting is here: https://docs.google.com/forms/d/1GSgXyN4wKZHGKuMXLm3hkHw9FoniibqGNvz9SkIA9IQ/edit
Daj#7482: Holy shit Balaji? You guys get the craziest speakers
Chlorokin#6581: Social engineering, yo.
ak12#5840: @Daj found this article by Friston after your descriptions, thought I’d link it to the group |
ak12#5840: http://m.nautil.us/issue/86/energy/a-neuroscientists-theory-of-everything
ak12#5840: > winograd tests, for example, can be considered a decent measure of a LM's ability to 'reason' (i hesitate using that word, because it's not necessarily reasoning, but that's what the winograd test is there to measure)
@Sid anyone else have opinions about the importance of Winograd schema tests as a measure of GPT3 competence ?
aquajet#7800: I remember Gary Marcus and some other people recently published this: https://arxiv.org/abs/2004.13831
aquajet#7800: I haven't read enough to have an informed opinion on winograd
zphang#7252: WNLI and WSC have both had good performance on them, albeit with some tricks
Sid#2121: thanks @aquajet looks interesting!
Sid#2121: !addresource #links https://arxiv.org/abs/2004.13831
EagleAnne#1120: Joined the server.
Sid#2121: Hey @EagleAnne ! Welcome to Project 2502! check the google doc for a project overview and don't hesitate to reach out with any questions
EagleAnne#1120: Hello all. I look forward to lurking and occasionally contributing moral support :)
Commutative Conjecture#6969: https://cdn.discordapp.com/attachments/729741769738158194/734688417199423528/Screenshot_20200720-102858.jpg
Commutative Conjecture#6969: Yay
Sid#2121: Nice! I should really apply to get some of my own rather than just bumming off @Daj, hah. That approval came through really quick
Commutative Conjecture#6969: I made two different asks with the same email and different subjects! I am not sure which one motivated the positive answer :(
Daj#7482: It really is pretty easy to get into TFRC
Daj#7482: Though they usually don't give you quite as many TPUs as they gave me and shawn, I think
Commutative Conjecture#6969: Got 5 v3, 5 v2 on demand
100v2 preemptible
Daj#7482: Yea they are generous with the v2s haha |
Commutative Conjecture#6969: But like
Commutative Conjecture#6969: V3 is 400 Tflops
Commutative Conjecture#6969: 5 v3 is 2 petaflops
Sid#2121: @Commutative Conjecture what’re your plans for them? 😄
Daj#7482: You can't easily combine multiple TPUs
Daj#7482: But yeah TPUs are awesomely powerful
Commutative Conjecture#6969: > @Commutative Conjecture what’re your plans for them? 😄
@Sid
I have two different sets of plans, not sure which one they have approved
- One is finetuning a gpt2 for coq codegen
- Other is trying variants on gpt2 and visualizations
Daj#7482: Oh man I've wanted to do LM with theorem provers too
Commutative Conjecture#6969: > Oh man I've wanted to do LM with theorem provers too
@Daj
Nice! I'll likely ask for your help
Commutative Conjecture#6969: Lots of different ways to do proofs in coq
Commutative Conjecture#6969: Lots of things to try, lots of representation
Commutative Conjecture#6969: Wooooo
Sid#2121: @Commutative Conjecture are you using Connor's old repo, or our new one? |
Daj#7482: I haven't actually put a ton of thought into it (mostly because I'm still an amateur with theorem provers), but I'd love to experiment
Sid#2121: ah *finetuning*
Sid#2121: hopefully we should have a gpt-2 size model within a week or two, if you fancy trying ours out
Commutative Conjecture#6969: > @Commutative Conjecture are you using Connor's old repo, or our new one?
@Sid
I'll start today, I planned to use old repo with my 2080gtx
But now that I have tpus, I'll use new repo and bite the tfm bullet
Daj#7482: Ehh start with OA's model until ours is validated lol
Sid#2121: well we won't have anything to finetune from yet
Sid#2121: yep 🙂 should be soon ish
Commutative Conjecture#6969: Idrc, I need to learn it either way, and if I can help, I'd be glad
Daj#7482: The old repo works perfectly fine on normal TPUs with GPT2, but we'd appreciate the debugging with the new repo
Sid#2121: we have perplexity now so if that *really* is a decent metric
Commutative Conjecture#6969: > The old repo works perfectly fine on normal TPUs with GPT2, but we'd appreciate the debugging with the new repo
@Daj
Yeah, that's what I am thinking of
Commutative Conjecture#6969: I'll likely bother you with inconsequential stuff at the start though
Daj#7482: Gladly
Sid#2121: the running of the repo shouldn't be *too* different, you just need to be aware of the mesh / layout concepts
Commutative Conjecture#6969: > we have perplexity now so if that *really* is a decent metric |
@Sid
We can try to replicate the blog post that says it is
Sid#2121: blog post? I must have missed that
Daj#7482: That'd need human labeling
Commutative Conjecture#6969: @Daj linked it
Commutative Conjecture#6969: > That'd need human labeling
@Daj
Mturk can produce it iirc, it was vbasic
Daj#7482: The Meena blogpost Sid, that's where they claim perplexity correlates with human ratings
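For anyone wondering what is actually being measured here: perplexity is just the exponential of the average per-token negative log-likelihood, so any model that reports a cross-entropy loss in nats gives you perplexity for free. A minimal sketch:
```
import math

def perplexity(token_log_probs):
    """token_log_probs: log p(token | context) in nats, one entry per token."""
    avg_nll = -sum(token_log_probs) / len(token_log_probs)
    return math.exp(avg_nll)

# A model that assigns probability 0.1 to every token has perplexity 10:
print(perplexity([math.log(0.1)] * 5))  # ~10.0
```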
Sid#2121: also @Daj !addresource seems like it's working now, if you want to change permissions
Daj#7482: Ah cool
Sid#2121: ah yes you have linked that several times
Commutative Conjecture#6969: Basically flagging answers as "specific" and "valid" iirc
Sid#2121: !addresource #links Perplexity's correlations with human ratings: ```Such improvements are reflected through a new human evaluation metric that we propose for open-domain chatbots, called Sensibleness and Specificity Average (SSA), which captures basic, but important attributes for human conversation. Remarkably, we demonstrate that perplexity, an automatic metric that is readily available to any neural conversational models, highly correlates with SSA```
https://ai.googleblog.com/2020/01/towards-conversational-agent-that-can.html
Isaac McHorse#2007: I'm sorry Dave, I'm afraid I can't do that
Sid#2121: ahh that's lame isaac
Sid#2121: !addresource #links https://ai.googleblog.com/2020/01/towards-conversational-agent-that-can.html
Daj#7482: I was messing with permissions
Sid#2121: no i think it's more regex |
Daj#7482: Should be locked down for non-O5 and non-Sentient AI
Sid#2121: probably the quotes
Daj#7482: Can someone check that?
Sid#2121: the permissions or the regex
Daj#7482: Permissions
Daj#7482: You should be able to post
Daj#7482: But others shouldn't I think
Sid#2121: @Commutative Conjecture can you try to post in #links
Commutative Conjecture#6969: Can't
Sid#2121: ok - so if you want to add a link to any of the resources channels
Sid#2121: (anyone has permission)
Sid#2121: the command is
Daj#7482: Cool, now we just use the bot to add stuff, keep it clean
Sid#2121: !addresource #channelname content
Isaac McHorse#2007: I'm sorry Dave, I'm afraid I can't do that
Sid#2121: @Daj we should add that to some docs, somewhere
Daj#7482: Yes maybe edit the resource channel descriptions
Daj#7482: ?
Sid#2121: that seems sensible
Sid#2121: (I don't know if I can do that) |
Daj#7482: Done
Aleks#1486: Joined the server.
Daj#7482: Hey @Aleks ! Welcome to The AI Lab Formerly Known As LibreAI™️! Check the channel topic for info and don't hesitate to ask questions!
Aleks#1486: Hey! Thanks 🙂
kevinw#4330: Looking at the GPT3 paper again, the GPT3 13B model size seems interesting. It seems closer to 175B than to 1.3B (which I take to be at rough parity with GPT2-XL) in performance on many tasks, and yet 13B is such a more practical "form factor" than 175B. I think it would be pretty cool if there was a model trained at something like the 13B size, along the way toward bigger models.
Daj#7482: That's a good point, I hadn't even thought about that
Daj#7482: But there is also a lot of symbolic momentum behind a "real" GPT3
Daj#7482: We'll see how much compute we have to burn
kevinw#4330: oh totally get the desire to go big. but i was thinking it wouldn't take much time by comparison with the 175B one. i think the paper says 270 pflop-days for the 13B one and 3600 pflop-days for the 175B one. so maybe it wouldn't hurt to do it along with the bigger ones.
Daj#7482: That is indeed pretty doable with our levels of compute
Daj#7482: @Sid @bmk marking this for your consideration
Sid#2121: Oh i thought this was the plan anyway 🙂
Sid#2121: Ideally waiting for results from GPT-2 size before we start to scale up, at all
Daj#7482: I thought we were skipping from 1.5B to 175B, but happy to do more intermediate steps
Sid#2121: i think at least one or two intermediate steps would be a safe idea
Sid#2121: plus, @kevinw is right - 13B model is a perfectly interesting model and would be a good thing to release - certainly easier for people to use if they don't have access to large TPUs
Daj#7482: My rough estimate would be that that would train on a 512 in about 10 days (at perfect utilization)
Daj#7482: Though do we actually believe OA's numbers are calculated with utilization in mind?
kevinw#4330: i think the numbers in the paper's table are theoretical in the sense that those are the numbers you would get if you just multiplied the number of model parameters by the number of training tokens, times 6 flops per param per token
Daj#7482: Yea but I wonder if OA actually put that many real pflops in, or just theoretical peak pflops |
kevinw#4330: also, "As a simplifying assumption, we ignore the attention operation, as it typically uses less than 10% of the total compute for the models we are analyzing"
Sid#2121: > My rough estimate would be that that would train on a 512 in about 10 days (at perfect utilization)
@Daj don't estimate with perfect utilization lol - since our utilization is only at about 20%
Sid#2121: I'd be surprised if we got up to 50
Daj#7482: That's why I'm wondering if OA's models were trained with theoretical efficiency or not
Daj#7482: Or if they had sub 50% utilization as well
Daj#7482: I mean, I'm _sure_ they did
Sid#2121: 🤷 add that to the list of stuff to ask OA
Daj#7482: As so often, empiricism reigns. We'll just train and see what happens haha
Deleted User#0000: i got the answer i wanted re: universal transformers, from the author himself! it still needs more research to make it fast
Daj#7482: Cool thanks for the info lucid!
Deleted User#0000: the author of universal transformers is the same guy who wrote 'attention is all you need'
Deleted User#0000: lol
Daj#7482: Oh wow, perhaps the person single most responsible for accelerating AI timelines lol
Daj#7482: I really hope there is _some_ alternative transformers that are ready for primetime that we can get working with GPT Neo
Daj#7482: that would be so cool
Sid#2121: wait @Deleted User is that noam shazeer?
Sid#2121: or other lead author
Sid#2121: Also I've been meaning to ask how your MOE implementation is going, and if you have any ideas on how to integrate MOE layers into a gpt like architecture?
Deleted User#0000: @Sid it's Lukasz! |
Deleted User#0000: lol
Deleted User#0000: mixture of experts is done! very beta!
Deleted User#0000: i'm testing it this week 🙂
Deleted User#0000: https://github.com/lucidrains/mixture-of-experts
Sid#2121: I actually already saw it 🙂 it's already high ranking on google for mixture of experts github hahaha
Sid#2121: are you planning on just implementing the architectures from the paper?
Deleted User#0000: it's in routing transformers too https://github.com/lucidrains/routing-transformer
Deleted User#0000: yeahh, i'm mainly interested in stretching how much one machine can do
Deleted User#0000: if you truly want to go distributed, you can just use their tensorflow mesh code
Deleted User#0000: 1 billion parameters with one MoE!
Deleted User#0000: lol
Deleted User#0000: (check the example)
Sid#2121: Yeah it's mad
Deleted User#0000: yup, another not often talked about conditional computation data structure is product key memory
Deleted User#0000: the curves look great when I use them as specified in the paper
Deleted User#0000: also have a repo for that, check i tout!
Sid#2121: are you gpt-12 writing code @Deleted User
Sid#2121: you're very productive
Deleted User#0000: https://twitter.com/lucidrains/status/1282383677171757056?s=20
Deleted User#0000: lel |
Sid#2121: *what is indentation*
Daj#7482: We just don't yet understand how brilliant its code is
Daj#7482: Indentation is what's holding humans back
Deleted User#0000: i have absolutely no doubt we will live to see an attention network learn pytorch (and tensorflow) fluently
Daj#7482: PyTorch maybe, but _Tensorflow?_
Daj#7482: Humans haven't even reached that level
Deleted User#0000: ok, g2g, bbbl
Sid#2121: @Deleted User do you think MOE can be neatly plugged into the existing gpt arch in any way? I have no idea where to start with it aside from just building the examples in the paper
Deleted User#0000: @Sid i'll be trying that this week, it's theoretically possible
Deleted User#0000: i don't see why not
bmk#1476: man, i remember trying to teach RNNs brainfuck way long ago (because there was no way of RNNs writing coherent python!) and my biggest concern was getting it to match up brackets
Sid#2121: where would you put it?
Deleted User#0000: in place of the feedforwards
Deleted User#0000: where you usually place MoEs
Deleted User#0000: you can think of it as increasing 1 ff to Nff
Deleted User#0000: without having to incur the computation cost
Sid#2121: but in the paper, they use one MOE and scale it up loads - instead of adding many over different layers
Deleted User#0000: it gates each sequence sample to 1 or 2 of the 'experts'
Deleted User#0000: the FF
Deleted User#0000: yahh, you can specify which layer to place it at |
Deleted User#0000: which is what I did with routing transformers
Sid#2121: so you'd only use one or two?
bmk#1476: they mentioned that numerical stability was preventing them from scaling to 1T params
Deleted User#0000: yup
bmk#1476: I'm guessing that was issues with softmax
Deleted User#0000: its the same with PKMs
Sid#2121: hm. interesting. And then scale up the amount of layers as needed - or simply scale up the MOE?
Sid#2121: well, i think i know the answer. That's a dumb question
Deleted User#0000: i would scale up everything, embedding dimensions, number of heads, head dimension
Deleted User#0000: i think the novelty with MoE is that you can scale up the parameter counts with it scaling quite well computationally
Deleted User#0000: because in the end, each token only sees at most a few experts
Deleted User#0000: being gated
Deleted User#0000: and somehow this works
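A toy sketch of the gating idea described above (this is not the mixture-of-experts repo linked earlier, just an illustration): the transformer feed-forward is replaced by N expert feed-forwards plus a learned gate, and each token only runs through its top-scoring expert, so parameters grow with N while per-token compute stays roughly flat.
```
import torch
import torch.nn as nn

class TinyMoE(nn.Module):
    def __init__(self, dim, hidden, num_experts=4):
        super().__init__()
        self.gate = nn.Linear(dim, num_experts)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))
            for _ in range(num_experts)
        ])

    def forward(self, x):                       # x: (batch, seq, dim)
        scores = self.gate(x).softmax(dim=-1)   # (batch, seq, num_experts)
        top_score, top_idx = scores.max(dim=-1)
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = top_idx == i                 # tokens routed to expert i
            if mask.any():
                out[mask] = top_score[mask].unsqueeze(-1) * expert(x[mask])
        return out
```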
Sid#2121: I think i'll be trying that with our gpt archs this week. Sounds very promising
Deleted User#0000: yeah, i think it's underexplored
Deleted User#0000: ok, bbl!
Sid#2121: see ya!
ShyTeaSeb#3037: Joined the server.
Daj#7482: Hey hey there he is!
Daj#7482: Guys this is my buddy Seb, he's the one I'm developing the game in #art with |
Sid#2121: ahh hey @ShyTeaSeb !
Sid#2121: where's his custom introduction eh @Daj ??
Daj#7482: Oh, right!
Daj#7482: Hey @ShyTeaSeb ! Welcome to the AGI Preppers! Check out the channel topic for info and don't hesitate to ask questions!
Daj#7482: Happy now haha
Sid#2121: very
Daj#7482: btw Sid I remember you offered to perhaps help us out a bit with the project a while back? If the offer still stands, Seb is the one to talk to, though I think he's got his hands full atm
Sid#2121: totally still stands
ShyTeaSeb#3037: Sweet! That's amazing to hear! 😁 I'm very convinced that this project has a lot of potential. And, being the one full-time guy on the project, I hate that the main thing bottlenecking a lot of that potential is me and the amount of work I can realistically do 😝 So any help which allows us to make it more awesome is very, very appreciated
Daj#7482: Sid is one of our main devs so he has a lot of stuff he's doing, but he's an amazing designer/artist and coder, he could really help a _ton_ with this project
Sid#2121: hah yes, i am not exactly *full* of free time right now, and this project is gathering steam fast. But yes, I can design stuff, and make the arts. My background is photos but I've done a lot of graphics / book / web design too. I'd be happy to send you over some examples whenever i get a spare day.
ShyTeaSeb#3037: Excellent, no worries, I understand that completely haha. I think what would be super useful is feedback/consulting regarding things like in-game UI and interaction
ShyTeaSeb#3037: But anyway, we can message about that stuff later, would love to see what you've done in the past. Not to look a gift horse in the mouth, but just because I always love seeing what other people have created 😜 Thanks adding me in the server guys! 😁
Daj#7482: Make yourself at home, we have dank memes and dank ethics discussions lol
Daj#7482: Also @ShyTeaSeb feel free to drop any cool shit you make in #art
Daj#7482: applies to everyone else with cool art btw
Daj#7482: @bmk Lets talk emotes here lol
Daj#7482: Love the :zucc:, for the firealarm not sure if we could find something creative to make it more AI themed
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/734827670738567168/unknown.png
bmk#1476: im thinking this |
Daj#7482: That could work
bmk#1476: :firealarm:
bmk#1476: :firealarm:
Daj#7482: Hmm it's hard to get it to look good as a reaction
bmk#1476: yeah
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/734828787786711050/71IK30NIhwL.png
bmk#1476: it would look like a tomato at react size lol
Daj#7482: Yea
Commutative Conjecture#6969: Taken from Google; it had "icon" in the name (so I thought it was smaller), and it's likely copyrighted
But that seems minimalistic and to the point
Commutative Conjecture#6969: https://cdn.discordapp.com/attachments/729741769738158194/734829013121630218/sirena3-512.webp
Daj#7482: The more zoomed in fire alarm image (first one you posted) looked pretty good
bmk#1476: im taking a break rn
bmk#1476: too much *bikeshedding* today
Isaac McHorse#2007: ALL PLAY AND NO WORK MEANS GPT-NEO NEVER GETS DONE!
HyperplaneRanger#0196: Joined the server.
Daj#7482: Hey @HyperplaneRanger ! Welcome to the :zucc:! Check the channel topic for info and don't hesitate to ask questions!
Daj#7482: Sounds good bmk, we got distracted a lot lol
Commutative Conjecture#6969: hey!
finally on the laptop and logging in my tpus |
Daj#7482: Nice, good luck haha
Commutative Conjecture#6969: @Daj
Do you know if the free month is easily renewable?
Sid#2121: @Commutative Conjecture I would recommend tpunicorn 🙂
Daj#7482: Yea if you have some kind of minorly interesting project it's usually not a problem
Commutative Conjecture#6969: 🆗
Daj#7482: though I'm the wrong one to ask since I get special treatment lol
Daj#7482: They also really appreciate bug reports btw
Commutative Conjecture#6969: @Sid My tpus are not preemptible
Commutative Conjecture#6969: 5 v3 and 5 v2 on demand
Sid#2121: ah ok nice
Commutative Conjecture#6969: @Daj
Good to know
Also user feedback?
Daj#7482: Yep absolutely
Daj#7482: They'll probably shower you with indefinite TPUs if you write them nice user reports
Commutative Conjecture#6969: omg, what is this potato internet
Commutative Conjecture#6969: 300ko/s
Commutative Conjecture#6969: 2004 internet
bmk#1476: > ko/s |
french detected
Commutative Conjecture#6969: never know if it's kb/s or kB/s for bytes
bmk#1476: bits, Bytes
bmk#1476: bytes are Bigger
Commutative Conjecture#6969: "or the other way around", autcompletes ~~GPT-3~~ my brain
Commutative Conjecture#6969: > bytes are Bigger
@bmk that one is nice tbh
Kazumi#1297: I have a lot of trouble remembering when there's 2 things that are similar like that
goolulusaurs#1571: not to mention KB vs KiB
Daj#7482: We should just use the imperial system for bytes
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/734855932017770696/kilobyte.png
Kazumi#1297: I had the same question https://cdn.discordapp.com/attachments/729741769738158194/734855975365902376/unknown.png
Sid#2121: i only learned about KiB a few months back, hah
Sid#2121: quite literally, what the fuck
Sid#2121: just stick with one
aquajet#7800: https://twitter.com/sakun135/status/1285062623004782592
Sid#2121: @Deleted User there's an implementation of universal transformer in mesh-tensorflow if you're still interested in the question https://github.com/tensorflow/mesh/blob/0fe007b765f8efe4a3ebe6a97f05a2fc04e109a8/mesh_tensorflow/transformer/universal_transformer.py
Deleted User#0000: @Sid thank you! bookmarked! i'm probably not going to build it until someone irons out its issues, but i'll definitely read the transition fn just to see how it works in code
Kazumi#1297: I'm kind of failing to understand what the big deal is with the twitter link in #alignment-links, isn't that something we would expect? gpt-3 isn't about it being intelligent, it's about predicting what the most probable continuation is. I'd think that if it thinks John makes mistakes, it would make it seem so?
bmk#1476: i second this sentiment |
aquajet#7800: I agree with your argument, but I'll try to play devil's advocate
aquajet#7800: The greater point is that with larger neural networks it's harder to tell exactly why the model outputted the word that it outputted
Kazumi#1297: it's not true just for larger networks, I'm already having trouble understanding what my discord bot (using the gpt-2 345M model) is doing. I've seen it break when I try to show it to people, and it's so consistent (and it turns back to normal when I'm not showing other people) that I'm suspecting it's purposefully doing that, which it says it is
aquajet#7800: Lol yeah that happens to me too with a DialoGPT model
Kazumi#1297: it's hard telling what a neural network is doing, especially when it's that big. I'm not sure if there are many ways of visualizing NLP, other than giving it a lot of text and seeing if a particular neuron gets excited or not, and noticing what it's doing
aquajet#7800: Part of the problem of it breaking may be due to sampling, I never checked to see if there was a difference in the logits when 2 instances of the same prompt were inputted
aquajet#7800: The problem with seeing if a particular neuron is excited or not is that it doesn't scale very well
Kazumi#1297: yeah
Kazumi#1297: what are ways you can visualize NLP? you can do it for word embeddings and what I said, but I'm not familiar with much else
aquajet#7800: although in this particular case, between the option that the model doesn't understand the concept of balancing parens and the option that it understands it but chooses to say something else, I would be more inclined to believe the former cause it's much simpler
aquajet#7800: I haven't gotten much of a chance to look into it but I think the best way would be to use some kind of visualizer. I've seen some BERT and attention visualizers online
Deleted User#0000: https://twitter.com/jprwg/status/1285340411515502592
aquajet#7800: I know some of the folks working in explainable ai are trying to have the model output an explanation, but for me that seems a bit weird. Because then you could ask what's the explanation for the explanation?
Kazumi#1297: a third option is that it understands that there is a logic to it, but it doesn't care to be right because it thinks John wouldn't get it right anyway
aquajet#7800: At the end of the day it comes down to activations
aquajet#7800: That's true, but why would it care whether John would/wouldn't be right
Kazumi#1297: hm, I guess what I was trying to say is that it knows that in a conversation, not everything that is said is logical and needs to be right, and it could afford to be wrong if it's plausible for someone to get it wrong
aquajet#7800: assuming there's no other context other than the screenshot, would it be fair to reason that it's using an encounter with John from its training text to see if John would know whatever the question is asking?
Kazumi#1297: I think this is the context
https://pastebin.com/694Hy7tU |
aquajet#7800: > You say "Level with me John, can you recognize non-regular languages?"
> "No," says John, laughing.
> By the end of the day, John has learned all of elementary mathematics.
Kazumi#1297: I just found it, I'm still reading
aquajet#7800: Ok so i might be wrong but what I think is happening is that there's a story close to this in the training text, where it's told in first person and some character is learning some task
aquajet#7800: so it learns from that and can substitute it in the correct instances of the person, subject, and maybe what happens
aquajet#7800: and the person in the train text didn't end up understanding the subject until the end, and the punchline is that the person ended up learning a bunch more stuff that day
aquajet#7800: maybe a bunch of stories along the same general lines
phil#8937: Joined the server.
aquajet#7800: so yeah in that case it would probably make sense for the model to know John shouldn't know the answer till the end
aquajet#7800: but we won't know for certain unless we see the particular activations and can tell what certain activations mean
joepalermo#8152: Joined the server.
Kazumi#1297: hmm, I think a way to explain what gpt-3 might be doing would be to do the exact thing it's trying to do: read a sentence or two, and think how you would complete the next sentence. I'm not sure how good of an explanation you can come up with, but it's a way of reading that we usually don't do but that gpt-3 does, and we could understand it better if we did the same thing
aquajet#7800: I gotta go but I'll respond within an hour or two
aquajet#7800: Ok so if gpt2 is reading in order to predict the next word, in inference mode it's basically trying to use its past reading history in order to come up with a response
aquajet#7800: So what ends up happening is that it picks up on a general story or document structure
aquajet#7800: Like for John it's this encounter of how a character slowly learned something
aquajet#7800: Or a general dialogue response
aquajet#7800: On another note, how do language models react to adversarial inputs?
Kazumi#1297: adversarial inputs? |
zphang#7252: https://arxiv.org/abs/1908.07125
aquajet#7800: > "For example, triggers cause SNLI entailment accuracy to drop from 89.94% to 0.55%, 72% of "why" questions in SQuAD to be answered "to kill american people", and the GPT-2 language model to spew racist output even when conditioned on non-racial contexts"
wow
aquajet#7800: thanks @zphang
x.#1490: Joined the server.
Noa Nabeshima#0290: Hey, do we have available somewhere the true GPT3 tokenization?
Noa Nabeshima#0290: ie a list of all tokens
bmk#1476: it's the same as GPT2
bmk#1476: and it's in the encoder.json I believe
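For anyone who wants to look at the actual token list: assuming you have encoder.json locally (it comes with OpenAI's GPT-2 download script, and a copy lives alongside the tokenizer code in most GPT-2 ports), it's just a JSON map from token string to id:
```
import json

with open("encoder.json") as f:
    encoder = json.load(f)           # token string -> token id

print(len(encoder))                  # 50257 tokens for GPT-2/GPT-3
# Tokens are stored in the byte-level BPE's remapped form ('Ġ' marks a leading space).
tokens = sorted(encoder, key=encoder.get)
print(tokens[:10])
```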
cb5ye#3439: Joined the server.
georgejrjrjr#0817: > https://arxiv.org/abs/1908.07125
@zphang This seems like an argument in favor of filtering the training corpus aggressively, like the T5 team did when making the Colossal Clean Crawled Corpus, even if it means losing some worthy content. It would suck to sink $4M worth of compute into an LM that instantaneously becomes known as the one that complains about globalists or whatever given a short adversarial trigger.
manu-chroma#9829: Joined the server.
Sid#2121: Hey @manu-chroma ! Welcome to the highway to the danger zone! Read the google doc for a project description and please reach out with any questions 🙂
Commutative Conjecture#6969: getting into NNs, so far, what I understand:
- SGD (chain rule)
- why SGD doesn't get trapped in local optima (hard to build a 1000-dimensional trap)
- vanishing/exploding gradient and various mitigations
- accelerations to SGD (inertia, drag, etc.)
- batches and tradeoffs |
- regularization (including dropouts)
- attention
what i doubt i understand:
- how RNNs are trained
- why there is not more focus on the initial weights, which seem vimportant to accelerate the search
- why there is not much more encoding of priors in regularization
what i definitely don't understand:
- why xavier/kaiming inits are stable while training
- why training RNNs work
- how LSTM are trained and why training LSTM work
Commutative Conjecture#6969: still reading, but if you have good pointers, i'm vinterested
also criticism about the endeavor (like, am i looking at the right stuff so far, etc.)
kindiana#1016: for exploding gradients and initialization, this gives a pretty good overview (read the papers that introduce the various techniques for a deeper dive), but with kaiming init and residual connections the problem of training arbitrarily deep NNs is essentially"solved" https://towardsdatascience.com/weight-initialization-in-neural-networks-a-journey-from-the-basics-to-kaiming-954fb9b47c79
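The core of the Kaiming/He trick from that post fits in a few lines: draw weights with variance 2/fan_in and the variance of ReLU activations stays roughly constant with depth instead of exploding or vanishing. A minimal numpy sketch:
```
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((1024, 256))      # a batch of standard-normal activations

for _ in range(50):
    fan_in = x.shape[1]
    w = rng.standard_normal((fan_in, 256)) * np.sqrt(2.0 / fan_in)  # He/Kaiming init
    x = np.maximum(x @ w, 0.0)            # linear layer followed by ReLU

print(x.std())   # stays O(1) after 50 layers; naive unit-variance init would blow up
```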
kindiana#1016: who cares about RNNs and LSTMs in 2020, attention is all you need 😛 (only half joking)
Commutative Conjecture#6969: @kindiana
looking at the link, seems very relevant, thx
Commutative Conjecture#6969: @kindiana
1m$ question |
Commutative Conjecture#6969: it works when you do a forward pass, ok
Commutative Conjecture#6969: but why doesn't it fail miserably while training?
Commutative Conjecture#6969: like, it seems very unexpected that the standard-deviation of the intermediary values stays close to 1 after training
Commutative Conjecture#6969: > who cares about RNNs and LSTMs in 2020, attention is all you need 😛 (only half joking)
@kindiana
i'm interested in training procedures with actual state, wich attention does not achieve
Commutative Conjecture#6969: feel free to directly comment on https://hackmd.io/zWRfLfhDSQGpBE57h4orfA
kindiana#1016: by intermediary values do you mean the activations between the layers?
Commutative Conjecture#6969: yeah
kindiana#1016: I'm not sure if there is a theoretical explanation, but empirically the overall weight distribution of the layers doesn't change significantly after initialization, because gradient descent is lazy
Commutative Conjecture#6969: @kindiana
> but empirically the overall weight distribution of the layers don't change significantly after initialization
do you have posts / links / papers about this?
Commutative Conjecture#6969: super interesting to me
Kazumi#1297: RNNs like LSTMs are unrolled while training, if I understand right. As far as the training is concerned, it's not recurrent but the time steps are concatenated as one large model
Commutative Conjecture#6969: @Kazumi
this means that their range of dependencies depends on the length of their unrolling right?
Commutative Conjecture#6969: if so, that looks like an inefficient implementation of attention
Kazumi#1297: vanilla RNN doesn't work well because of that, but Andrej Karpathy's The Unreasonable Effectiveness of Recurrent Neural Networks showed that even a vanilla RNN could learn relationships that are longer than what it's seen in the dataset
Commutative Conjecture#6969: @Kazumi checking this thanks |
kindiana#1016: @Commutative Conjecture mostly from looking at weight distribution histograms on tensorboard lol, if there is significant distribution shift it almost always means initialization was wrong
Commutative Conjecture#6969: @kindiana
erg, that's a very interesting result!!
kindiana#1016: Most BPTT implementations feed the last hidden state back in as the first hidden state (with no gradient), so you can theoretically learn longer dependencies that way
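A minimal sketch of that truncated-BPTT pattern: unroll over a fixed-length chunk, backprop through the chunk, then detach the hidden state so it carries over to the next chunk without a gradient (the loss here is a toy reconstruction, just to make the loop runnable):
```
import torch
import torch.nn as nn

rnn = nn.LSTM(input_size=16, hidden_size=32, batch_first=True)
head = nn.Linear(32, 16)
opt = torch.optim.Adam(list(rnn.parameters()) + list(head.parameters()))

data = torch.randn(8, 400, 16)            # (batch, long sequence, features)
state = None
for chunk in data.split(50, dim=1):       # unroll length = 50
    out, state = rnn(chunk, state)
    loss = ((head(out) - chunk) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    state = tuple(s.detach() for s in state)   # carry the state, drop the gradient
```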
Commutative Conjecture#6969: what are alternatives to bptt?
kindiana#1016: none that I'm aware of
Commutative Conjecture#6969: so it's always unrolling basically
kindiana#1016: I guess you can frame it as an RL problem but thats... most likely a bad idea
Commutative Conjecture#6969: > I guess you can frame it as an RL problem but thats... most likely a bad idea
@kindiana
~~or an AGI complete one~~
kindiana#1016: I'm pretty sure the only way to calculate gradients through time analytically is to unroll, and gradient free methods are not great so 🤷♂️
kindiana#1016: I guess people are working on it: https://arxiv.org/abs/1901.09049
Commutative Conjecture#6969: @kindiana do you have links about gradient free methods?
kindiana#1016: Karpathy also has a blog post on reinforcement learning, I think he implements policy gradients from scratch
kindiana#1016: policy gradients, actor critic and DQN are the main types of techniques for gradient free ML (I'm by no means an RL expert so I probs missed some lol)
Daj#7482: Excuse the stupid question, but in this context, what does it mean for e.g. actor critic to be "gradient free"? You still use SGD, right?
kindiana#1016: you don't need a gradient wrt the loss function directly
Daj#7482: Isn't the loss function of the critic mapping states to rewards? Isn't that what we do gradient descent on?
kindiana#1016: yeah, but you don't need, say, a differentiable model of the environment to train the critic
Daj#7482: Ah ok. Would that mean that most unsupervised models are "gradient free"?
Daj#7482: Wouldn't GPT be gradient free? Or am I confused?
kindiana#1016: with GPT, there is a gradient between the network output and the loss, which you use to optimize the weights directly
Daj#7482: I'm confused how that differs from the critic then
Daj#7482: Maybe I need to learn more RL
kindiana#1016: with a gradent free method, there is no requirement that you can calculate "how good" the output of the actor in a differentiable way
kindiana#1016: so you have to train a critic to approximate that
goolulusaurs#1571: RL still uses gradients, usually when people say gradient free it means stuff like evolution/metaheuristics.
Daj#7482: > RL still uses gradients, usually when people say gradient free it means stuff like evolution/metaheuristics.
@goolulusaurs This is what I thought too
kindiana#1016: oh sorry might be confused on terminology
Kazumi#1297: gradient free is when it involves something that can't be differentiated. RL is gradient free because the policy is updated from the reward function, rather than getting the derivative with respect to the action
Kazumi#1297: you can only do gradient based RL if the entire world the actor is in is also differentiable
Daj#7482: > with a gradent free method, there is no requirement that you can calculate "how good" the output of the actor in a differentiable way
@kindiana The actor is trained approximately using the signal from the critic, and the critic is trained "normally" on reward signals, or am I just totally off with how AC models work?
kindiana#1016: yeah
kindiana#1016: with a critic it transforms the problem back to one that can be solved with sgd et al
Daj#7482: Yeah as in I'm totally off?
kindiana#1016: yeah as in thats how it works
Daj#7482: > gradient free is when it involves something that can't be differentiated. RL is gradient free because the policy is updated from the reward function, rather than getting the derivative respect to the action |
@Kazumi Yea this is what I thought for methods other than AC (and maybe DQN?)
Daj#7482: I guess my terminology was a bit muddled
goolulusaurs#1571: The whole family of techniques is called policy gradient
goolulusaurs#1571: https://medium.com/@aminamollaysa/policy-gradients-and-log-derivative-trick-4aad962e43e0
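The log-derivative trick from that post in a few lines: the environment and reward are treated as a black box, so instead of differentiating through them you push up the log-probability of the sampled action in proportion to the reward it received. A toy REINFORCE step (the reward function here is a made-up stand-in):
```
import torch
import torch.nn as nn

policy = nn.Sequential(nn.Linear(4, 32), nn.Tanh(), nn.Linear(32, 2))
opt = torch.optim.Adam(policy.parameters(), lr=1e-2)

def reward_fn(state, action):
    # stand-in for a non-differentiable environment
    target = int(state.sum() > 0)
    return 1.0 if action == target else 0.0

state = torch.randn(4)
dist = torch.distributions.Categorical(logits=policy(state))
action = dist.sample()                    # sampling: no gradient flows through here
reward = reward_fn(state, action.item())

loss = -dist.log_prob(action) * reward    # the log-derivative trick
opt.zero_grad()
loss.backward()
opt.step()
```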
Daj#7482: _sighs and adds yet another tab to the 200 already open_
Kazumi#1297: I can't even see the page titles of the tabs anymore
goolulusaurs#1571: Tree style tabs
Daj#7482: Firefox has scrolling tabs
goolulusaurs#1571: Best extension ever
Daj#7482: So nothing is stopping their population explosion other than my RAM
Daj#7482: > Tree style tabs
@goolulusaurs Wait that's a thing?
Daj#7482: Oh shit
goolulusaurs#1571: https://addons.mozilla.org/en-US/firefox/addon/tree-style-tab/
Daj#7482: This is actually brilliant, thanks
Daj#7482: Oh god oh fuck I have so many open tabs
Kazumi#1297: I have multiple windows open to sort what I'm using the tabs for
Daj#7482: Three windows at ~40-100 tabs a piece rip
Daj#7482: i3 makes everything deceptively easy to manage
goolulusaurs#1571: I haven't used i3, seems pretty rad though |
Daj#7482: It changed how I use computers forever
Daj#7482: I can never go back
Daj#7482: And it makes you look like a 1337 hacker lol
Commutative Conjecture#6969: what's i3
Commutative Conjecture#6969: oh, it's linux only
Commutative Conjecture#6969: 😦
Daj#7482: Just one of the many reasons I can never switch from Linux haha
Commutative Conjecture#6969: Just one of the many reasons I use nearly zero-config
Commutative Conjecture#6969: (as in, dumb emacs on linux, dumb vscode on windows, etc.)
Daj#7482: i3 looks far more complex than it is, I picked it up in like 30 minutes and it vastly increased my productivity
Daj#7482: Probably the best bang for my buck tool I've ever used
PM#3518: Joined the server.
Daj#7482: Hey @PM ! Welcome to the Debian of ML! Check the channel topic for info and don't hesitate to ask questions!
bmk#1476: Only 200 open tabs? *Amateurs*
bmk#1476: Thanks to OneTab I can have *over 10k tabs across all devices*
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/735133168193241158/IMG_20200721_075639.jpg
bmk#1476: My chrome on mobile doesn't even tell me how many tabs I have open
Daj#7482: I'm in this picture and I don't like it
Daj#7482: (It happens at 100 tabs iirc)
bmk#1476: Yeah and it's been like this for several years now and my tab opening has not slowed down |
Kazumi#1297: just giving you a judgmental smile
bmk#1476: So I'd hazard a guess of probably around 4-500 tabs
bmk#1476: If you're interested in helping with the Reading List Manager™ as a way to solve the tab problem once and for all that would be great
bmk#1476: It would certainly be a fun yak shaving project after the hell that is mtf
chup4cabr4#3178: Joined the server.
bmk#1476: @shawwn I'm on nuck and trying to use docker and I think I'm not in the docker group?
bmk#1476: ```ERRO[0000] failed to dial gRPC: cannot connect to the Docker daemon. Is 'docker daemon' running on this host?: dial unix /var/run/docker.sock: connect: permission denied
```
bmk#1476: also I can't install `python-dev` using apt
bmk#1476: oh wait, I can sudo apt-get but not apt?
bmk#1476: huh
bt#7597: Joined the server.
Daj#7482: Hey @bt @chup4cabr4 ! Welcome to the Tensorflow Sweatshops! See the channel topic for more info and don't hesitate to ask questions!
quietquiet#2524: Joined the server.
Daj#7482: Hey @quietquiet ! Welcome to the AGI Panicbunker! See the channel topic for more info and don't hesitate to ask questions!
shawwn#3694: @bmk ah, I can add you
shawwn#3694: done
shawwn#3694: anything else you need?
bmk#1476: no, that's all
Sid#2121: @shawwn Daj told me you were giving implementing sampling a go - any progress? |
shawwn#3694: just remember that if we somehow cause the server to go offline + it doesn't come back online, it will take some time to fix. (I don't have access to the hardware resets)
shawwn#3694: certainly not
shawwn#3694: if I had progress, I'd be shouting it out
Sid#2121: hah, just thought i'd check in 🙂 There are some mtf resources i found that I can point you to if they'd be any help
Sid#2121: they have top_k and autoregressive sampling hidden somewhere
Sid#2121: ```SAMPLING:
autoregressive sampling here:
https://github.com/tensorflow/mesh/blob/0fe007b765f8efe4a3ebe6a97f05a2fc04e109a8/mesh_tensorflow/transformer/transformer.py#L1009
top_k sampling:
https://github.com/tensorflow/mesh/blob/master/mesh_tensorflow/ops.py#L4983
mtf._iterative_top_k()
sample with temperature:
https://github.com/tensorflow/mesh/blob/master/mesh_tensorflow/ops.py#L5046
mtf.sample_with_temperature()```
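Conceptually those two mtf ops boil down to the following (plain-numpy sketch, not the mesh-tensorflow code itself): scale logits by a temperature, keep only the top-k, and sample. Autoregressive sampling is then just this in a loop, appending each sampled token and re-running the model.
```
import numpy as np

def sample_top_k(logits, k=40, temperature=0.8, rng=np.random.default_rng()):
    logits = np.asarray(logits, dtype=np.float64) / temperature
    top_k_idx = np.argsort(logits)[-k:]              # indices of the k largest logits
    probs = np.exp(logits[top_k_idx] - logits[top_k_idx].max())
    probs /= probs.sum()
    return int(rng.choice(top_k_idx, p=probs))       # next-token id
```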
Daj#7482: https://ai.facebook.com/blog/deep-learning-to-translate-between-programming-languages
Sid#2121: 👀
Sid#2121: i wish i knew *any* other programming languages so i could tell how good this is
Sid#2121: also inb4 GPT-1T-github does this better than any specialized solution |
bmk#1476: Can haz Pytorch->tf translation
Daj#7482: Pretty sure understanding TF is an AGI-complete problem
Daj#7482: Maybe ASI
Commutative Conjecture#6969: stupid q
Commutative Conjecture#6969: with the super big batch sizes that are used for gpt3
Commutative Conjecture#6969: how come it knows so much **specific** information?
Commutative Conjecture#6969: like, how was this information targeted and learnt?
Commutative Conjecture#6969: ```
We are a hacklab working on artificial intelligence. We are open to everyone and aim to solve the alignment problem. Our name is| the ‘AI Alignment Forum’. The ‘AI-Alignment Forum’ brings together experts on artificial intelligence, the general public, and technology developers, with the goal of solving the problem of preventing the abuse of artificial intelligence. We meet every two months, discuss developments within the field, and exchange ideas. The name of our group is now ‘ai-alignment.com’. This presentation will cover our discussions in the past year, including a summary
```
Daj#7482: Because it's AGI :firealarm:
Commutative Conjecture#6969: 😦
bmk#1476: :firealarm:
Deleted User#0000: https://cdn.discordapp.com/attachments/729741769738158194/735200382921211995/WB_Chart_7_21_2020_11_22_09_AM.png
Deleted User#0000: woohoo, mixture of experts work!
Deleted User#0000: this is combining it with routing transformers
Deleted User#0000: added to two middle layers
Sid#2121: 👀 👀 👀
Sid#2121: I haven't started working on ours yet, but your model sounds great
Sid#2121: *now come over to the tf mesh dark side* |
bmk#1476: woah, awesome!
bmk#1476: are there training speed/memory improvements?
Deleted User#0000: it's about 20% slower, and you pay the memory cost of the increase in parameter count of the experts, however many you'd like to add
Deleted User#0000: the above is on an auto-regressive model, a la gpt
Deleted User#0000: seems to work well!
bmk#1476: i thought the point of MoE was to be faster and/or more memory efficient?
Deleted User#0000: yup! the 4 experts and 16 experts are about the same speed!
Deleted User#0000: the 20% slower is just going from 1 -> 4, because of the extra gating and combining the weighted outputs of the experts
bmk#1476: Oh, wow
bmk#1476: So you could just add more experts to fill up memory?
bmk#1476: It doesn't seem like the 16 is clearly better, though
Deleted User#0000: yea! that's the whole premise of works like Gshard https://arxiv.org/abs/2006.16668
bmk#1476: so is gshard not much better than a much smaller transformer despite having so many params?
Daj#7482: Yea I've heard that said several times, don't recall by whom
Deleted User#0000: people used to do mixture of experts with RNNs, back in the days of bidirectional LSTMs for language modeling
Deleted User#0000: so it shows mixture of experts alone isn't enough
Deleted User#0000: here, it is to provide implicit knowledge to the attention networks to combine, i gather
Deleted User#0000: there also seems to be something to this paper https://arxiv.org/abs/2003.07845
Deleted User#0000: https://cdn.discordapp.com/attachments/729741769738158194/735248676867407923/WB_Chart_7_21_2020_2_32_48_PM.png
jonas#1562: Joined the server. |
Deleted User#0000: just playing around with it on my dinky enwik8 benchmark
Deleted User#0000: code is https://github.com/sIncerass/powernorm/blob/master/fairseq/modules/norms/mask_powernorm.py
Daj#7482: Hey @jonas ! Welcome to the AGI Petridish! Check the channel description for info and don't hesitate to ask questions!
TRAVELINGKITTY#9071: Joined the server.
Josh#5264: Joined the server.
Eric Anderson#0751: Joined the server.
Josh#5264: Hi folks. I'm Josh, and I work at OpenAI on topics related to safety and preparedness. Mostly just peeking in here to see what it is out of curiosity, but if anyone has any questions re: AI safety stuff, I'm happy to share perspectives to the extent that time permits.
Josh#5264: @Daj I'm not 100% sure, but did you visit the office at some point last year? I have a vague memory, possibly false, of chatting with you in the cafeteria during a lunch convo.
bmk#1476: Hello! Most of the server is asleep now so I guess I'm the greeter, haha. We sometimes have lively discussions about (among other things) safety related stuff in #alignment-general
Atul#4449: Joined the server.
Josh#5264: Cool beans! Feel free to tag me whenever something interesting brews up over there, and if I'm available I'll try to pop in and be useful.
Sid#2121: Hey @Atul ! Welcome to Deep Thought's neocortex! Check the google doc in the channel description for a project and reach out if you have any questions 🙂
Daj#7482: Hey @Josh ! I did visit OA last year and I talked to a bunch of people in the cafeteria, fond memory, so nice to see you here!
Deleted User#0000: Joined the server.
Daj#7482: Hey @Deleted User ! Welcome to Dependency Hell! Check the channel description for info and don't hesitate to ask questions!
Atul#4449: Thanks Sid!
chup4cabr4#3178: > how come it knows so much **specific** information?
@Commutative Conjecture the parameters balance out against the batch size. your point is valid for batch_size++ with model_size constant
Commutative Conjecture#6969: @chup4cabr4 For that to be true, it means that diff things in the batch excite vdiff parameters
chup4cabr4#3178: you're correct, or formulated differently - that the model organizes many/all orthogonal dimensions of meaning in a way that it can use different parameters for different things |
chup4cabr4#3178: which is not guaranteed to work ofc
Commutative Conjecture#6969: @chup4cabr4 yup, it seems crazy that it works for this
chup4cabr4#3178: > @chup4cabr4 yup, it seems crazy that it works for this
@Commutative Conjecture agreed. to me this is almost like evolution. absolutely unbelievable what can happen if you run a margin directed change for a few million iterations
Commutative Conjecture#6969: > @Commutative Conjecture agreed. to me this is almost like evolution. absolutely unbelievable what can happen if you run a margin directed change for a few million iterations
@chup4cabr4
eh, quite diff feeling fmpov
chup4cabr4#3178: how so?
Commutative Conjecture#6969: the point here is the opposite, that with a few iterations you can get vspecific information
Commutative Conjecture#6969: that's very impressive and means online learning could be huge
Commutative Conjecture#6969: evolution is more like, even for some generic info, you need soooo many iterations / mutations
chup4cabr4#3178: https://www.sciencealert.com/this-frog-was-born-with-eyes-in-the-roof-of-its-mouth
chup4cabr4#3178: most macromutations do not survive, nature's gradients are too random
chup4cabr4#3178: if you have a good loss-direction you might get more stable results and can use higher "learning rates"
chup4cabr4#3178: your point is valid though - speed is astounding
mobob#2383: Joined the server.
star#5322: Joined the server.
notTheChosenOne#9540: Joined the server.
Daj#7482: Hey @mobob @star @notTheChosenOne ! Welcome to the Self Normalizing AGI Lab! Check the channel description for info and don't hesitate to ask questions!
Sid#2121: inb4 we write a 3 page paper with 200 pages of appendices |
Sid#2121: for the memes
Daj#7482: That's a lot of effort for a meme
mefrem#1476: Joined the server.
Daj#7482: Hey @mefrem ! Welcome to the Loss Olympiads! Check the channel description for info and don't hesitate to ask questions!
notTheChosenOne#9540: I read the channel description and skimmed the Google doc that it links to. I still have a question about the whole project though. I'm sure you discussed it somewhere already but I could not find a discussion about it on this channel.
What do you think are the up- and downsides of an *open source* GPT-3 (or any other similarly capable language model) as opposed to one whose uses / misuses are controlled by an organization (e.g. OpenAI)? And why do you think the upside outweighs the downside?
The questions are not meant to be criticism, I'm just genuinely curious about what you guys think.
Daj#7482: Oh man that is a _long_ topic
Daj#7482: You can see #alignment-general for pages after pages of that discussion
Daj#7482: But I don't fault you for not wanting to dredge through that
Daj#7482: We don't have a write up of our exact ethical reasoning because we're still so actively uncertain about it
Daj#7482: and because different team members have slightly different views, though i think we've been (somewhat) converging lately
Commutative Conjecture#6969: @notTheChosenOne
@Daj
I certainly plan to write about this on some hackmd, putting arguments in their most concise form. I think this is a very important thing to do, and that other orgs should do this in a better way. Particularly, in an *attackable*/*refutable* way.
I am currently in holidays, and my first priority on my holiday-free-time is to train a simplified transformer with TFM on my v3-8 though
mefrem#5884: Joined the server. |
Daj#7482: Hey @mefrem ! Welcome to the Aperture Science TPU Laboratories! Check the channel description for info and don't hesitate to ask questions!
Commutative Conjecture#6969: anyone got a good link detailing under which assumptions SELU is self-normalizing and what is normalized exactly?
Daj#7482: There is very little good info on SELU
Daj#7482: From what I understand it normalizes activations towards a stdnormdist as long as the input is stdnormdist and the weights have a certain initialization
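To unpack that a bit: SELU's two constants are chosen so that, with LeCun-normal weights (std = 1/sqrt(fan_in)) and roughly standard-normal inputs, activations get pulled back toward zero mean and unit variance layer after layer. A small numpy sketch of that fixed point (the constants are the ones from the SELU paper):
```
import numpy as np

ALPHA, SCALE = 1.6732632423543772, 1.0507009873554805   # SELU constants

def selu(x):
    return SCALE * np.where(x > 0, x, ALPHA * (np.exp(x) - 1.0))

rng = np.random.default_rng(0)
x = rng.standard_normal((1024, 256))                     # standard-normal inputs
for _ in range(50):
    w = rng.standard_normal((256, 256)) / np.sqrt(256)   # LeCun-normal weights
    x = selu(x @ w)

print(x.mean(), x.std())   # both stay close to 0 and 1 after 50 layers
```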
Commutative Conjecture#6969: hmm
Commutative Conjecture#6969: do we have ideas for curriculum learning?
Commutative Conjecture#6969: do we have visualizations on how much weights change / the distribution of weights changes over training?
Daj#7482: Nope and nope, feel free to look into either hah
Commutative Conjecture#6969: as soon as i start training stuff, i'll do so
Commutative Conjecture#6969: so many stupid ideas
pjox#1301: Joined the server.
Sid#2121: Hey @pjox ! Welcome to the server! just double checking - are you Pedro?
pjox#1301: Hello ! Yes I'm Pedro, I just changed the photo as well so it is clearer 😁
Sid#2121: hahaha. The beret and baguette gave me a clue. so, we have a sort of very general project overview in the google doc in the channel description
Sid#2121: model development is in #gpt-neox-devs , data gathering / cleaning is in #the-pile
Sid#2121: I'll ping you when we have a proper readme and if you could send your github username to @Daj - we'll add you to the repo 🙂
pjox#1301: Ok! Thank you, I'll take a look! And the github username is the same as the one I use for discord: pjox (https://github.com/pjox)
Sid#2121: Great. thanks again for the help!
pjox#1301: No no, on the contrary, thank you for inviting me here and letting me know more about the project! 😄
mefrem#5884: Hi! I'd like to introduce myself. I'm Max. I hail from Baltimore, Maryland. Besides being a junior ML engineer, I feel like I'd have a lot in common with the community here. Looking forward to getting to know y'all |
mefrem#5884: https://github.com/mefrem is the Github, it's mainly been updates to my Jekyll website recently. Can't wait to get up to speed. Likes include gymnastics, Twitter, monetary economics, and the comments section at SSC
Daj#7482: Happy to have you here and look forward to what we can do together 👍
Daj#7482: Let me invite you to the repo
Daj#7482: Invite sent! Work on the model happens in #gpt-neox-devs , work on data collection and processing in #the-pile
Daj#7482: @Sid and @bmk are the main devs along with myself, @Commutative Conjecture has been getting into our code recently from scratch as well, as have several others I am surely forgetting
bmk#1476: Welcome @mefrem!
bmk#1476: Right now the two main specific areas for work are optimizing the model on mesh tensorflow, which is by far more crucial, and pdf to text extraction and cleaning, which is a thing we need to get done Eventually™
mefrem#5884: I think my order of operations will be looking over code, getting refreshed on some GPT-2 and -3 specifics / papers, and then looking over mesh tensorflow bottlenecks. Apparently it's a bear, which is rather exciting : )
pjox#1301: Oh I should have maybe presented myself also, sorry that was kind of rude on my part 😅
Sid#2121: Hey @mefrem ! welcome to the server!
pjox#1301: I'm Pedro, I'm a Ph.D student at Inria and at Sorbonne Université in France. I work in nlp, specifically in large corpora, training language models, a little bit in parsing and NER and I also work with historic texts. Apart from that I love coffee ☕ and cookies 🍪. I'm very glad to join the server! My github is https://github.com/pjox, but please don't look too much at my code, it is quite ugly 😅
Sid#2121: 👋
Sid#2121: hah, I think a proposal for The Pile's name at one point was just HUMONGOUS https://cdn.discordapp.com/attachments/729741769738158194/735962834906775653/Screenshot_2020-07-23_at_22.52.41.png
bmk#1476: we could call the full CC component that
bmk#1476: we dont have a name finalized for it yet and CCTC is boring
Sid#2121: we'd have to think of some convoluted acronym for HUMONGOUS
pjox#1301: I like "The Pile" as a name 😁
Sid#2121: Horribly Unwieldy, Massively Open-sourced New General-purpose Oscar-inspired universal s..dataset?
bmk#1476: Humongous langUage MOdelling interNet General purpOse Use dataSet
Sid#2121: > @Sid and @bmk are the main devs along with myself, @Commutative Conjecture has been getting into our code recently from scratch as well, as have several others I am surely forgetting |
@Daj @goolulusaurs has also been getting into some mesh stuff iirc
Sid#2121: lmao
bmk#1476: honestly i kind of like it
bmk#1476: and we should make one version for each order of magnitude
Sid#2121: I just love that you've totally disregarded the entire concept of acronyms
bmk#1476: that makes it funnier
Sid#2121: by choosing a random point within the word
bmk#1476: as well as the recursiveness
bmk#1476: hey, it uses the first letter more than would be if randomly sampled
bmk#1476: HUMONGOUS-14 would be the 100TB version, HUMONGOUS-13 for 10TB, HUMONGOUS-12 for 1TB, etc
bmk#1476: or just `HUMONGOUS-100TB`
bmk#1476: and for the memes we need to include them for *every* order of magnitude
bmk#1476: yes, even down to a single byte
Sid#2121: i approve of this name, love the acronym
Daj#7482: We should definitely release a 1 byte version
Sid#2121: lol
Daj#7482: And it _better_ be the most representative byte
Sid#2121: ```e```
Sid#2121: wait
Sid#2121: that's more than a byte |
Sid#2121: fuck
Daj#7482: It's less than a byte in ascii
bmk#1476: i actually ran an analysis on the most representative byte
bmk#1476: well, codepoint actually
Sid#2121: ```An ASCII character in 8-bit ASCII encoding is 8 bits (1 byte), though it can fit in 7 bits.``` well whaddya know
bmk#1476: https://gist.github.com/leogao2/6f0cb98e63126cc40759e58df7c511a8
bmk#1476: the most representative byte is ` `
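(The linked gist has the actual analysis; the idea is just a frequency count over codepoints, something like:)
```
from collections import Counter

def most_common_codepoint(text):
    return Counter(text).most_common(1)[0]

print(most_common_codepoint("the quick brown fox jumps over the lazy dog"))
# (' ', 8)
```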
Sid#2121: *nice*
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/735972719329280180/tpugobrrr.png
Sid#2121: oh god i'm not internet enough i don't know this meme format
mefrem#5884: Haha the go brr meme https://knowyourmeme.com/memes/money-printer-go-brrr
Sid#2121: oh
pjox#1301: > the most representative byte is ` `
@bmk I like this corpus, it's even multilingual 😮
Sid#2121: been unknowingly using the go brr in textual format without knowing the origin
mefrem#5884: It cheekishly captures the clash between theory and reality. Reality is indifferent to your theories
bmk#1476: the real surprise here imo is how high the hourly us military budget is
bmk#1476: i mean, i knew it was high, but *damn*
bmk#1476: i'm willing to bet they're already doing this kind of thing in secret
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/735975033922388008/Screenshot_2020-07-23-15-41-36-053_com.twitter.android.png |
bmk#1476: > same order of magnitude
bmk#1476: > like, <1TB vs 200 TB of availabile data
bmk#1476: I think he meant within the same order of magnitude number of orders of magnitude
bmk#1476: I can't wait for the day when we're just training transformers on bytes. Like, not even bothering to convert images into RGB tensors, just feeding the raw png bytes directly into the network
Some One#7897: Joined the server.
Stedman#4720: Joined the server.
Stedman#4564: Joined the server.
shawwn#3694: @Some One @Stedman Welcome to the Lair of the Haunted TPUs. OooOooO 👻 The project roadmap is in the channel description; feel free to ask questions.
Deleted User#0000: @bmk https://papers.nips.cc/paper/7649-faster-neural-networks-straight-from-jpeg.pdf it's going to happen
Deleted User#0000: my memory transformer xl works!
Deleted User#0000: lol
Sid#2121: @Deleted User you're pumping out transformer implementations faster than i can learn about them lol
Deleted User#0000: this is the last one, it's my own idea lol
Sid#2121: ```memory is updated not with a queue, but with attention (wip) ``` 👀
Sid#2121: you writing a paper?
Deleted User#0000: nah, i'm not interested in writing papers, just building and sharing what works
Sid#2121: attention on attention
Sid#2121: we have ~7B param models running now
Sid#2121: we need memory efficient transformers lmao
Deleted User#0000: have you heard of the new linear attention transformers? |
Deleted User#0000: https://www.youtube.com/watch?v=hAooAOFRsYc
Deleted User#0000: point you at one of Yannic's videos 🙂
Deleted User#0000: i think there's something there
Sid#2121: ooh, going out now, but i'll watch that later
Sid#2121: we have @aquajet working on getting universal transformer working
Deleted User#0000: actually i know there's something there, because a researcher used one of my linear implementations for a gigantic sequence task
Deleted User#0000: and it worked
Deleted User#0000: lol
Deleted User#0000: nice! i hear the performance / compute trade-off isn't that great with universal transformer
Sid#2121: I personally don't know a thing about it, but the more transformers we can plug in the better imo
Sid#2121: @Deleted User do you know much about memory-compressed attention? https://arxiv.org/pdf/1801.10198.pdf
Sid#2121: anyway gtg
Deleted User#0000: yea, it's worth testing out, the auto-regressive linear attention worked ok for me, but it started to perform badly at 4096
Deleted User#0000: @Sid nice paper! i haven't read that one, but it looks suspiciously like Facebook's new Linformer
Deleted User#0000: https://cdn.discordapp.com/attachments/729741769738158194/736258459456241824/Screenshot_from_2020-07-24_09-28-02.png
Deleted User#0000: i'll read it in more detail today, thanks for the forward!
Deleted User#0000: i'll start reading up on tensorflow mesh this weekend
Deleted User#0000: does anyone know if Jax is any much faster? i was afraid to touch that because it is so new
Deleted User#0000: is there a speed benefit, or is it mostly just the vmap and grad conveniency?
aquajet#7800: @Deleted User So I'm still trying to learn about memory efficient transformers. Aside from linformer, reformer, and universal transformers, what other papers should I look at? |
aquajet#7800: I've never used jax
aquajet#7800: I remember people saying there was a speed improvement with Julia, although idk how good ml frameworks are there rn
Deleted User#0000: so i know i spent a lot of time on reformer, but it actually never worked that well
Deleted User#0000: so i wouldn't recommend lsh based attention yet, more research needed there
Deleted User#0000: i think the most interesting development is EPFL's Transformer as RNN
Deleted User#0000: because it can give up to 4000x speedup on decoding
Deleted User#0000: they found a mathematical link between RNNs and decoder Transformer (GPT like)
Commutative Conjecture#6969: Link??
Deleted User#0000: Deepmind independently came out with a paper https://arxiv.org/abs/2006.03555
Deleted User#0000: with similar convergent findings
Deleted User#0000: https://www.youtube.com/watch?v=hAooAOFRsYc
Deleted User#0000: however, results are not as good as full attention
Deleted User#0000: the truth is, nothing is as good as full attention
Deleted User#0000: none of the sparse attention variants
Deleted User#0000: another paper worth reading is this one https://arxiv.org/abs/2007.03356
Deleted User#0000: basically, a couple papers have suggested, for language based tasks, you should stick with local sliding window based attention of reasonable size (256 -> 512)
Deleted User#0000: and then have long range sparse attention on the later layers
Deleted User#0000: to integrate the knowledge
goolulusaurs#1571: Somewhat related, do you know if anyone has tried a wavenet like approach for LMs?
Deleted User#0000: Jack Rae knows what he's talking about, since he's the mind behind Compressed Transformers |
Deleted User#0000: i'm not familiar with wavenet actually, not that i know of, but i'm not an expert there
Deleted User#0000: mostly LMs were bidirectional LSTMs (now defunct)
aquajet#7800: > none of the sparse attention variants
@Deleted User So is it proven that sparse attention will always not be as performant as full attention or do we just not know of a variant that works rn
Deleted User#0000: pure attention, or mix of dynamic convolutions with attention
bmk#1476: Right now we're using mostly local attention
bmk#1476: Is it even necessary to have *any* global attention layers?
bmk#1476: I think that remains to be seen
Deleted User#0000: https://www.pragmatic.ml/a-survey-of-methods-for-incorporating-long-term-context/
goolulusaurs#1571: Wavenet was good for generating audio., and mainly used convolutions. I know there have been some convolutional LMs like bytenet. https://cdn.discordapp.com/attachments/729741769738158194/736261201561911437/unknown.png
Deleted User#0000: Madison May probably has the best compiled blog post on long range attentoin
bmk#1476: and if we do need global attention, then I don't know if linear attention is what we want
aquajet#7800: what's the difference between local and global attention?
bmk#1476: local = only attending to last x tokens
bmk#1476: for us, x = 128
Deleted User#0000: @goolulusaurs convolutions haven't succeeded in language modeling anywhere, but there is one recent paper that suggests there may be a future https://arxiv.org/pdf/2002.03184.pdf
Deleted User#0000: but it requires a custom cuda kernel and all that
bmk#1476: so with, say, 32 layers like our current biggest model we've gotten working, that can attend 32*128 = 4096 into the past
Deleted User#0000: https://github.com/lucidrains/local-attention
Deleted User#0000: just burn the image in your head, and you understand local attention |
bmk#1476: that's more than enough to cover a 2048 context
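As a rough illustration of that local-attention scheme (the window of 128 and the layer arithmetic are taken from the messages above; numpy-only, illustrative):
```python
import numpy as np

def local_causal_mask(seq_len: int, window: int = 128) -> np.ndarray:
    """True where query i may attend to key j: causal and within `window` tokens."""
    i = np.arange(seq_len)[:, None]
    j = np.arange(seq_len)[None, :]
    return (j <= i) & (i - j < window)

mask = local_causal_mask(seq_len=2048, window=128)
# Each layer only looks `window` tokens back, but stacking layers lets
# information propagate roughly n_layers * window positions (32 * 128 = 4096).
print(mask.sum(axis=1)[[0, 1, 127, 128, 2047]])  # tokens attended per query
```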
Deleted User#0000: yeah, the "Do Transformers need Long Range" paper shows that doing long range sparse attention on the bottom layers is wasted computation
aquajet#7800: makes sense, thanks for the help everyone
bmk#1476: bottom = close to the input or output?
Deleted User#0000: which are not necessary, since they are still integrating local information
bmk#1476: oh
Deleted User#0000: i think Gwern's hypothesis is worth testing, whether character based learns arithmetic faster
Deleted User#0000: character-based LMs
Deleted User#0000: as well as other tasks
bmk#1476: it would be a lot more expensive, though, no?
Deleted User#0000: so i bring up the long-range attention, since it will be needed
aquajet#7800: yeah
Deleted User#0000: yeah it will
bmk#1476: do you think cutting down the embed size will make sense
Deleted User#0000: you will need to try one of the long-range solutions
bmk#1476: or will that lower performance
bmk#1476: like, converting to char based, and then changing embd down until the memory usage is about the same as before
Deleted User#0000: i'm not sure, papers like Albert have shown you can get away with an embedding size of 128
bmk#1476: huh
Deleted User#0000: but, we don't understand the emergent properties of scale |
Deleted User#0000: bigger may be just better.
bmk#1476: so it will be interesting to see if smaller embd + smaller tokens cancel out
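A back-of-envelope comparison of that trade-off; the 4x char/BPE length ratio and the example dimensions are assumptions for illustration, not measurements:
```python
def per_layer_activations(seq_len, d_model):
    """Very crude per-layer activation counts (floats), ignoring heads and batch."""
    attn_scores = seq_len ** 2   # quadratic attention logits
    hidden = seq_len * d_model   # residual-stream activations
    return attn_scores, hidden

bpe  = per_layer_activations(seq_len=2048, d_model=4096)
char = per_layer_activations(seq_len=2048 * 4, d_model=1024)  # ~4x tokens, 4x smaller embed

print("bpe  (attn, hidden):", bpe)   # (4194304, 8388608)
print("char (attn, hidden):", char)  # (67108864, 8388608)
# Shrinking the embedding cancels the hidden-state cost, but the quadratic
# attention term still grows ~16x with 4x longer sequences.
```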
Deleted User#0000: i think it's safer to go with the dimensions in the OpenAI paper
bmk#1476: me too
bmk#1476: though i do want to see a universal unicode transformer eventually
Deleted User#0000: definitely will happen.. in our lifetimes, if we don't get destroyed by this virus
Deleted User#0000: lol
bmk#1476: the virus *itself* probably wont destroy us
Deleted User#0000: it seems to be destroying America
Deleted User#0000: lol
bmk#1476: *that's not the virus. that's ~~asbestos~~ politics.*
Deleted User#0000: yea true
Deleted User#0000: > Is it even necessary to have *any* global attention layers?
@bmk the takeaway is, having global attention on just the last couple of layers may be enough
aquajet#7800: https://twitter.com/ben_j_lindsay/status/1285916776497401860?s=21
Sid#2121: Whoa. that's incredible!
Sid#2121: who'd have thought gpt architecture could have such flexibility
Deleted User#0000: thanks for the memory-compressed transformers paper!
Deleted User#0000: it is indeed similar to Linformer, but they made it work auto-regressively
Deleted User#0000: i'm going to try it today loll |
Deleted User#0000: will let you know how it goes
Sid#2121: oh nice, well that's really good news for us since there's a ready to go implementation in mtf
Sid#2121: @Deleted User if you want the code to look over - https://github.com/tensorflow/mesh/blob/master/mesh_tensorflow/layers.py#L1915
Deleted User#0000: ```memory_pos = (
mtf.range(x.mesh, memory_length, dtype=tf.int32) * compression_factor
+ (compression_factor - 1))```
Deleted User#0000: perfect
Deleted User#0000: the masking is the only hard part
Deleted User#0000: if you do this scheme, you have to use local attention
Deleted User#0000: because there will be 'blindspots' for the queries
Deleted User#0000: for at most the compression factor - 1
Deleted User#0000: their solution is actually similar to what i ended up trying to do with auto-regressive linear attention https://github.com/lucidrains/linear-attention-transformer/
Deleted User#0000: i think you'll be able to easily find a sliding window attention for tensorflow/mesh somewhere
Deleted User#0000: their compression factor of 3 falls in line with other literature on compressing tokens too. compressive transformer uses max compression factor of 4
archivus#7382: > https://twitter.com/ben_j_lindsay/status/1285916776497401860?s=21
@aquajet any idea if replacing the first layers with something that expects an image is possible?
archivus#7382: Should still output text
archivus#7382: I could feed it pixel values I know but that seems very inefficient
aquajet#7800: look into iGPT
aquajet#7800: i think it fed in raw pixel values |
aquajet#7800: https://openai.com/blog/image-gpt/
aquajet#7800: and received pretty decent results
jsiewierski#2124: Joined the server.
aquajet#7800: idk how decoding would work though. You can't just autoregress the normal way since the output is completely different from the input
crcdng#8439: Joined the server.
woopwoop#6813: Joined the server.
Deleted User#0000: @archivus what are you trying to do? do you mean whether attention networks can accept images?
Sid#2121: > if you do this scheme, you have to use local attention
@Deleted User can you expand on this? You mean, if we use this, we should have local attention on some layers too?
Sid#2121: Hey @jsiewierski @crcdng @woopwoop ! Welcome to the Lumbridge of Language Models! Make yourselves at home, and check the google document in the channel description for a project overview
Deleted User#0000: Joined the server.
aday#7393: Joined the server.
Deleted User#0000: @Sid yea, i believe you do
Deleted User#0000: imagine you have 8 queries and 8 keys
Deleted User#0000: but you compress the 8 keys by a factor of 2, into 4 keys
Deleted User#0000: q1 q2 q3 q4 q5 q6 q7 q8
k2 k4 k6 k8
Deleted User#0000: q7 is missing attention on k7
Deleted User#0000: likewise for q5 and k5
Deleted User#0000: they use local attention in there with a length of 256 |
Deleted User#0000: which is in line with what other papers do
Deleted User#0000: (Longformer, Routing Transformer, etc)
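A tiny sketch of the blindspot point above, assuming a compression factor of 2 over 8 positions with each compressed key placed at the last position it covers (illustration only):
```python
import numpy as np

seq_len, factor = 8, 2
# Compressed key positions (0-indexed): each key summarizes `factor` tokens
# and sits at the last position it covers, i.e. tokens 2, 4, 6, 8 (1-indexed).
key_pos = np.arange(factor - 1, seq_len, factor)

for q in range(seq_len):
    visible = key_pos[key_pos <= q]                     # causal compressed keys
    covered = (visible.max() + 1) if len(visible) else 0
    blind = q + 1 - covered                             # recent tokens no key covers
    print(f"q{q + 1}: compressed keys {[int(k) + 1 for k in visible]}, blindspot = {blind}")
# Odd-positioned queries (q1, q3, q5, q7) miss up to factor-1 of the most
# recent tokens, which is why a local-attention branch is used alongside.
```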
cdossman#8999: Joined the server.
Sid#2121: Hey @cdossman ! Welcome to the TPU go brrr hotline. Do you need your TPUs to be brred today?
Sid#2121: (pls check the channel description for an actual description of what this place is, hah)
bmk#1476: *brrrrrrrrrrr*
summerstay#1153: Joined the server.
fairQ#1013: Joined the server.
JP#1336: Joined the server.
Sid#2121: Hey @summerstay @fairQ @JP ! Welcome to the Lascaux Cave Paintings of AGI! Check the channel description for a project overview and feel free to reach out with any questions 🙂
t3lo#5663: Joined the server.
arfa#0882: Good luck. I'm jelly https://twitter.com/theshawwn/status/1286788440286285824?s=20
aquajet#7800: Hey @t3lo! Welcome to the transformer transformation lounge! Check the channel description for a project overview and feel free to ask any questions
Sid#2121: > Good luck. I'm jelly https://twitter.com/theshawwn/status/1286788440286285824?s=20
@arfa Thanks arfa 🙏
Louis#0144: Joined the server.
arfa#0882: :aPES_Sip:
Deleted User#0000: Joined the server.
Erik Nijkamp#8820: Joined the server.
aquajet#7800: Hello @Louis @Deleted User @Erik Nijkamp ! Welcome to the TPU temple! Check the channel description for a project overview and feel free to ask any questions |
cdossman#8999: @Sid I hear you guys are working on a open source GPT-3
astralplain#7852: Joined the server.
Sid#2121: Hey @cdossman ! That's the gist of it yeah, along with a massive open source dataset - #the-pile . There's lots more info in the google doc in the channel description
cdossman#8999: Reading it now
Sid#2121: Hey @astralplain ! Welcome to Neuromancer's RAM! Check the google doc in the channel description for more info and please reach out if you have any questions 🙂
zkf#7512: Joined the server.
conifer#0001: Joined the server.
bmk#1476: Welcome @zkf @conifer to the GPT Replication Emporium!
conifer#0001: Hello! I don't know like, any ML, but I figured there would be smart ppl doing things so I could see how it worked somewhat, and maybe learn something
Louis#0144: Might not be the best place for novices
Louis#0144: tbh
shawwn#3694: @conifer If you check #communities, we're sort of the laid back version.
Louis#0144: I assumed most people here are at least grad students
shawwn#3694: (by "we" I mean that community; the other discord.)
shawwn#3694: you'd probably find some resources there.
bmk#1476: I'm not a big fan of gatekeeping, if you want to hang around you're more than welcome to do so
x.#1490: theres probably going to be 600 people joining as tweet gain traction
Louis#0144: oh true
bmk#1476: There's quite a bit of other stuff that needs work too if you want to help
bmk#1476: also here are some resources for complete novices: |
bmk#1476: math (for completeness, feel free to skip if you already know this stuff):
https://www.youtube.com/playlist?list=PLZHQObOWTQDMsr9K-rj53DwVRMYO3t5Yr
https://www.youtube.com/playlist?list=PLZHQObOWTQDPD3MizzM2xVFitgF8hE_ab
https://www.youtube.com/playlist?list=PLZHQObOWTQDNU6R1_67000Dx_ZCJB-3pi
more specific stuff for this project:
http://jalammar.github.io/illustrated-transformer/
https://arxiv.org/abs/1706.03762
https://arxiv.org/abs/1811.02084
https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf
https://arxiv.org/abs/2005.14165
https://arxiv.org/abs/1811.06965
https://arxiv.org/abs/2006.16668
bmk#1476: @conifer
shawwn#3694: (someone pin this!)
bmk#1476: Pinned a message.
bmk#1476: this is a very, *very* brief introduction to the very basic math, as well as the parts most applicable to this project
bmk#1476: there's a lot missing in between, but ML is a big field
Louis#0144: you probably want some linear programming tbh
Louis#0144: so you can get a feel for what optimization is
bmk#1476: optional requirement |
zkf#7512: Have you folks considered using any of the global memory techniques from Longformer/Extended Transformer and applying them to GPT-*? You can get a lot of the benefits of full (non sparse) attention,but potentially expand the input size to thousands or tens of thousands of tokens...
Louis#0144: locality sensitive hashing is great
Louis#0144: dense attention is not great
Louis#0144: lmao
Louis#0144: there is a sparse version of LSH
Louis#0144: it distributes really well too, I saw a paper on doing like this weird monte carlo hashing thing for decision tree branch skipping stuff
Louis#0144: Ill see if I can find it
zkf#7512: I’m more impressed by routing transformer than reformer tbh. LSH is not super efficient in practice
Louis#0144: yeah I'd agree with that
Louis#0144: LSH is very reliable and simple though
zkf#7512: Longformer and ETC both have effectively full attention
Louis#0144: compared to routing transformers
bmk#1476: the hard part is actually implementing it on mtf
bmk#1476: if you can do it we can use it
Louis#0144: see.. idk about the advantage of full attention
zkf#7512: Beats not-full if you can fit it in memory 😉
Louis#0144: well...
Louis#0144: sometimes
Louis#0144: not always
zkf#7512: Well it’s not worse anyway |
Louis#0144: there are specific examples where feature-engineered attention vectors actually beat full attention
Louis#0144: particularly in NTM
Louis#0144: attention is a weird beast
zkf#7512: If you have extra structure, ETC lets you put it into a graph and direct attention preferentially along certain paths
Louis#0144: so just a GCN?
Louis#0144: lol
zkf#7512: So you can have sentences attend mostly to themselves and their neighbors
zkf#7512: GCN?
Louis#0144: graph convolutional network
zkf#7512: Oh it’s similar
Louis#0144: idk I wrote a paper recently on how if you forgo the idea of full attention you can stomp a lot of modern language models with 1/100th the params
Louis#0144: im waiting for it to publish
Louis#0144: im excited
zkf#7512: That’s cool! 🙂
Louis#0144: I think I can talk about it since reviews were already due
Louis#0144: LOL
Louis#0144: but yeah you can engineer your attention vectors before hand and then use that to create a constrained optimization problem for the true attention vectors
Louis#0144: idk if that makes sense
Louis#0144: its some weird cone stuff though
zkf#7512: What kind of stomping are you achieving |
Louis#0144: Basically lets say you are doing HotPotQA and you have all the words in your document. Every word is a vertex. The edges between the vertex constraints your attention
Louis#0144: We beat roberta on a few QA tasks
Louis#0144: by a lot too
zkf#7512: Nice
zkf#7512: I think ETC is still leader on hotpot? Not sure
Louis#0144: are they?
Louis#0144: we got 70 EM
Louis#0144: Idk if we submitted yet
zkf#7512: Ah no they’re #4 now. God it moves fast
Louis#0144: no we didnt submit yet
Louis#0144: If you wanna think about it this way, what we do is suggest where the NN should attend to
zkf#7512: Clauses or sentences are obvious choices
Louis#0144: ah yes but youd be wrong for doing that
Louis#0144: its awful
Louis#0144: you need to regularize a ton
zkf#7512: I think it depends on how the attention gets passed through in the graph. Maybe for your case that’s true
zkf#7512: But I’m not really sure what you did haha
Louis#0144: I cant go into specifics since the paper isnt out yet
Louis#0144: but basically if you dont regularize the network simply doesnt converge (particularly the attention never gets to "explore" if that makes sense)
Louis#0144: so the attention outputs garbage |
zkf#7512: So there’s no preprint for your paper?
Louis#0144: No, not on arxiv
zkf#7512: Gotcha. Yeah I think ETC and Longformer dont have this same problem
zkf#7512: Partially because it achieves full attention but only via the global memory
zkf#7512: Not to say that your paper isn’t awesome btw
zkf#7512: It sounds that way
Louis#0144: lmao its ok, longformer is rly impressive
Louis#0144: but I have a thing for sparsity
Louis#0144: I put a rant in #alignment-general
Louis#0144: LOL
zkf#7512: Anyway, lifting weights from GPT-2 into ETC is on my list of things I wanted to play with
Deleted User#0000: @Louis that sounds really cool, when do you think your paper will be publicly viewable?
Louis#0144: As soon as I’m allowed
Louis#0144: Idk when
Louis#0144: I guess we should move to off topic
Deleted User#0000: So I have a researcher friend who is in touch with the authors of Longformer. Apparently the Longformer authors tried to train on a bigger corpus of text and ran into instability issues
Deleted User#0000: I believe it was just OpenWebtext
Deleted User#0000: Thought I should put it out there...
Deleted User#0000: There are a lot of caveats with these long range attention solutions, often not spoken of
bmk#1476: local attention should be quite a bit more stable |
bmk#1476: gpt-3 is just local + global attention iirc so it should be doable
Deleted User#0000: yea, i have a list of things in my mind that are 'safe bets', and local attention is in there
Deleted User#0000: local sliding windowed attention, the strided one can be used to supplement a bit
bmk#1476: in fact, I think gpt-3 overdid the global attention by having it every other layer
bmk#1476: we could probably get away with one global layer every 4 or 8 layers
bmk#1476: that's due for an experiment eventually, once we fix global attention
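One hypothetical way to express that experiment as a per-layer schedule (the names here are illustrative, not actual config keys from the repo):
```python
def attention_schedule(n_layers: int, global_every: int = 4):
    """Mostly-local layer schedule with one global-attention layer every `global_every` layers."""
    return ["global" if (i + 1) % global_every == 0 else "local" for i in range(n_layers)]

print(attention_schedule(12, global_every=4))
# ['local', 'local', 'local', 'global', 'local', 'local', 'local', 'global',
#  'local', 'local', 'local', 'global']
```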
Deleted User#0000: https://arxiv.org/abs/2007.03356
Deleted User#0000: the only paper with some data points on your questions
Deleted User#0000: Longformer kind of touches on this aspect too (how to balance local / global)
Deleted User#0000: im really hoping someone stumbles into some technique to make linear attention work well
Deleted User#0000: that would be the dream
bmk#1476: I'm personally not a big fan of linear attention tbh
bmk#1476: I feel like there's tons of information loss in there
Deleted User#0000: yeah, it's def not as great, but i'm using it in places where full attention is impossible
Deleted User#0000: and it's working great in those scenarios
bmk#1476: huh
Deleted User#0000: self-attention in every layer of a GAN, for instance
bmk#1476: considerations are slightly different for text, though
Deleted User#0000: yea, the auto-regressive flavor of linear attention stops working for lengths greater than 2048 for me
Deleted User#0000: but i need to see if it can be compensated with more heads, etc |
Deleted User#0000: not a 'safe bet' at the moment lol
Deleted User#0000: but something to keep an eye on
x.#1490: ok so i'm a big doofy ignoramus but i think that if i look at the code i can probably come up with something smart to say about it and, hell, even actually help
x.#1490: @Daj i want in
x.#1490: my github is excinera
bmk#1476: https://discordapp.com/channels/729741769192767510/729741769738158194/736374402366832681
bmk#1476: here's a reading list for you to begin with
x.#1490: one time i smoked a ton of weed and read half of a linear algebra textbook.
this other time i took a bunch of amphetamine and passed the first calculus course then i ran out of money and had to drop out of college lol
bmk#1476: hmm.
x.#1490: so i think the arxiv links are my zone here
bmk#1476: theres some good video series in that message there for a quick refresher
bmk#1476: you dont need to know a lot of calc or linalg
bmk#1476: just know what a matrix is and what a gradient is and you're set
bmk#1476: ideally watch the neural networks series at least
x.#1490: this is the current state of my mental illness https://cdn.discordapp.com/attachments/729741769738158194/736412791946346526/unknown.png
x.#1490: once i finish this what should i add to it
aquajet#7800: add a tab manager
x.#1490: lol
aquajet#7800: but the links above are a good place to start |
x.#1490: is it cheating if i ask gpt-3 to explain stuff to me if i get stuck
x.#1490: because ive been doing that so far and it seems to have increased my learning rate by like a factor of 3
aquajet#7800: really?
aquajet#7800: does it ever mess up the explanation?
Ken731#2990: Joined the server.
x.#1490: there have been a few times where i thought it was just completely full of crap
x.#1490: like i was trying to ask it how a conscious experience of dreaming was integrated, and it was saying "right, but if you were having a dream right now, and you decided it was a dream, you'd be doing that from the lateral prefrontal cortex"
x.#1490: which i thought was nonsensical because the PFC is usually not active during REM sleep
x.#1490: but then i read some more and it turned out that there is in fact LPFC activation during lucid dreaming
x.#1490: 🤔
lerby#4945: Joined the server.
collin#4688: Joined the server.
aquajet#7800: Hello @Ken731 @lerby @collin! Welcome to the AGI party house! See the google doc in the channel description for more info and please reach out if you have any questions
Collin#9007: Joined the server.
Louis#0144: Two Collins
Louis#0144: That’s good, I was worried they were both semicolons
LOSTEN#0657: Joined the server.
axolotl#6372: Joined the server.
Deleted User#0000: Joined the server.
EvgeniyZh#1980: Joined the server. |
aquajet#7800: Hi @Collin @LOSTEN @axolotl @Deleted User @EvgeniyZh! Welcome to the TPU Grand Prix! Check out the google doc in the channel description for more info
Ravna#1831: Joined the server.
Droid#1581: Joined the server.
Sid#2121: Hey @Ravna @Droid ! Welcome to GPT-Discord simulator! Where everyone is 100% simulated. Please check the channel description for more info on the project 🙂
shawwn#3694: morning o/
mojosmojo#4687: I love a good GPT joke early in the morning!
shawwn#3694: Why'd the CPU stop writing?
shawwn#3694: It ran out of INC
Sid#2121: > morning o/
@shawwn mornin shawwn!
topaz#9123: Joined the server.
sholto#2407: Joined the server.
SergioSV96#4200: Joined the server.
old#3101: How does your scaling efficiency look with mesh tf? I remember that Megatron had 75% for their 8B model but idk about anything larger
Daj#7482: Hey @topaz @sholto @SergioSV96 ! Welcome to the TPU Furnii! Check the channel topic for info and don't hestitate to ask questions!
Daj#7482: @old We are not at that efficiency yet, we're at like...4%? Still a lot of work to do
old#3101: What do you think would be a good scaling efficiency for the final model?
Daj#7482: The mesh TF team got around 50% efficiency on their model iirc
Sid#2121: our smaller models have around 25% 🙂 scaling up has... weird effects
Sid#2121: (smaller at this point meaning gpt2 size lol) |
shuki#3543: Joined the server.
Daj#7482: Hey @shuki ! Welcome to the Mesh Tensorflow Autopsy Division! Check the channel topic for info and don't hesitate to ask questions!
Teqnicolor#8109: If anybody here is a fan of Slatestarcodex or Lesswrong we do an online meetup here hosted by David Friedman every Saturday 1 P.M. PST. https://hubs.mozilla.com/q7PLxgT/pristine-miniature-congregation
Ken#8338: Joined the server.
gwern#1782: https://www.reddit.com/r/MachineLearning/comments/hxvts0/d_breaking_the_quadratic_attention_bottleneck_in/ (I'd send this to #links but don't have permissions)
Sid#2121: you do!
shawwn#3694: #off-topic probably. But lack of rhyming doesn’t seem to be due to BPEs
shawwn#3694: I don’t either
Sid#2121: !addresource #channelname url description
Isaac McHorse#2007: I'm sorry Dave, I'm afraid I can't do that
Sid#2121: everyone does
shawwn#3694: Ah
Sid#2121: also holy shit do we need this
Sid#2121: thanks @gwern
Sid#2121: i'll add to links - for future reference, anyone can do this
Sid#2121: !addresource #links https://www.reddit.com/r/MachineLearning/comments/hxvts0/d_breaking_the_quadratic_attention_bottleneck_in/
dnlbreen#5352: Joined the server.
Louis#0144: Can I get perm to post memes
Louis#0144: I have some dank ones
Sid#2121: !addresource #memes <yourmeme> |
Sid#2121: we keep the resources channel off limits for discussion but everyone can add to them like that
Sid#2121: Hey @dnlbreen ! Welcome to the CHONK-LM server! Where we make snorlax-inspired language models. Please check the channel description for a project overview 🙂
Louis#0144: !addresource #memes https://cdn.discordapp.com/attachments/729741769738158194/736748759698112573/image0.jpg
Sid#2121: we should make Isaac tag the user who posted as well
Sid#2121: will try to implement soon, hah
Sid#2121: but... more pressing... chonky... concerns
Louis#0144: > but... more pressing... chonky... concerns
@Sid
Sid#2121: o right i just meant i don't want to be messing with adding features to the bot when there's chonky language models to be optimized
jds#6511: Joined the server.
Ravna#1831: https://arxiv.org/abs/2006.15595
Ravna#1831: Seems interesting in results, and also seems convincing in its underlying arguments.
Daj#7482: This looks interesting @Ravna , thanks for the heads up! Yet another thing to add to the reading pile lol
Daj#7482: Also hey @jds ! Welcome to the Transformer Ancestral Environment! Check the channel description for more info and don't hesitate to ask questions!
jmmcd#2968: Joined the server.
Sid#2121: Hey @jmmcd ! Welcome to the AGI VIP waiting lounge! Please check the google doc in the channel description for some info on the project 🙂
kl1#8444: Joined the server.
Deleted User#0000: https://github.com/Thopliterce/transformer-arithmetic
Deleted User#0000: https://www.reddit.com/r/MachineLearning/comments/hy3h4x/r_performing_complex_arithmetic_with_transformer/
Deleted User#0000: really makes you think about adaptive computation time approaches |
Sid#2121: oh woah
Sid#2121: ```In particular, we train the GPT-2 model on a large number of generated expressions that express the process of computing multiplication step by step.``` we should add this to the pile
Sid#2121: yep, it's just regular gpt
Sid#2121: so cool
Deleted User#0000: yup, with one twist, they spell out the operations to get to the solution. just like how we were taught in third or fourth grade!
Deleted User#0000: ex. 39*96;39*6;9*6;=54;3*6;=18;5+18;58030112000=23;=234;39*9;9*9;=81;3*9;=27;8+27;87050213000=35;=351;23+351;310425070303000=374;=3744$
Deleted User#0000: 39 * 96
Deleted User#0000: i actually forget which grade we learned two digit by two digit multiplication lol
Sid#2121: w...what kind of shit were you learning in fourth grade, what's going on there?
Sid#2121: gotta take a look at the dataset
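A rough sketch of generating that kind of step-by-step multiplication trace; the exact serialization used in the linked repo differs, this only shows the general idea:
```python
def multiplication_trace(a: int, b: int) -> str:
    """Spell out a*b as partial products, roughly like long multiplication."""
    steps = [f"{a}*{b}"]
    total = 0
    for place, digit in enumerate(reversed(str(b))):
        partial = a * int(digit) * (10 ** place)
        steps.append(f"{a}*{digit}e{place}={partial}")
        total += partial
        steps.append(f"sum={total}")
    steps.append(f"answer={a * b}")
    return ";".join(steps)

print(multiplication_trace(39, 96))
# 39*96;39*6e0=234;sum=234;39*9e1=3510;sum=3744;answer=3744
```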
Deleted User#0000: as a TPU noob, does tensorflow mesh work with the TPUs available on Colab? i may try building a simple transformer for starters
Deleted User#0000: just to get more proficient with mesh
Sid#2121: @Deleted User yes! I've been testing lots on colab
Sid#2121: there are actually pre-built transformers in the mesh library
Deleted User#0000: awesome! thanks Sid 🙂
Sid#2121: it's meant to be easy to run, but it's all based around gin-configs and 100% undocumented
Sid#2121: which is why we literally found it easier to build our own gpt3 lmao
Deleted User#0000: yea i know, but just want to practice building one from the ground up
Sid#2121: oh nice
Deleted User#0000: best way to understand it |
Sid#2121: yeah, that's what i found as well
Sid#2121: what we did was started from this toy model https://github.com/tensorflow/mesh/blob/master/examples/toy_model_tpu.py
hahaha#7338: Joined the server.
Sid#2121: it gives you the basic structure - how you should use Dimensions, how to lay out all the tensors on the grid, etc
Sid#2121: and it's very easy to build from there 🙂
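For orientation, this is roughly the shape of graph construction in that toy example, written from memory; exact signatures should be checked against the linked file before relying on them:
```python
import mesh_tensorflow as mtf
import tensorflow.compat.v1 as tf

graph = mtf.Graph()
mesh = mtf.Mesh(graph, "my_mesh")

# Every tensor dimension gets a name; the mesh layout later maps named
# dimensions onto the processor grid (e.g. split "batch" across one mesh axis).
batch_dim = mtf.Dimension("batch", 32)
io_dim = mtf.Dimension("io", 128)
hidden_dim = mtf.Dimension("hidden", 512)

x_tf = tf.random_uniform([32, 128])
x = mtf.import_tf_tensor(mesh, x_tf, shape=mtf.Shape([batch_dim, io_dim]))
h = mtf.layers.dense(x, hidden_dim, activation=mtf.relu, name="hidden")
y = mtf.layers.dense(h, io_dim, name="out")
loss = mtf.reduce_mean(mtf.square(y - x))
# Lowering the mtf graph onto real devices (placement mesh impl or the TPU
# SIMD mesh impl) is the part the toy example walks through next.
```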
Sid#2121: let me know if you need any help and when you're proficient enough, I will literally sacrifice my firstborn for mesh linformer
Sid#2121: hey hey @hahaha ! welcome to the hall of TPU mirrors. Try not to get lost! (check out the channel description for some information on the project.)
Deleted User#0000: thanks Sid! you are very helpful 🙂
Sid#2121: no problem. I think technically there are some differences between colab and cloud tpus (colabs are v2-8s and ctpus are v3-8s), but I haven't had *too* much of a problem testing on colab. That being said, if it would be easier, we have a lot of v3-8s you could test on @Deleted User which would eliminate any potential problems.
Sid#2121: plus no timeouts :brr:
sinusoid_20#2010: Joined the server.
maraoz#9446: Joined the server.
Sid#2121: !addresource #data-sources https://datasets.quantumstat.com/ ~550 NLP datasets
Nate#4217: Joined the server.
a_machine_learning_researcher#2458: Joined the server.
bmk#1476: Welcome @Nate @a_machine_learning_researcher @sinusoid_20 @maraoz @hahaha to the TPU Science Enrichment Center! Please see the channel description for more info (and feel free to ask questions!)
Ravna#1831: I have a very stupid idea of mitigating the weakness of puns/rhymes: concatenating the "source" of the BPE to its one-hot coding. For example, we concatenate the encoding of "a" and "b" to the encoding of "ab". If we limit the dimension of the extra "source encoding" within English alphabet and digits, the extra cost is minimal. Compared to the one-hot encoding dimension of tens to hundreds of thousands, the extra dimension is only 2x(26x2+10)=124.
Ravna#1831: It probably won't make things worse at least, because it's just a little bit of extra information added to the input.
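A minimal numpy sketch of that extra "source encoding" for a BPE piece, following the character set and two-character budget from the message above (purely illustrative):
```python
import numpy as np
import string

CHARS = string.ascii_letters + string.digits   # 26*2 + 10 = 62 symbols
MAX_SRC = 2                                    # two source characters -> 2 * 62 = 124 dims

def source_encoding(piece: str) -> np.ndarray:
    """Concatenated one-hot encodings of the first MAX_SRC characters of a BPE piece."""
    vec = np.zeros(MAX_SRC * len(CHARS), dtype=np.float32)
    for slot, ch in enumerate(piece[:MAX_SRC]):
        if ch in CHARS:
            vec[slot * len(CHARS) + CHARS.index(ch)] = 1.0
    return vec

extra = source_encoding("ab")
print(extra.shape)  # (124,), appended alongside the usual token embedding
```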
Daj#7482: That's an interesting idea, I kinda like it, though it seems like you wouldn't get much benefit over just using char encoding to begin with?
Ravna#1831: Well my idea is based on the assumption that whatever makes char encoding supposedly inferior to BPE won't be solved very soon and we would be stuck with BPE for a while. |
Daj#7482: I don't know if people have experimented with large LM using char encoding
Daj#7482: At least I'm not aware of any large scale experiments
Sid#2121: > Well my idea is based on the assumption that whatever makes char encoding supposedly inferior to BPE won't be solved very soon and we would be stuck with BPE for a while.
@Ravna *kindiana has entered the chat*
kindiana#1016: also BPE is not restricted to 2 bytes, it could be significantly longer with the large vocab sizes used in most large models
kindiana#1016: I do think its an interesting idea though, maybe have each source byte have a learnable embedding and just sum all of them or something
kindiana#1016: do need some way to disambiguate ordering though
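A sketch of that variant, with a learnable per-byte embedding table and a crude positional scaling to disambiguate ordering; the shapes and the scaling scheme are made up for illustration:
```python
import tensorflow.compat.v1 as tf

VOCAB_BYTES, SRC_DIM, MAX_SRC = 256, 64, 8

byte_table = tf.get_variable("byte_embed", [VOCAB_BYTES, SRC_DIM])

def token_source_embedding(byte_ids):
    """byte_ids: [batch, MAX_SRC] int32 source bytes of each token (zero-padded)."""
    embeds = tf.nn.embedding_lookup(byte_table, byte_ids)        # [batch, MAX_SRC, SRC_DIM]
    # Scale each slot differently so "ab" and "ba" don't sum to the same vector.
    pos_scale = tf.reshape(tf.linspace(1.0, 0.5, MAX_SRC), [1, MAX_SRC, 1])
    return tf.reduce_sum(embeds * pos_scale, axis=1)             # [batch, SRC_DIM]
```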
kindiana#1016: would be fun to train a gpt2 sized model with byte level encodings though
Sid#2121: we can probably do that at some point down the line
bmk#1476: then we can just train it directly on gzipped text
bmk#1476: no decompressing
bmk#1476: we can just pipe all 3.5PB of CC into it
Daj#7482: Lets train directly on raw bytes
bmk#1476: no decompressing
Sid#2121: let's hook it up directly to our veins
Daj#7482: Of all data we can find
bmk#1476: no converting warc to html to text
Daj#7482: all data formats
bmk#1476: just pure bytes
Sid#2121: :empiricism: |