bmk#1476: just crawl the internet
Ravna#1831: i was just about to say that lol, gzipped context = free context enlargement
bmk#1476: for images, executables
bmk#1476: if it has bytes it goes in the pile
Daj#7482: "GPT Neo write Paperclip_Maximizer.exe"
bmk#1476: i cant wait for models that powerful lol
bmk#1476: unfortunately sampling will be a right pain
Daj#7482: > if it has bytes it goes in the pile
@bmk W-What data doesn't have bytes?
bmk#1476: ¯\_(ツ)_/¯
Daj#7482: _Forbidden data_
bmk#1476: wait.. what if we train a model.. *on a model checkpoint*
bmk#1476: dun dun dunn
Daj#7482: Lets make the model predict its own gradients
bmk#1476: thats synthetic gradients with extra steps
kindiana#1016: I actually dont think training on compressed data is the worst idea, if you whip up a LZ encoding which can be tokenized in a reasonable manner, and modify attention to follow references...
bmk#1476: the idea of training on well structured containers directly is painful
Daj#7482: LZ codes can have very long range dependencies...if I'm remembering my algorithms class correctly
bmk#1476: something about the idea is just cursed
zphang#7252: ELMo used a CNN over characters
SmartManoj#1319: Joined the server.
brianweet#9814: Joined the server.
Daj#7482: Hey @SmartManoj @brianweet ! Welcome to the Git Branch Jungle! See the channel topic for info and don't hesitate to ask questions!
Brian#0686: Joined the server.
Daj#7482: Hey @Brian ! Welcome to the Bug Farm! See the channel topic for info and don't hesitate to ask questions!
mirage#1049: Joined the server.
enter#5600: Joined the server.
shawwn#3694: Hi @mirage, @enter.
Commutative Conjecture#6969: I looked into batch learning more
Commutative Conjecture#6969: And I still don't understand why the means of deltas can't be computed online
Commutative Conjecture#6969: Like, why would batch learning take more space
kindiana#1016: (assuming relu for simplicity) to calculate gradients you need to know if the rectifier is activated for each input example to be able to backpropagate
kindiana#1016: and the easiest way to do that is to store all activations
Commutative Conjecture#6969: because you do the backward passes in a batch after having done all the forward passes?
kindiana#1016: yes
Commutative Conjecture#6969: stupid q
Commutative Conjecture#6969: why not do forward+backward for each example in a batch then
Commutative Conjecture#6969: no need to store activations
Commutative Conjecture#6969: and you just batch the updates
kindiana#1016: doing larger matrix multiplications is more efficient
kindiana#1016: just due to kernel launching overhead and cache effects
kindiana#1016: theoretically its the same number of flops if you do bs=1 * 100 or bs=100
kindiana#1016: but the second will run a lot faster in practice
Commutative Conjecture#6969: i thought the main benefit of batch was parallelization
Commutative Conjecture#6969: like, if you are online, you can't process k+1th example before kth because you need the update from k first
Commutative Conjecture#6969: but ok
Commutative Conjecture#6969: then, next stupid q
Commutative Conjecture#6969: is, how does processing more examples lead to a bigger matrix multiplication?
kindiana#1016: batching is still used even in things that have 0 recurrence, history or online components (like cnns for image classification)
kindiana#1016: if you have a 128d in->64d out linear layer, to process one example you multiply a 1x128 input matrix by the 128x64 weight matrix
kindiana#1016: and you just use a nx128 input matrix if you batch by n
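A minimal NumPy sketch of the shapes kindiana describes (the layer sizes and batch size are just illustrative): batching changes nothing about the per-example FLOPs, it only replaces many small matmuls with one large one.

```python
import numpy as np

W = np.random.randn(128, 64)     # weights of a 128-d in -> 64-d out linear layer

x = np.random.randn(1, 128)      # one example: a 1x128 input matrix
y = x @ W                        # 1x64 output

xs = np.random.randn(100, 128)   # batch of n=100 examples: an nx128 input matrix
ys = xs @ W                      # 100x64 output; same FLOPs per example,
                                 # but one big kernel launch instead of 100 small ones
```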
Commutative Conjecture#6969: @kindiana
the main part of the speedup from bigger matrices comes from better gpu/tpu use, or from fast matrix multiplication algorithm?
kindiana#1016: afaik fast matrix multiplication algorithms are not really used in gpus
Commutative Conjecture#6969: cool
Commutative Conjecture#6969: wubu tpus?
kindiana#1016: tpus are just a big 2D Systolic Array so no
kindiana#1016: actually cudnn might use some fast matmul algorithms for convolutions, but only in the filter size dimensions, not in the batch dimension (so batching still doesn't reduce flops or anything)
Commutative Conjecture#6969: ic
Commutative Conjecture#6969: thx
CheshireCat#1425: Joined the server.
cogscides#0387: Joined the server.
Sid#2121: Hey @cogscides ! Welcome to the loss valley! Please check the channel description for more info on the project, and reach out if you have any questions 🙂
shawwn#3694: poor @CheshireCat wasn't welcomed
Sid#2121: i didn't seem to be able to tag him :/
Sid#2121: i thought he left
DR.PROACT#2111: Joined the server.
Daj#7482: Hey @DR.PROACT ! Welcome to the Schroedinger Machine Learning Group! Check the channel description for info and don't hesitate to ask questions!
DR.PROACT#2111: I feel I will get left behind if I don't learn about gpt3
Daj#7482: It is a pretty hot topic atm yes
Sid#2121: out of interest @DR.PROACT , how did you find your way here? 🙂 welcome btw
DR.PROACT#2111: I'm an infectious diseases researcher
DR.PROACT#2111: Found this through a telegram group
Daj#7482: Hey @h4pZ ! Welcome to the ~~Illuminati~~ Totally Normal ML Discord Server! Check the channel description for info and don't hesitate to ask questions!
Daj#7482: A telegram group? That's interesting. We're just curious because we don't really do much outreach and somehow new people keep popping up, not that that's a bad thing haha
DR.PROACT#2111: https://cdn.discordapp.com/attachments/729741769738158194/737680227395698738/Screenshot_20200728-093707.png
Daj#7482: Huh, interesting how word of mouth spreads
Sid#2121: nice!
Sid#2121: lool we should probably change our name before people get too used to it
Sid#2121: i already think it's too late tbh
Daj#7482: yea but we need to haha
DR.PROACT#2111: I. Is there any way to get access to gpt 3 other than open ai's invite? II. Is anyone here interested in testing ideas related to research (I'm in medical academia).
Sid#2121: not the official gpt3, that's why we're trying to replicate it
Sid#2121: we'll be open sourcing once we're done, but we still have a lot of work ahead of us. we're a small group.
Daj#7482: We're pretty interested in research in general and a lot of people here are in academia
Sid#2121: yes! in fact one of our main contributors @bmk is i think mainly in this for the medical research at the end, and has been testing how to use gpt3 to crunch academic papers for accelerating medical research
DR.PROACT#2111: That's awesome 🤖🤖
DR.PROACT#2111: I'll see how I can help
DR.PROACT#2111: We have hundreds of data sets from rare diseases
Daj#7482: That's pretty cool, I'd love to see what we can do with that down the line. Currently we're very much still bottlenecked by dev time I'm afraid, still gonna take a while until we have a feature complete GPT3 model
DR.PROACT#2111: Sure. With patience I'm sure it'll be done in no time 🤩
Sid#2121: so much patience :nooo:
Sid#2121: > We have hundreds of data sets from rare diseases
@DR.PROACT are these public or private?
Sid#2121: if public, pls post in #data-sources ! if not, please keep in touch 🙂
DR.PROACT#2111: They are private. Will be public once papers are published. I'll give data sources a look
Sid#2121: when you say datasets from rare diseases, what kind of data would that be?
Louis#0144: Tbh
Louis#0144: That raises a lot of ethical concerns
Louis#0144: Training GPT on that
Louis#0144: I’m rly pro keeping GPT away from patient dada
Louis#0144: Data
archivus#7382: @DR.PROACT you an actual medical doctor?
DR.PROACT#2111: Yes. All data has patient informed consent for research use. The data is mainly symptom frequency of certain diseases. But the idea of having gpt-3 enabled would be maybe find associations not previously reported in the literature. Prime it with published studies on disease x, and see what happens.
Sid#2121: @DR.PROACT cool. with our model you could even finetune :sutt:
DR.PROACT#2111: If anyone is interested we could publish something. Im only starting to understand gpt-3
Sid#2121: we're definitely interested. Again though, it will probably be a while until we have gpt-3 size models
Sid#2121: how big are the datasets out of interest?
Daj#7482: > If anyone is interested we could publish something. Im only starting to understand gpt-3
@DR.PROACT I'd love to publish, @bmk was also very keen
bmk#1476: Yes that would be awesome
bmk#1476: Especially overlapping medical research
DR.PROACT#2111: Hmm, our data is small. The thing is that you would have to prime gpt-3 with previously published info (that's my layman's understanding of how it works) before it's able to understand whatever associations we ask it to find.
bmk#1476: If we can use gpt3 and friends to accelerate medical research that'd be awesome
bmk#1476: We can feed it lots of studies, that's part of the goal
Daj#7482: We still haven't quite figured out how to feed it tables though
Daj#7482: Which seems important for medicine
DR.PROACT#2111: Haha yeah it'll be a massive hit "Using gpt 3 to find previously unknown associations: is academia doomed or about to boom."
Daj#7482: If we had results that lived up to that headline it would be huge for sure
DR.PROACT#2111: > We still haven't quite figured out how to feed it tables though
@Daj yeah tables are important. But other results are usually written in the manuscript.
DR.PROACT#2111: So it wouldn't be too crazy to think it may be able to surprise us just with that data.
Daj#7482: I'm sure it could surprise us, it's probably just very non trivial to figure out what of its output is useful and what is hallucinated
archivus#7382: tbh I don't think GPT3 would be the right architecture to find associations
Daj#7482: GPT3 is magic until proven otherwise
Sid#2121: @archivus @bmk has already shown interesting results in this regard
archivus#7382: link?
bmk#1476: It was pretty good at analogies
Sid#2121: uhh he was just posting them here lmao
Sid#2121: @bmk you should definitely do a writeup on those
Sid#2121: they were really interesting
bmk#1476: Soon™
bmk#1476: I'm currently working on 2 different drafts lol
DR.PROACT#2111: > tbh I don't think GPT3 would be the right architecture to find associations
@archivus I guess it's probable. But my take is we didn't know roast beef tasted good before 🔥 was discovered.
Daj#7482: I'm _extremely_ bullish on GPT3 and related technologies
Daj#7482: I think we just opened up an entire garden of low hanging fruit
bmk#1476: > Machines are like organisms in that both can be more complex than their parts.
bmk#1476: > Volcanoes are like breasts in that they are both sources of lifegiving fluids.
Daj#7482: wtf yo
Daj#7482: haha
bmk#1476: > **A2E is like lipofuscin in that both are substances that accumulate in cells impeding cell function**, but whereas lipofuscin accumulates as an intracellular byproduct of normal cellular metabolism, A2E accumulates as an extracellular product of light exposure.
bmk#1476: Copy pasting some analogies
Daj#7482: yea just commenting on that volcano quote haha
star#5322: @Daj out of curiosity what's wrong with LibreAI as a name?
Daj#7482: It's already taken by a different organisation
star#5322: Oh that's uh, pretty bad yeah
DR.PROACT#2111: I. Feed it all the papers on a group of viruses. II. Say I have a query related to whether or not a certain virus has been reported to be carried by certain animals. I input the query and it tells me there isn't a single paper published with that data. III. I ask if another virus (of the same phylogenetic family, i.e. genetically similar) has published studies showing it can be carried by certain animals. Gpt-3 answers yes, and lists the animals reported to carry the latter virus (and the papers it was reported in). IV. A new hypothesis can be established. V. Man hours saved = an insane amount.
archivus#7382: but doesn't GPT-3 just generate stuff based on what it's seen?
archivus#7382: It's a markov chain on steroids
archivus#7382: I'm still having trouble understanding how it'll make new links when it's not designed to do so
Lerby#9125: Joined the server.
DR.PROACT#2111: It probably won't, it'll provide the tools for the researcher to make the links.
DR.PROACT#2111: But say you feed it all the data on published studies of disease x. And prompt it with , "what are the least common symptoms of disease x" it may be able to list symptoms not previously seen in the vastness of scientific literature
DR.PROACT#2111: Gpt-3 is able to make text prediction look like inductive reasoning.
DR.PROACT#2111: It's a very philosophical question
DR.PROACT#2111: Whether something is inductive reasoning if it looks like inductive reasoning but wasn't intended to be inductive reasoning.
archivus#7382: I want to be wrong so going to keep an eye on this project but I think you'd be better off using BERT or some variant in this case
DR.PROACT#2111: Sure, gpt-3 is just the new toy in town.
archivus#7382: I have access to GPT-3 and had it generate the rest of the paragraph taken from UpToDate
archivus#7382: https://cdn.discordapp.com/attachments/729741769738158194/737722220251709480/twitter_Ed_W6H1WAAALk92.jpg
archivus#7382: Here is the input (the highlighted part only)
archivus#7382: And here was the generated part https://cdn.discordapp.com/attachments/729741769738158194/737722322198462484/twitter_Ed_W6H3X0AEcrBy.jpg
archivus#7382: it generated it almost verbatim - meaning it is essentially rehashing what it's been trained on
archivus#7382: it could've gone a million different ways in regards to treating thyroid storm with propranolol etc
archivus#7382: instead it chose to generate the (See etc...) which I see as a failure
DR.PROACT#2111: Sure. Try, "the following are 20 multilayered hypothesis on why mayaro virus and dengue virus could be urbanized"
archivus#7382: https://cdn.discordapp.com/attachments/729741769738158194/737723195112554606/Screen_Shot_2020-07-28_at_8.28.30_PM.png
DR.PROACT#2111: Try ending the statement with a semicolon
DR.PROACT#2111: :
archivus#7382: it even added a citation lol https://cdn.discordapp.com/attachments/729741769738158194/737723408149643314/Screen_Shot_2020-07-28_at_8.29.14_PM.png
archivus#7382: https://cdn.discordapp.com/attachments/729741769738158194/737723776980222012/Screen_Shot_2020-07-28_at_8.30.22_PM.png
archivus#7382: It contradicts itself more than once
Sid#2121: what settings are you generating it at?
archivus#7382: It highlighted the 'on the other hand the more you are' as if I added it, but I didn't
Sid#2121: I haven't played with the api but iirc if you want 'correct' answers you should tune down the temperatures
archivus#7382: what setting do you want it on?
Sid#2121: i have no idea what settings are even available lol
Sid#2121: don't have access
DR.PROACT#2111: Dude..
archivus#7382: Temperature 0.1 https://cdn.discordapp.com/attachments/729741769738158194/737724284545531956/Screen_Shot_2020-07-28_at_8.32.42_PM.png
DR.PROACT#2111: I don't want to be biased
archivus#7382: close to the minimum
DR.PROACT#2111: But that statement just gave me a new idea
DR.PROACT#2111: On my research
archivus#7382: it discovered epidemiology in the most inefficient way possible
DR.PROACT#2111: There is no research on mayaro virus talking about the risk of urbanization
DR.PROACT#2111: From the entomological point of view
DR.PROACT#2111: On that virus
archivus#7382: yes that's why it didn't provide anything new on the subject
archivus#7382: anyway just my two cents - if you manage to prove me wrong i'd be very happy
DR.PROACT#2111: Yeah, I guess what I'm trying to say is the following. I. That statement made a relation between lifespan of the mosquito and likelihood to become urbanized.
Sid#2121: tbf @archivus gpt3 hasn't explicitly been trained on any medical data / scientific papers
DR.PROACT#2111: II. There isn't any published data associating the lifespan of a mosquito and the likelihood of becoming urbanized for those viruses.
archivus#7382: @Sid it did I'm pretty sure
Sid#2121: if you look at, e.g, @gwern 's poetry samples, you can see it's not just copying
DR.PROACT#2111: III. I'm adding that to my repertoire of hypotheses and I'm going to present it to my team.
DR.PROACT#2111: And fuck I m excited.
DR.PROACT#2111: I can't believe that just happened.
Sid#2121: @archivus these are the details of the training data https://cdn.discordapp.com/attachments/729741769738158194/737725472728809512/Screenshot_2020-07-28_at_19.37.24.png
archivus#7382: Last proof you can make GPT-3 say anything https://cdn.discordapp.com/attachments/729741769738158194/737725473995620475/Screen_Shot_2020-07-28_at_8.37.23_PM.png
Sid#2121: also @DR.PROACT that is really cool, i wish i knew more about the topic. please keep us updated!
archivus#7382: Wikipedia has loads of medical data lol
Sid#2121: yeah but not *papers*
DR.PROACT#2111: I'm talking about this statement btw
DR.PROACT#2111: https://cdn.discordapp.com/attachments/729741769738158194/737725851990229033/Screen_Shot_2020-07-28_at_8.30.22_PM-1.png
DR.PROACT#2111: It makes little sense. But used a basic principle that would require a PhD to think about.
DR.PROACT#2111: Verbatim it doesn't make much sense
DR.PROACT#2111: But the idea that some mosquitos may live longer than others
DR.PROACT#2111: Depending on whether they've been infected by dengue or mayaro virus
DR.PROACT#2111: Is fucking new
DR.PROACT#2111: And fuck
DR.PROACT#2111: I swear to God I'm trying to be as objective as possible
DR.PROACT#2111: That's what I meant
DR.PROACT#2111: By associations
DR.PROACT#2111: https://cdn.discordapp.com/attachments/729741769738158194/737726945860517898/JPEG_20200728_124327.jpg
jhsu#8763: https://huggingface.co/allenai/biomed_roberta_base has roberta trained on some biomed data
DR.PROACT#2111: I've been in the field for 3 years
archivus#7382: @DR.PROACT https://cdn.discordapp.com/attachments/729741769738158194/737727207560183939/Screen_Shot_2020-07-28_at_8.44.15_PM.png
bmk#1476: *why train on some of the biomed data when you can train on ALL the biomed data*
bmk#1476: (/s)
Deleted User#0000: i think roberta is not auto-regressive
DR.PROACT#2111: Not one person has suggested mosquito lifespan as a limiting important factor for outbreaks
bmk#1476: https://twitter.com/gwern/status/1284276584573214721
Deleted User#0000: @DR.PROACT we are still at the stage where we are trying to get attention models to output correct answers
bmk#1476: **"Sampling can prove the presence of knowledge but not the absence."**
Deleted User#0000: if you look at natural qa benchmarks
DR.PROACT#2111: Not for those viruses. And thing is, maybe someone specialized in entomology could provide that info, but that's my point.
Deleted User#0000: @DR.PROACT perhaps you could try one of the smaller models https://transformer.huggingface.co/
Deleted User#0000: it won't be as impressive as GPT-3, but may give you a gist
DR.PROACT#2111: I have, ty
jhsu#8763: yeah roberta isn't autoregressive so no generation
archivus#7382: when the relationship has never been investigated, GPT3 spits straight facts 🔥 https://cdn.discordapp.com/attachments/729741769738158194/737728478241095700/Screen_Shot_2020-07-28_at_8.48.26_PM.png
Deleted User#0000: https://cdn.discordapp.com/attachments/729741769738158194/737728481282228294/Screenshot_from_2020-07-28_10-48-44.png
Deleted User#0000: @DR.PROACT gives you an idea where GPT-3 stands
Deleted User#0000: https://arxiv.org/pdf/2007.01282.pdf
bmk#1476: "Sampling can prove the presence of knowledge but not the absence."
bmk#1476: You need to let gpt3 know what you want from it
Deleted User#0000: i think OA probably accounted for this when sampling for the benchmarks
archivus#7382: @bmk mind giving me a prompt that proves his point? I'm not trolling or anything I promise. I'm genuinely confused
bmk#1476: https://twitter.com/nicklovescode/status/1284050958977130497
bmk#1476: "You need to tell it what the AI is and is not capable. " i think this is the key
archivus#7382: let's try it out
archivus#7382: anyone want to give me QA I should add as a prompt?
Deleted User#0000: @bmk yea, that was a surprising finding
archivus#7382: I rest my case https://cdn.discordapp.com/attachments/729741769738158194/737729510853705809/Screen_Shot_2020-07-28_at_8.53.31_PM.png
DR.PROACT#2111: Try priming it with "the following is a conversation between an AI and an infectious diseases researcher. The AI is explaining 5 novel reasons why mayaro virus and dengue virus may share the same vector and lists 5 impacts on public health."
bmk#1476: not even AI because it might try to intentionally dumb itself down to what people think AI is like
bmk#1476: just maybe "the following is a conversation between a world renowned expert in infectious diseases and a researcher"
Deleted User#0000: https://cdn.discordapp.com/attachments/729741769738158194/737732092716122163/Screenshot_from_2020-07-28_11-03-41.png
archivus#7382: https://cdn.discordapp.com/attachments/729741769738158194/737732485294850118/Screen_Shot_2020-07-28_at_9.05.22_PM.png
Deleted User#0000: https://cdn.discordapp.com/attachments/729741769738158194/737732564382384178/Screenshot_from_2020-07-28_11-05-38.png
archivus#7382: it doesn't really argue for it and just states they have the same vector
archivus#7382: that's one of its arguments
DR.PROACT#2111: Fuck me
DR.PROACT#2111: https://cdn.discordapp.com/attachments/729741769738158194/737733424701374625/Screenshot_from_2020-07-28_11-05-38.png
DR.PROACT#2111: Again, that last paragraph
DR.PROACT#2111: It's something that's widely known
archivus#7382: so it's not a novel reason
archivus#7382: so circling back to how GPT-3 will not generate anything new
DR.PROACT#2111: Not novel per se. But novel within the context of mayaro virus.
DR.PROACT#2111: It's as if you had a specialist on your side.
archivus#7382: Fuck https://cdn.discordapp.com/attachments/729741769738158194/737734006321446953/Screen_Shot_2020-07-28_at_9.11.24_PM.png
DR.PROACT#2111: Lol
DR.PROACT#2111: It's a cognitive conundrum.
DR.PROACT#2111: It's not making connections on purpose
archivus#7382: Bit of a NSFW but yeah it will say anything https://cdn.discordapp.com/attachments/729741769738158194/737734319153348618/SPOILER_Screen_Shot_2020-07-28_at_9.12.26_PM.png
DR.PROACT#2111: You have to prime it is my take
DR.PROACT#2111: If I prime it with scientific lingo
DR.PROACT#2111: And ask complex questions
DR.PROACT#2111: I'm sure it will provide statements not thought of before
DR.PROACT#2111: I can see why you're having a hard time seeing it
DR.PROACT#2111: It's almost like having a new way to think.
Deleted User#0000: @DR.PROACT you should add your desired dataset to the-pile
DR.PROACT#2111: That uses a lot of data.
Deleted User#0000: maybe this group can train it on their attention network
DR.PROACT#2111: And that could be wrong
Deleted User#0000: nobody knows what may emerge at this point
Deleted User#0000: at scale
DR.PROACT#2111: Cool
archivus#7382: Last one I promise - sorry I polluted general ❤️
archivus#7382: https://cdn.discordapp.com/attachments/729741769738158194/737735254114173089/Screen_Shot_2020-07-28_at_9.16.07_PM.png
bmk#1476: @DR.PROACT what is the dataset youre thinking of?
Deleted User#0000: @archivus need to be able to fine-tune GPT-3 on all of UpToDate
archivus#7382: It's behind a paywall but I could technically scrape it for you
archivus#7382: I don't want to get in legal trouble lol
bmk#1476: yes
Deleted User#0000: @archivus i hear with a VPN, free access in Spain or Japan 🙂
Deleted User#0000: i plan to scrape it as well
bmk#1476: that would be awesome
archivus#7382: @Deleted User that is beautiful
Deleted User#0000: just regurgitating UpToDate makes you smarter than most doctors
bmk#1476: it's probably not even the most illegal thing we plan on doing
Deleted User#0000: facts
archivus#7382: y'all have a twitter i can follow you on?
DR.PROACT#2111: There's dynamed
DR.PROACT#2111: That's similar to up-to-date
DR.PROACT#2111: You could also download up-to-date offline
DR.PROACT#2111: There are many torrents
archivus#7382: would they be up to date though? /s
DR.PROACT#2111: Just do up-to-date and dynamed offline
Deleted User#0000: @DR.PROACT nice! didn't know about the torrents, thanks for the tip
DR.PROACT#2111: > @DR.PROACT nice! didn't know about the torrents, thanks for the tip
@Deleted User sure np
DR.PROACT#2111: I'm a Dr so let me know if I can help in that regard.
DR.PROACT#2111: Also if anyone has any access to gpt-3 I would greatly appreciate tinkering with it (and setting the record for most published papers in 24 hours)
Deleted User#0000: @DR.PROACT PM me
bmk#1476: is it easy to get dynamed and uptodate into text format?
bmk#1476: and how big are they?
bmk#1476: @DR.PROACT
DR.PROACT#2111: Not that big
DR.PROACT#2111: Let me check
bmk#1476: darn
bmk#1476: we're looking for really big stuff
DR.PROACT#2111: 4gb
DR.PROACT#2111: The 2018 version
DR.PROACT#2111: Youd have to look for a workable link though
Deleted User#0000: oh man, i just had a vision of a future where you could give a long context GPT raw studies, guidelines, and out comes the up-to-date-esque article
Deleted User#0000: UpToDate hires an army of medical doctors to pen their articles currently
Deleted User#0000: it's going to be wild
DR.PROACT#2111: From the demos I've seen
DR.PROACT#2111: I can imagine so many applications
DR.PROACT#2111: And I'm not even in tech
DR.PROACT#2111: Pretty sure someone is already way ahead
Deleted User#0000: yea, it's still early. there's a lot of limitations to the tech still
Deleted User#0000: but the promise is real
Deleted User#0000: nice chatting! back to work
DR.PROACT#2111: 🖖
Daj#7482: So, seeing as the new name voting got us some great name suggestions but not too much voting, we've decided on our new name:
**EleutherAI**
"Eleutheria" is ancient greek for "liberty", so I think the name is very appropriate and has a cool ring to it, I hope you agree. Thanks to @goolulusaurs for suggesting the name! We're in the process of setting up social media/website/etc but it's not a priority atm so don't expect anything too soon, though we might refactor the github a bit.
P.S. Since #deleted-channel is no longer needed I will be deleting it in the next few days. If there's anything from there you want to archive, do so now.
shawwn#3694: RIP GapingAI
zphang#7252: I might need a pronunciation guide
Daj#7482: eh-loo-ther-ay
Daj#7482: I think that's what we agreed on
Daj#7482: or eh-loo-ther-A-I
zphang#7252: "Luther" for short
shawwn#3694: who decided it was a good idea...? well, if you guys like it. *shrug*
Daj#7482: It was the most voted on and the main contributors were unanimous
Daj#7482: Sure just 3 to 2 votes but not my fault people didn't vote lol
Sid#2121: I vote we have a mobile name - keep changing our name every 2-3 weeks, just to keep people on their toes
shawwn#3694: ^
aquajet#7800: or dont have a name at all
shawwn#3694: also MobileAI
Daj#7482: We change discord servers every time too
aquajet#7800: "The Project"
Sid#2121: we'll get to gapingAI eventually
Daj#7482: No, we change chat service every time
zphang#7252: "It"
Daj#7482: Different PGP key every time
Daj#7482: different onion address
Sid#2121: access code that changes on the hour, every hour
Daj#7482: That wouldn't be a pain at all lol
Daj#7482: We can't even get the opsec on a single VM right
Daj#7482: by "we" I mean "I"
shawwn#3694: to put it another way: you like word of mouth? Eleuther can't even be pronounced. But I've been wrong about everything so far.
Sid#2121: next week: CHONK.AI
Daj#7482: tbh I don't care about publicity _at all_
Sid#2121: i am willing to bet that not one single person has said 'libreAI' with their actual mouth
Daj#7482: And if anything, the name is mysterious and intriguing lol
shawwn#3694: lol fair enough
shawwn#3694: you'd be wrong SId
Sid#2121: ok, exaggeration, but obviously most 'word of mouth' spreading about us has been done through typing
Daj#7482: Also, EleutherAI sounds cyberpunk af
zphang#7252: we're slipping back to Bikeshedding.AI
Isaac McHorse#2007: ALL PLAY AND NO WORK MEANS SLOB WHO ONLY VALUES BIKESHEDDING, NO RESPECT FOR HONOR OR PRAYER!
Daj#7482: Jesus christ Isaac
Sid#2121: for every commit i make, I get to do one hour of bikeshedding
Isaac McHorse#2007: OH F*$K! OH HELL NO! OH HELL NO! STOP IT!
shawwn#3694: not ... really ... That's a bit like saying a french horn sounds cyberpunk
Daj#7482: It sounds vaguely foreign and future-y to me
Daj#7482: ¯\_(ツ)_/¯
Daj#7482: back to work! Or rather bed, I'm up too late again
shawwn#3694: LutherAI is a cool shorthand
Daj#7482: Yea I like the sound of it
zitterbewegung#4846: i have netai
zitterbewegung#4846: i think i did
zitterbewegung#4846: netai.pw
zitterbewegung#4846: maybe ill use netai for my podcast
zitterbewegung#4846: with me and my deep learning project
AI_WAIFU#2844: Joined the server.
archivus#7382: http://lacker.io/ai/2020/07/06/giving-gpt-3-a-turing-test.html
archivus#7382: Relevant to our discussion yesterday
Commutative Conjecture#6969: stupid q
Commutative Conjecture#6969: what do you call neural networks where there are connections across layers? (and how do you train them?)
aquajet#7800: like skip connections?
bmk#1476: Weight sharing?
AI_WAIFU#2844: dense nets?
Commutative Conjecture#6969: I'm checking all of those and answering you, thanks 🙂
bmk#1476: > skip connections
Don't you mean highway nets? (Schmidhuber, 2015) /s
Commutative Conjecture#6969: (What's the joke wrt "highway nets"?)
bmk#1476: The joke is around schmidhuber
Sid#2121: he invented everything 20 years ago
bmk#1476: It's a running joke that everything has already been invented by him
Sid#2121: also, he wants everyone to know
bmk#1476: Skip connections are just a special case of highway networks
Commutative Conjecture#6969: ic
Sid#2121: @Commutative Conjecture if you're interested in peeping the minor ML drama lol https://www.youtube.com/watch?v=HGYYEUSm-0Q&t=5786s skip to 1:03:00
bmk#1476: i feel somewhat sympathetic for schmidhuber tbh
Sid#2121: afaict he's still acknowledged as quite important lol
bmk#1476: like, getting shafted must suck
Sid#2121: > like, getting shafted must suck
@bmk 😉
Sid#2121: *citation needed*
bmk#1476: ₙₒ
Sid#2121: goodfellow handled that really badly
Sid#2121: and yeah they do seem like similar things but idk
bmk#1476: anyways
bmk#1476: *bikeshedding*
Isaac McHorse#2007: I'm going to have to go ahead and say that you are going to be working hard, when you stop this bikeshedding.
Commutative Conjecture#6969: @Sid tbh, it seems vfun, but learning about training procedures is even more fun
bmk#1476: do we have oa-encoded wt lying around?
Sid#2121: it is... one of these lol
Sid#2121: ```gs://neo-datasets/bundestag/ |
gs://neo-datasets/openwebtext-documents/
gs://neo-datasets/openwebtext-fixed/
gs://neo-datasets/openwebtext-new/
gs://neo-datasets/openwebtext/```
bmk#1476: haha
Sid#2121: i guess the original?
Sid#2121: or fixed??
Sid#2121: ??
Sid#2121: lol
bmk#1476: isnt that the one with 1024 instead of 1025
Sid#2121: idk, best wait till daj's around
bmk#1476: if i had to bet id bet on fixed
bmk#1476: simple sanity check: run with smaller vocab, see if things catch on fire
Nerdimite#3840: Joined the server.
Nerdimite#3840: Hello! Is this the group trying to replicate gpt3?
Commutative Conjecture#6969: yes
Commutative Conjecture#6969: Welcome to the... Super Hyper GPT3 replicating group?
Other people are better at welcoming
Sid#2121: lmao ok i have one
Sid#2121: Hey @Nerdimite ! Welcome to the group composed entirely of one GPT-10 discord bot trying to ouroboros itself into existence! Check the channel description for more info and reach out to any blue names with any questions |
bmk#1476: o
bmk#1476: ^ u dropped this
bmk#1476: wait
bmk#1476: both spellings are acceptable?
bmk#1476: huh
bmk#1476: nvm
Sid#2121: i was also gonna go for the o then double checked my spelling and saw no o
Sid#2121: i prefer the o
Sid#2121: adding it in
Commutative Conjecture#6969: @Sid / @bmk
https://en.wikipedia.org/wiki/Residual_neural_network
I don't understand how the training works
Is the update just the sum of previous layer + skip connection?
If not, does one of you have a good link? I always have to go through 3 shitty medium articles
bmk#1476: https://arxiv.org/abs/1512.03385
bmk#1476: the best resource is the original paper
Ravi Hunnolli#2887: Joined the server.
bmk#1476: >50k cites
Commutative Conjecture#6969: ~~but it has two columns~~
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/737930777408307200/unknown.png |
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/737930826389520405/unknown.png
bmk#1476: these two diagrams capture the entire idea
bmk#1476: for more details see the paper
bmk#1476: you literally just take the state and add it back to itself again
bmk#1476: after a few layers
Commutative Conjecture#6969: yeah, that's obv
Commutative Conjecture#6969: what's not obv to me is the training
bmk#1476: ?
bmk#1476: you just do normal backprop, no?
Commutative Conjecture#6969: @bmk
not sure what "normal" means here
backprop I know has a single flow for errors
do you just add both flows?
Nerdimite#3840: > @Sid / @bmk
> https://en.wikipedia.org/wiki/Residual_neural_network
> I don't understand how the training works
> Is the update just the sum of previous layer + skip connection?
> If not, does one of you have a good link? I always have to go through 3 shitty medium articles
@Commutative Conjecture https://www.youtube.com/watch?v=wDp_Ji3BTY4 this is a recording of a webinar on ResNets maybe this can help
Commutative Conjecture#6969: @Nerdimite |
Watching it
(I always feel special when watching an unlisted video with 10 views)
bmk#1476: clicking through it very quick and that's a lot of other content too
Nerdimite#3840: > Hey @Nerdimite ! Welcome to the group composed entirely of one GPT-10 discord bot trying to ouroboros itself into existence! Check the channel description for more info and reach out to any blue names with any questions
@Sid Seems pretty cool to see what is being done here. I am not very experienced with data cleaning or tensorflow mesh. I am good at pytorch and i have some spare compute credits in aws and gcp though not very significant maybe $100 worth credits in each platform. How can i contribute?
AI_WAIFU#2844: >do you just add both flows
I mean, if you do the reverse mode automatic differentiation manually, then yes you will be adding the "error flows"/gradients going backwards. But if you just use whatever backprop/rmad library it will do all that for you automatically.
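To make the "adding the error flows" concrete, here is a hand-unrolled reverse-mode pass through a single residual block y = relu(Wx) + x — a toy sketch for intuition only; an autodiff library does this bookkeeping automatically. It also shows why the forward activations get stored (the ReLU mask needs them).

```python
import numpy as np

W = np.random.randn(8, 8)

def forward(x):
    h = W @ x
    a = np.maximum(h, 0)       # ReLU; we must remember which units fired
    y = a + x                  # skip connection
    return y, (x, h)           # cache activations for the backward pass

def backward(dy, cache):
    x, h = cache
    dh = dy * (h > 0)          # through the ReLU, using the stored activation
    dx_branch = W.T @ dh       # gradient flowing through the weight layer
    dx_skip = dy               # gradient flowing through the identity skip
    return dx_branch + dx_skip # the two error flows are simply added
```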
AI_WAIFU#2844: As a side note, rmad sounds way cooler than backpropagation
bmk#1476: *"r u mad bro"*
bmk#1476: *"reverse ur autodiff bro"*
Commutative Conjecture#6969: @AI_WAIFU
I want to try things, that's why I want to understand things manually
AI_WAIFU#2844: Yeah, that makes sense.
Commutative Conjecture#6969: hey
Commutative Conjecture#6969: what is the hypothesis that only a small subset of a network is useful called?
guac#4716: lottery ticket
AI_WAIFU#2844: Lottery ticket hypothesis?
Commutative Conjecture#6969: thx
Commutative Conjecture#6969: i thought it had a wiki entry
Commutative Conjecture#6969: @AI_WAIFU rmad=autograd? |
AI_WAIFU#2844: reverse mode automatic differentiation
AI_WAIFU#2844: basically
AI_WAIFU#2844: so yes
AI_WAIFU#2844: the algorithm was reinvented like a dozen times so it got a bunch of different names
Commutative Conjecture#6969: ic
Commutative Conjecture#6969: thx 🙂
adalbertobrant#7154: Joined the server.
Commutative Conjecture#6969: yasq (Yet Another Stupid Question)
self-attention is not convolutional
does it mean gpt-k can't learn translation-invariant patterns?
kindiana#1016: self attention is a set processing operation, so it doesn't even know what translation is without additional information
Commutative Conjecture#6969: yeah
Commutative Conjecture#6969: so, the thing that i'm trying to understand is
Commutative Conjecture#6969: `gpt-k` maps a given sequence of tokens to a diff thing depending on its position
and if it maps it to the same thing, it has to learn so for each position
is that so?
kindiana#1016: the usual way of positional encoding adds sin/cos waves of different frequencies to different dimensions of the input
kindiana#1016: https://cdn.discordapp.com/attachments/729741769738158194/737943904518799410/positional_encoding.png
Commutative Conjecture#6969: hmmmmmmm
kindiana#1016: so theoretically GPT should learn not to put any semantic encodings in the first couple dimensions and just use that for positional encoding |
kindiana#1016: and you can attend to different relative positions with a linear combination of different frequencies
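For reference, a short sketch of the standard sinusoidal scheme from the Transformer paper (this is what the plot above shows; the 10000 base is the usual convention):

```python
import numpy as np

def positional_encoding(seq_len, d_model):
    pos = np.arange(seq_len)[:, None]          # (seq_len, 1)
    i = np.arange(d_model // 2)[None, :]       # (1, d_model/2)
    freq = 1.0 / 10000 ** (2 * i / d_model)    # one frequency per dimension pair
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(pos * freq)           # even dims: sine
    pe[:, 1::2] = np.cos(pos * freq)           # odd dims: cosine
    return pe                                  # gets added to the token embeddings

pe = positional_encoding(1024, 768)
```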
Sid#2121: Hey @adalbertobrant ! Welcome to the AI-lab-that-cannot-be-named! Check the channel description for more info 🙂
Commutative Conjecture#6969: @kindiana
I don't understand positional encoding
Commutative Conjecture#6969: why not just use the actual position
zphang#7252: I feel like most recent models use learned position embeddings?
Sid#2121: out of interest @adalbertobrant , how did you find your way here? 🙂 curious to hear where word of mouth is spreading
kindiana#1016: yeah learned position embeddings/relative position encoding/split position+content attention are some of the alternatives, but I dont think they make too big of a difference
kindiana#1016: how would you add actual position @Commutative Conjecture
AI_WAIFU#2844: @Sid i got here today from https://twitter.com/theshawwn/status/1286788440286285824
Sid#2121: > @Sid i got here today from https://twitter.com/theshawwn/status/1286788440286285824
@AI_WAIFU ah cool, yeah i guess that's how most people are finding us, but I think the link has been posted around various other slacks / telegram channels too
Sid#2121: also @AI_WAIFU i am super surprised you're not a member of TPU podcast discord lol
Sid#2121: they do a lot of AI waifu making
AI_WAIFU#2844: the wat?
Sid#2121: check #communities 🙂
Commutative Conjecture#6969: (we should disable previews)
Commutative Conjecture#6969: > how would you add actual position @Commutative Conjecture
@kindiana add a dimension with 0 , 1 , 2 , 3 ...
AI_WAIFU#2844: Oh, thank you for that. |
Commutative Conjecture#6969: might be something vdumb, but where?
kindiana#1016: neural networks don't like inputs that are not around mean 0 and std 1
Commutative Conjecture#6969: not sure why it matters, you can normalize it
kindiana#1016: it also doesn't play nice with dot product attention
kindiana#1016: so there is no way to attend to say 1 position to the left efficiently
Commutative Conjecture#6969: > it also doesn't play nice with dot product attention
do you have details on this?
Sid#2121: > @kindiana add a dimension with 0 , 1 , 2 , 3 ...
@Commutative Conjecture I'm pretty sure this is how the mesh-tensorflow library does it actually? But i'm not *that* sure, their code is very hard to decipher
Commutative Conjecture#6969: > so there is no way to attend to say 1 position to the left efficiently
how is there a way to do so with (co)sines of diff freqs?
Commutative Conjecture#6969: also, there is
Commutative Conjecture#6969: to normalize, you just need to subtract the mean ((n+1)/2 for positions 1..n, since their sum is n(n+1)/2, where `n` is the size of the thing) and divide by stddev iirc
Commutative Conjecture#6969: and if you do so, then moving to the left is subtracting a constant step
Commutative Conjecture#6969: @Sid
Where is that code?
Sid#2121: lol. *everywhere*
Sid#2121: hang on
Sid#2121: ctrl-f position and follow the many threads lol https://github.com/tensorflow/mesh/blob/master/mesh_tensorflow/transformer/utils.py
Sid#2121: the model that's gpt-like is "unitransformer" |
Sid#2121: https://github.com/tensorflow/mesh/blob/master/mesh_tensorflow/transformer/utils.py#L610 this is the thread you'll want to follow
Sid#2121: https://cdn.discordapp.com/attachments/729741769738158194/737948719277801582/Screenshot_2020-07-29_at_10.24.36.png
Commutative Conjecture#6969: erg
kindiana#1016: looks like tfm uses regular sinusoids https://github.com/tensorflow/mesh/blob/d353aa9ff6644048bfaec737b6fb5fada03085e2/mesh_tensorflow/transformer/transformer.py#L825
kindiana#1016: @Commutative Conjecture with the way that dot product attention works, you can only specify how much and in which direction you care about a particular dimension, and you can't say anything like "I want it to be in this particular range" without additional dimensions
kindiana#1016: by having a sinusoidal encoding, the attention can say I care about dimension 10, which might be a period 5 sinusoid or something
kindiana#1016: and it will activate periodically every 5 tokens
Commutative Conjecture#6969: @kindiana Wooooo
kindiana#1016: the use of sins and cosines of different periods means you can combine them linearly (i.e. efficiently) to produce any periodic locational attention pattern by fourier theorem
Sid#2121: > looks like tfm uses regular sinusoids
@kindiana I think there's an option to use either tbf
Sid#2121: there's like twenty options for every function :zucc:
Daj#7482: > ```gs://neo-datasets/bundestag/
> gs://neo-datasets/openwebtext-documents/
> gs://neo-datasets/openwebtext-fixed/
> gs://neo-datasets/openwebtext-new/
> gs://neo-datasets/openwebtext/```
@Sid `gs://neo-datasets/openwebtext-documents/` is what you're looking for (and if I'm wrong, the program should crash almost immediately haha)
Sid#2121: cool. i'm gonna try and finish that reply to colin then start up a gpt2 run. |
DR.PROACT#2111: Morning guys
DR.PROACT#2111: Humanity is rooting for something productive to come out of here
DR.PROACT#2111: 👽
bmk#1476: Moin
TravellingSalesman#5222: What do you guys think about this recent post in LW about AI overhang https://www.lesswrong.com/posts/N6vZEnCn6A95Xn39p/are-we-in-an-ai-overhang
> An overhang is when you have had the ability to build transformative AI for quite some time, but you haven't because no-one's realised it's possible. Then someone does and surprise! It's a lot more capable than everyone expected.
> I am worried we're in an overhang right now. I think we right now have the ability to build an orders-of-magnitude more powerful system than we already have, and I think GPT-3 is the trigger for 100x-larger projects at Google and Facebook and the like, with timelines measured in months.
Daj#7482: I posted my condensed thoughts in #alignment-general
Daj#7482: I basically agree with most things said
Daj#7482: One of my main reasons for doing this project is to collect first hand data on these questions of scaling
Meta#8848: Joined the server.
Daj#7482: Hey @Meta ! Welcome to the Lottery Hypothesis Betting Booth! Check the channel topic for info and don't hesitate to ask questions!
Meta#8848: Hey Daj! Will check it out
Kacper Wikieł#2358: Joined the server.
door#1337: Joined the server.
Sid#2121: Hey @Kacper Wikieł , @door ! Welcome in... to, the **scary door** AI Lab ™️. Check the channel description or some info on the project, and please ask if you have any questions.
Kacper Wikieł#2358: How to contribute? I'm new to ML / AI and I know some python. Do you have some kind of entry-level tasks to complete?
Sid#2121: I guess it depends on your definition of entry level. Some data collection / cleaning tasks might be up your street! |
Sid#2121: one of our big TODOs on data is getting clean pdf to txt extraction
Sid#2121: we're also trying to figure out ways to get a better, cleaner dataset from CC data than OSCAR. But a lot of this stuff isn't so entry level.
bmk#1476: Another thing that needs attention is working out a good html->text pipeline
bmk#1476: Tldr some converters suck for non English, some converters suck at forums, etc. We want to combine the best of each converter for all languages and all websites
arfa#0882: I have 7.5GB of Twitch chat logs and, uhh... only 300MB of Discord chat logs. Dunno if those would be of interest as datasets.
Sid#2121: 7.5GB sounds just about useful - compressed or uncompressed? how did you get them? can we get moar?
bmk#1476: Discord logs is kinda yarr harr because tos
bmk#1476: But aside from that it seems really useful
bmk#1476: These probably aren't in CC so it would be useful
shawwn#3694: amusingly, 7.5GB of twitch chat logs seems like it would poison the model
shawwn#3694: the model won't be able to unlearn the pogchamps
arfa#0882: :POG: :Kappa:
Sid#2121: lmao true
Sid#2121: i don't go on twitch much but when i do it's mostly https://cdn.discordapp.com/emojis/563745335986618378.png?v=1 scrolling by at 1000mph
arfa#0882: :jebaited: :POGGERS: :pepeHANDS:
bmk#1476: I think that's not a bad thing
Sid#2121: I am trying to recall any time i have ever seen an actual conversation occur on twitch tho lol
Sid#2121: it's mostly people talking to the streamer (who won't be in the text)
bmk#1476: More data better
bmk#1476: And this is data that wouldn't be found elsewhere in the set |
bmk#1476: Can you post a short sample of the data as a gist @arfa
Alm#9130: the model would have to predict what happened in the video in order to predict the next reaction haha
Sid#2121: lmao
Sid#2121: https://cdn.discordapp.com/attachments/729741769738158194/738169751305584761/Screenshot_2020-07-30_at_01.02.46.png
Sid#2121: wat
Sid#2121: leaving twitch now
aquajet#7800: Eloquent
arfa#0882: Yeah basically just garbage https://gist.github.com/arfafax/735f0869f041b93706b6231622a5773c
arfa#0882: But can I introduce you to the best Twitch channel, #ukparliament? https://gist.github.com/arfafax/dbf4d72716585e7303864c5874441159
Sid#2121: seems like an accurate representation of uk politics
arfa#0882: I'm not sure who thought it would be a good idea to stream parliament sessions on Twitch but I kinda love it and couldn't help but start logging their chat
Sid#2121: how long have you been logging twitch chats that you now have 7.5GBs worth lol
Sid#2121: is it mostly a few channels or lots of different ones?
Sid#2121: I wonder how small they would become if you deduped them since it would get rid of all the pogs
arfa#0882: Only ~40 channels and only since December 2018
arfa#0882: I'm sure someone out there has done a much more thorough job of logging since Twitch chat is literally just IRC
Kazumi#1297: the only twitch stream I watch is this, where they have regular meetings about development of Wolfram Language and they actually take into consideration the chat log conversations https://www.twitch.tv/stephen_wolfram
Sid#2121: TIL stephen wolfram is a twitch streamer
arfa#0882: Not even a Twitch affiliate or whatever, smh
arfa#0882: They don't have any emotes I can spam |
Deleted User#0000: so how can i help
Anders#1378: Joined the server.
aquajet#7800: Hello @Anders welcome to the Twitch chat automation service! Check the channel description for some info on the project, and please ask if you have any questions.
aquajet#7800: So rn for the model the largest goal is to make the model more efficient. There are GPT-3 117M runs training rn, but the model is not as efficient as we want it to be (25% TPU utilization, it should be possible to get it up to 50%). There are a few ways to do this, maybe through changes to the attention or making the activations bf16 type (tbh idk what this exactly means so if someone could explain it that would be great!). These optimizations especially become important when we chonk the model up. There are also experiments we can do on architecture/parameter effects on the model at different scales, although this is a more secondary goal.
aquajet#7800: The other main project is The Pile, a humongous dataset that we will need to assemble if we want to reach the goal of a 1T LM. Check out #the-pile for more info but we could use a lot of help in cleaning up the pipeline, especially in better html->text conversion and pdf->text conversion .
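(On the bf16 point above: bfloat16 is a 16-bit float format that TPUs support natively; it keeps fp32's exponent range but truncates the mantissa, so storing activations in it halves their memory and bandwidth cost at some precision loss. A minimal TensorFlow sketch of the cast, just to illustrate:)

```python
import tensorflow as tf

x = tf.random.normal([8, 1024])              # fp32 activations
x_bf16 = tf.cast(x, tf.bfloat16)             # half the bytes, same exponent range as fp32
y = tf.cast(tf.nn.relu(x_bf16), tf.float32)  # compute in bf16, cast back where precision matters
```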
bmk#1476: A point of clarification: HUMONGOUS is already the name of one of our datasets, which will be used to create part of The Pile™
shawwn#3694: good morning
Sid#2121: o/
coozamano#5333: Joined the server.
Sid#2121: Hey @coozamano ! Welcome to the Tensorflow therapists' office! Do you have any Tensorflow grievances to get off your chest? (jokes aside we are actually trying to build big Language Models - check the channel description for more info)
adamb#9760: Joined the server.
gwillen#3291: Joined the server.
gwillen#3291: I'm curious to hear your thoughts on the cost of training the proposed model -- I've seen projections that it cost several million dollars in compute to train GPT-3, not counting engineering time.
bmk#1476: i think we came to an estimate of a few thousand as an upper bound for costs
adamb#9760: hi everyone, new here
bmk#1476: welcome!
adamb#9760: i know it sounds silly, but has anyone thought about the possibility of crowdfunding?
bmk#1476: we have all the funding we need for now, unless we plan on hiring people to write code for us or something lol
arfa#0882: How'd you get funding?
aquajet#7800: TFRC |
aquajet#7800: It's not explicit funding afaik but the tpus are being used through their program
arfa#0882: But aside from that
arfa#0882: GCP bucket costs, VM costs, Hetzner costs, etc.
aquajet#7800: donations i think
Sid#2121: couple private donations. we plan on setting up a patreon at some point too
arfa#0882: So basically, crowdfunding :KEK:
gwillen#3291: @bmk huh I'm confused, I feel like I'm missing something -- that's a few thousand dollars? To train a model bigger than GPT-3?
bmk#1476: yeah
arfa#0882: The TFRC compute costs would be in the millions if it wasn't free
arfa#0882: A v3-32 is like, $15k/month IIRC
bmk#1476: Overlord Google has taken pity on us
gwillen#3291: ... huh, wow. I did not know about TFRC. That's cool.
arfa#0882: Heh, you're lucky our TPU quota is wonky or we'd be competing with you guys for TPU resources right now
bmk#1476: haha
aquajet#7800: so how would inference work?
arfa#0882: ?
bmk#1476: We haven't figured it out yet
aquajet#7800: after it's trained
aquajet#7800: thats fair
aquajet#7800: cause its bikeshedding atm |
Isaac McHorse#2007: WHY ARE YOU BEING DISTRACTED? YOU CAN'T GET ANYTHING DONE LIKE THAT.
arfa#0882: You guys end up with a model that you can't run inference on without free TPUs which will be in high demand after you get media attention and everyone learns about TFRC and signs up :KEKW:
bmk#1476: I mean dajs original GPT2 thing got a lot of attention
bmk#1476: And TFRC still exists
bmk#1476: Also I still kinda want to work out something that lets us do inference with a small number of GPUs
aquajet#7800: yeah that'll be an interesting problem to solve
bmk#1476: Imagine a server with 1TB ram and 2x 1080ti or something
arfa#0882: Well, TFRC availability isn't something you can really count on long-term. If you have an API or something that is backed by TFRC TPUs, expect it to be down a lot of the time
aquajet#7800: internet 3.0: a distributed public language model
bmk#1476: load layer 1 on gpu 1, layer 2 on gpu 2
bmk#1476: forward on 1, send to 2, forward on 2
bmk#1476: load 3 to gpu 1 overwriting layer 1
bmk#1476: send to 1
aquajet#7800: that could work
bmk#1476: repeat until all layers are done
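A rough sketch of that layer-streaming scheme — everything here is hypothetical (`make_layer` is an assumed helper, and real code would overlap the weight transfers with compute):

```python
import torch

def stream_forward(x, layer_states, devices=("cuda:0", "cuda:1")):
    # All weights live in host RAM; only one layer occupies each GPU at a time.
    for i, state in enumerate(layer_states):
        dev = devices[i % len(devices)]  # layer 1 on gpu 1, layer 2 on gpu 2, 3 on gpu 1, ...
        layer = make_layer().to(dev)     # hypothetical helper that builds one block
        layer.load_state_dict(state)     # overwrite the weights loaded previously
        x = layer(x.to(dev))             # forward here, result moves to the next GPU
    return x
```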
aquajet#7800: would need to figure out the timing
bmk#1476: it would be slightly slow but not the worst thing
aquajet#7800: also redundancies in case a server goes down
bmk#1476: yeah timing will be a challenge
arfa#0882: That sounds super inefficient :thonk:
bmk#1476: meh we dont need 999999 uptime
bmk#1476: i mean care to suggest a better solution that doesnt cost 100x more?
arfa#0882: Well, if you end up in the situation Tensorfork is in, you might have zero uptime for weeks on end
bmk#1476: ?
bmk#1476: what situation is that
arfa#0882: We haven't been able to create any TPUs for 2 weeks
bmk#1476: i meant
bmk#1476: gpu
bmk#1476: not tpu
bmk#1476: tpu availability wont affect uptime
arfa#0882: Ah
aquajet#7800: we invent a low cost way to convert old computers to tpus
arfa#0882: :KEK:
bmk#1476: also hol up has Overlord Google said anything about the tpu glut?
arfa#0882: Dunno, I'm not in the loop
aquajet#7800: uh ask @Daj
bmk#1476: wait i think i remember there was a non-answer email
bmk#1476: i vaguely remember hearing that they just dodged the question
aquajet#7800: that was about the outage?
bmk#1476: then again i have the long term memory of a goldfish |
aquajet#7800: my context window is 2
arfa#0882: Basically they said "You guys still have quota, but we're near 100% capacity in eu-west-4" which doesn't answer why you guys can create TPU pods and we can't
bmk#1476: o.O
arfa#0882: And I think shawwn is afraid to push the issue because he thinks we might've been manually deprioritized or you guys are getting special treatment or something
bmk#1476: well thats all very odd behavior from Overlord Google
arfa#0882: I'm like, 99% sure it's just a bug or some oversight or misunderstanding, and they'd be happy to fix it if we pressed them, but :idk:
CalvinCoolade#3839: Joined the server.
jordanf#2297: Joined the server.
goolulusaurs#1571: > Also I still kinda want to work out something that lets us do inference with a small number of GPUs
@bmk Probably something like https://arxiv.org/abs/2002.05645, which I know we discussed a while ago.
adamb#9760: I saw some notes in the document looking for ppl to donate CPU time for dataset cleanup
adamb#9760: Is that still a need?
bmk#1476: yeah that's what i was thinking
bmk#1476: and @adamb yes that is necessary, how many cores would you be able to contribute?
adamb#9760: I have a rig with 20 Xeon cores that's idle
adamb#9760: Perhaps that's a pittance compared to what's needed
adamb#9760: I suppose it's 2x that if HT is enabled?
bmk#1476: hmm that's not bad
bmk#1476: we have 2x(16core or 32thread) right now
bmk#1476: so that would be a lovely addition |
bmk#1476: also that reminds me, @aquajet we probably want the webserver set up such that we can seamlessly pivot to a new dataset (i.e. after we get the whole multilingual pipeline worked out) while requiring as little action as possible on the part of server owners (without doing arb code exec ofc)
aquajet#7800: Yep
aquajet#7800: Chunk number is stored as a string so we can just put a URL in there
bmk#1476: i mean like upgrading the pipeline
bmk#1476: because right now we're just doing justext but we *really* want to pivot away because like english-only and bad at forums, etc
bmk#1476: so we want clients to be able to identify their version probably
aquajet#7800: version?
bmk#1476: like, if the client cant do multilingual pipeline for example
bmk#1476: we dont want to just send them a chunk number and hope they dont go ahead and download english only text
bmk#1476: have like a `version: 1` field that we can increment whenever there's breaking changes to the pipeline
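e.g. a work unit could look something like this (all field names made up, just to show the shape of the handshake):

```python
# Hypothetical response from the coordination webserver; a client reports the
# pipeline version it can run, and the server only hands out compatible chunks.
work_unit = {
    "version": 1,                                # bumped on breaking pipeline changes
    "chunk": "commoncrawl/2020-29/chunk-00042",  # made-up chunk id
    "pipeline": "justext-en",                    # so old clients can't grab multilingual work
}
```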
aquajet#7800: Oh ok, I see
bmk#1476: also i was just reading the trafilatura benchmark code
bmk#1476: and *man*
bmk#1476: https://github.com/adbar/trafilatura/blob/master/tests/comparison.py
bmk#1476: this is such a blatant violation of DRY
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/738644276203880488/unknown.png
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/738644307715555348/unknown.png
bmk#1476: just looking at this hurts
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/738644386925117440/unknown.png
bmk#1476: also it looks like they define the ground truth simply by having a few phrases that should/shouldnt be included o.O https://cdn.discordapp.com/attachments/729741769738158194/738644574859296798/unknown.png |
aquajet#7800: Yikes
bmk#1476: looks like we need to start from scratch
bmk#1476: https://docs.google.com/spreadsheets/d/1k8G2M5RxEwMrQuOwj-Wx-XxFBdZqyGKFGM6nPDyr-MQ/edit?usp=sharing
bmk#1476: heres a spreadsheet
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/738646020077912094/unknown.png
bmk#1476: here's an empty repo https://github.com/leogao2/htmltotext-benchmark
bmk#1476: so the idea is
bmk#1476: 1. find pages
bmk#1476: 2. stick in sheet
bmk#1476: 3. manually clean
bmk#1476: 4. save both the html-at-clean-time and text in the repo (both because the html might change over time)
aquajet#7800: The t5 idea is interesting, or something like it
aquajet#7800: Rather than a purely rule based approach
aquajet#7800: For something the size of t5 we'd need a lot of labels though
aquajet#7800: Actually we'll see how the current pipeline does and go from there
bmk#1476: we still want data anyways
bmk#1476: i'll set up scaffolding for the repo tmrw
bmk#1476: but for now just scour the interwebs for good representative links and dump it into the sheet
bmk#1476: I'll try to avoid english and focus on zh, ja, de, and maybe fr
aquajet#7800: I gotta sleep but I'll get on it tomorrow |
bmk#1476: awesome
bmk#1476: i also have to sleep soon
adamb#9760: well like i said, it's just sitting idle atm
adamb#9760: so who do i talk to about putting it to work?
Daj#7482: I think @bmk is still our data tsar atm
bmk#1476: ok so
bmk#1476: here's the thing
bmk#1476: we have a current data pipeline
bmk#1476: we're working on developing a new one over the next bit
bmk#1476: so if you want to start *right now* you can download using the old pipeline
bmk#1476: and we'll try to figure out something to do with the old pipeline data
bmk#1476: otherwise, soon™ we will have a new pipeline
adamb#9760: what are the major differences between the old pipeline and the new one?
aquajet#7800: Cleaner parsing and multilingual support
aquajet#7800: Parsing as in HTML2text and pdf2text
adamb#9760: but just as easy to run?
adamb#9760: and the outputs of the current pipeline are still valuable?
bmk#1476: yeah
bmk#1476: the current pipeline is english-only and skewed towards news sites
bmk#1476: but it's still useful |
adamb#9760: ok
adamb#9760: so what's the workflow
bmk#1476: you clone a repo and run a docker command
bmk#1476: we're currently working on a webserver to coordinate work so maybe we want to wait for that first
adamb#9760: where do the results get sent?
adamb#9760: are there any creds i also need?
bmk#1476: so we dont have that part implemented yet
bmk#1476: it would probably be easiest for us to get our new pipeline and the coordination webserver working first
adamb#9760: ok
adamb#9760: makes sense
bmk#1476: yeah we're really grassroots so progress happens when people have free time haha
adamb#9760: something i worked on a few years ago: https://github.com/tesserai/iptf
adamb#9760: integrating ipfs into the tensorflow filesystem api
adamb#9760: the idea was to enable training models that required datasets larger than could fit on any individual machine
adamb#9760: so it "lazily" sources the blocks needed for whatever's being requested from ipfs peers
adamb#9760: at the time ipfs turned out to be too slow to keep GPUs busy, but they've put a lot of work into it since then
adamb#9760: (they might be a source of sponsorship if we wanted to pursue that)
adamb#9760: (i think it's possible to rebuild something like this atop bittorrent, if needed)
Daj#7482: That's cyberpunk as hell and I love it
Daj#7482: No idea on feasibility ofc |
adamb#9760: when i was working on this i was extremely interested in enabling workflows for huge models trained on huge data
adamb#9760: but no one was working on big enough models outside of google yet, lol
Daj#7482: Well, if it works, this would be a great usecase
adamb#9760: so i put it down once it was clear that the bottleneck was with ipfs
adamb#9760: but the interface inside of tensorflow was pretty tiny
adamb#9760: and everything worked
adamb#9760: it just would choke on metadata traffic for huge directories
adamb#9760: (like the imagenet validation set)
adamb#9760: where/when are the meetings or discussions about the data architecture?
Daj#7482: #the-pile mostly. @bmk is the one to talk to
Commutative Conjecture#6969: hi
Commutative Conjecture#6969: http://dugas.ch/artificial_curiosity/GPT_architecture.html
this says that there is a first layer for the tokenizer (bytes -> bpe tokens), and then a layer for the embedder (bpe tokens -> token vectors)
Commutative Conjecture#6969: where does this embedder come from? is it trained with the rest of the network, or already given at the start?
Deleted User#0000: @Commutative Conjecture yea, this used to confuse me as well. the embedding is nothing more than a weight matrix, whose rows are mapped to the individual tokens. say you have gone over your data, and it has 20000 unique tokens, then you would have a (20000 x dimension) embedding matrix
Deleted User#0000: the dimension is how many numbers you want to assign to each token
Commutative Conjecture#6969: @Deleted User
Where does this embedding matrix come from? Is it learnt with the network or separately?
Deleted User#0000: learned by the network
Deleted User#0000: it is learnt, just like any other parameter in the system |
Deleted User#0000: say the word 'cat' is given to the transformer
Deleted User#0000: first you break it into 'c', 'a', 't'
Deleted User#0000: each of those tokens have a unique id, say 3, 0, 21
Deleted User#0000: then you fetch rows 3, 0, 21 from the embedding matrix above
Deleted User#0000: that's it..
Deleted User#0000: the gradient is done on the embedding matrix and you learn it just like any other parameter
Deleted User#0000: (BPE may do something like 'ca', 't')
Deleted User#0000: it's learnt with the network
Deleted User#0000: a big lookup table
Deleted User#0000: the idea behind is, each token can be represented with a bag of numbers (a vector), let's let the network decide what those numbers should be
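For illustration, a minimal numpy sketch of the lookup described above (the vocab size and dimension are arbitrary):

```python
import numpy as np

vocab_size, dim = 20000, 512
rng = np.random.default_rng(0)
# Initialized randomly; learned like any other weight during training
embedding = rng.normal(size=(vocab_size, dim)).astype(np.float32)

token_ids = [3, 0, 21]          # e.g. 'c', 'a', 't' after tokenization
vectors = embedding[token_ids]  # plain row lookup -> shape (3, 512)
print(vectors.shape)
# During backprop, only the gradient w.r.t. these three rows is nonzero,
# so the table is updated exactly like any other parameter.
```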
Commutative Conjecture#6969: > it is learnt, just like any other parameter in the system
ok, just wanted to know
Commutative Conjecture#6969: afaik, the bpe is not learnt with the rest of the system
Deleted User#0000: no, that part isn't
Commutative Conjecture#6969: and other language modellers use pre-learnt embeddings
Deleted User#0000: there's many ways to determine how to chop up your sequence
Deleted User#0000: yup, but those word2vec days are gone
Commutative Conjecture#6969: 👌
Deleted User#0000: the transformer is the new way to embed, since it is contextualized
Deleted User#0000: the outputs near the final layers account for the context of each token with the others
Louis#0144: Well bilstm is contextualized too
Louis#0144: That’s not a fair comparison
Deleted User#0000: ah yea, ELMO started it all i think
Louis#0144: ELMO has good contextualization
Deleted User#0000: or ULMfit, i forget which
Deleted User#0000: man, bilstm's, those were the days..
Louis#0144: I miss them
Louis#0144: I beat RoBERTa with a bilstm!
Louis#0144: Lmao
Deleted User#0000: and all the vanishing exploding gradient problems?
Deleted User#0000: lol
Louis#0144: You can avoid that by using residual networks
Louis#0144: And highway networks
Deleted User#0000: ohh man, yea, i remember those
Louis#0144: Highway networks are still popular with GCNs
Louis#0144: I still use them sometimes
Deleted User#0000: what is sota on GCNs? is there a specific type of architecture people use these days?
zphang#7252: I forget, what's the difference between highway/residual. Is it a gate?
Louis#0144: GCNs have sota on Hotpotqa
Louis#0144: And yes |
Deleted User#0000: good to know!
Louis#0144: Usually people use residual gated graph neural networks
Louis#0144: GCNs are good when you need to represent lots of sparse knowledge
Louis#0144: Like multi doc QA
Louis#0144: A transformer is just a dense GCN
zphang#7252: is that on the hotpotqa leaderboard? so many anonymous submissions...
Louis#0144: Lol
Louis#0144: Yeah but a lot of them are GCN based
Louis#0144: Almost all the open domain ones use a GCN
Louis#0144: and like half the closed domain ones do too
Deleted User#0000: yea, i've read https://towardsdatascience.com/transformers-are-graph-neural-networks-bca9f75412aa but i'm not familiar with gcn's
zphang#7252: `Is Graph Structure Necessary for Multi-hop Reasoning?` PUT THE ANSWER IN YOUR PAPER TITLE
Deleted User#0000: cool
Louis#0144: But yeah I got sota on hotpotqa with a bilstm and a gcn
Louis#0144: Literally took the baseline and added a gcn
Louis#0144: That’s all I did
Deleted User#0000: google ads serves me a bunch of transformers (the cartoon) ads
Louis#0144: My thing was my regularizer
Deleted User#0000: it's kind of funny
Louis#0144: Acceptances come out tmrw |
Louis#0144: Can I show my paper if I get accepted
zphang#7252: which conf?
Louis#0144: EMNLP
zphang#7252: aha
star#5322: Does someone have a two sentence of what a GCN is?
zphang#7252: I'd be interested to read your stuff
Louis#0144: CNNs... but for graphs
Louis#0144: Literally tho
Louis#0144: Laplacian smoothing is convolution on a 0 simplex
Louis#0144: That’s all that GCNs do
Louis#0144: It’s literally a CNN
Louis#0144: but where the image is a graph
Louis#0144: And not a real image
star#5322: Laplacian smoothing doesn't mean anything to me
star#5322: (sorry)
Louis#0144: https://liqimai.github.io/blog/AAAI-18/
star#5322: ty
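To make the "CNN, but the image is a graph" intuition concrete — a minimal sketch of one standard Kipf & Welling-style GCN layer in numpy (this is the textbook formulation, not anything specific to the systems discussed above):

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph convolution: H' = relu(D^-1/2 (A+I) D^-1/2 H W).

    A: (n, n) adjacency matrix, H: (n, d_in) node features, W: (d_in, d_out) weights.
    Each node's new feature is a degree-normalized (Laplacian-smoothed) average of
    its neighbours, pushed through a shared linear map -- the graph analogue of a
    conv filter sliding over pixels.
    """
    A_hat = A + np.eye(A.shape[0])  # add self-loops so a node keeps its own signal
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(d ** -0.5)
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=np.float32)  # 3-node path graph
H = np.random.randn(3, 4).astype(np.float32)
W = np.random.randn(4, 2).astype(np.float32)
print(gcn_layer(A, H, W).shape)  # (3, 2)
```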
technologiclee#0144: Joined the server.
arvindr9#4837: Joined the server.
Louis#0144: LOL |
Louis#0144: @arvindr9
Louis#0144: hi
arvindr9#4837: Lol hi
E McNeill#1259: Joined the server.
bhauth#7283: Joined the server.
Daj#7482: Hey @technologiclee @arvindr9 @E McNeill @bhauth ! Welcome to the Mesa Optimizer Removal Facility! Check the channel topic for info and don't hesitate to ask questions!
Commutative Conjecture#6969: Does anyone understand or have some ad hoc explanation for how GPT3 can be so good at grammar?
kindiana#1016: grammar seems like something that would be pretty easy for big LMs (only local context required and roughly consistent in all training samples)
Commutative Conjecture#6969: @kindiana
Not sure what you mean by consistent
There are many grammar rules in written English, and even more possible grammar rules in general
kindiana#1016: as in, the rules/correlations required to figure out grammar are present in a lot of training samples, not like facts or anything which might only occur a few times in all the data
kindiana#1016: even markov chains could learn grammar lol
Commutative Conjecture#6969: @kindiana
Even if the data permits it, I'm not sure how the architecture does
Commutative Conjecture#6969: > even markov chains could learn grammar lol
@kindiana
Not really. It would learn a lot of special cases and wouldn't extrapolate. I.e., it wouldn't learn grammar.
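For concreteness, this is all a Markov chain "learning grammar" amounts to — a bigram table over a toy corpus (made up here) that memorizes observed continuations but has no rule to extrapolate to unseen contexts:

```python
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()
bigrams = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    bigrams[a][b] += 1  # pure memorization of observed word pairs

def sample(start="the", n=6, seed=0):
    random.seed(seed)
    out = [start]
    for _ in range(n):
        nxt = bigrams[out[-1]]
        if not nxt:  # unseen context: the "model" has nothing to say
            break
        out.append(random.choices(list(nxt), weights=list(nxt.values()))[0])
    return " ".join(out)

print(sample())  # locally plausible, but it can never generalize past its table
```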
kindiana#1016: why can't transformers learn grammar? it seems like a pretty ideal case for multi headed attention (i.e. one head attends to the tense of the sentence, one head attends to the subject etc), and the output layers can take information from that to synthesize the next most likely word(piece) |
Commutative Conjecture#6969: > why can't transformers learn grammar? it seems like a pretty ideal case for multi headed attention (i.e. one head attends to the tense of the sentence, one head attends to the subject etc), and the output layers can take information from that to synthesize the next most likely word(piece)
@kindiana
I believe transformers can, I'm mostly interested in the how
Commutative Conjecture#6969: But I believe it has a much stronger grasp of grammar than just aggregating information about the sentence
Commutative Conjecture#6969: (Because of symbol manipulation tasks I gave it, and because it is more often locally confused semantically than grammatically)
Commutative Conjecture#6969: But I don't understand how it is that reliable
kindiana#1016: what do you mean by a stronger grasp of grammar? I feel like grammar can be basically "solved" with just statistical correlation over words even, but I'm no language expert lol
kindiana#1016: I think grammar should be by far easier than semantics, because semantics requires wider context and uses correlations + tokens which are much rarer in the training data
Commutative Conjecture#6969: Both are difficult, but in different directions
Commutative Conjecture#6969: > what do you mean by a stronger grasp of grammar? I feel like grammar can be basically "solved" with just statistical correlation over words even, but I'm no language expert lol
@kindiana
Commutative Conjecture#6969: That's a bad intuition I believe
Commutative Conjecture#6969: Learning grammars, in an unsupervised manner, is very difficult
Louis#0144: I would wager that grammar is mostly a solved problem
Louis#0144: Compared to other things people in NLP are trying to tackle now
Commutative Conjecture#6969: > I would wager that grammar is mostly a solved problem
@Louis
No idea what you mean
Commutative Conjecture#6969: What would you bet on exactly?
Louis#0144: Modern LMs struggle very very little with grammar |
Louis#0144: For the reasons mentioned above
Louis#0144: You only really need local attention
Louis#0144: And you only really need a few general rules + some random memorized exceptions
Commutative Conjecture#6969: > Modern LMs struggle very very little with grammar
@Louis
By modern, do you mean, NNs based?
Commutative Conjecture#6969: > You only really need local attention
@Louis
For formal grammars, that seems def wrong
Louis#0144: I mean even pre NN didn’t really have issues
Louis#0144: Rule based models worked really well
Louis#0144: Local attention is pretty wide
Louis#0144: I’ve seen intrasentence local attention
Louis#0144: And intersentence local attention
Louis#0144: I’m sure that’s enough
Daj#7482: fwiw I think modelling natural language with formal language has a terrible trackrecord
Daj#7482: and is not the natural abstraction to use
Commutative Conjecture#6969: > Rule based models worked really well
@Louis
To infer grammars? |
Louis#0144: Yeah
Louis#0144: I’m not convinced grammar is really an issue
Louis#0144: The main issue that faces LMs right now is causality imo
Louis#0144: And coherency
Louis#0144: Also bigger context windows would be nice
Louis#0144: Lmao
Commutative Conjecture#6969: > Yeah
@Louis
Link?
Louis#0144: https://www.researchgate.net/publication/239556866_A_Rule-Based_Style_and_Grammar_Checker
Commutative Conjecture#6969: > I’m not convinced grammar is really an issue
@Louis @Daj
That's what I asked about though
Reason is that learning formal grammars purely from examples was hard.
I believe GPT-3 still doesn't learn a formal grammar.
GPT-3 learns a natural grammar, and very well / reliably so. So much so that I am wondering which abstractions it uses that works so well
Daj#7482: My answer: We don't know
Daj#7482: We're in the alchemy phase of understanding learned optimizers |
Louis#0144: That’s an issue with coherency imo
Louis#0144: But yeah
Louis#0144: No idea why
Commutative Conjecture#6969: > https://www.researchgate.net/publication/239556866_A_Rule-Based_Style_and_Grammar_Checker
@Louis
Can't see the abstract, but grammar checking has nothing to do with grammar learning
Daj#7482: I think we understand GPT3 as much as alchemists did quantum physics
Commutative Conjecture#6969: The former is far easier in every respect
Louis#0144: Grammar checkers can be used to construct effective grammar for LMs
Louis#0144: They’re very related
Commutative Conjecture#6969: > Grammar checkers can be used to construct effective grammar for LMs
@Louis
A grammar checker already knows the grammar. It is much easier to build a parser for a language than a program that learns the language from examples.
john doe#9394: Joined the server.
Daj#7482: Hey @john doe ! Welcome to the AGI Legal Defense Fund! Check the channel topic for info and don't hesitate to ask questions!
Louis#0144: Lmao
Louis#0144: Tbh
Louis#0144: I don’t think you’re kidding about that
Louis#0144: @Daj
Louis#0144: :p |
Daj#7482: When have my custom introductions ever been inaccurate? hahah
Deleted User#0000: @Commutative Conjecture https://arxiv.org/abs/1906.04341 there's a lot of papers along this theme
Deleted User#0000: https://www.pnas.org/content/early/2020/06/02/1907367117
Deleted User#0000: Chris Manning himself
Commutative Conjecture#6969: @Deleted User thx
Commutative Conjecture#6969: ~~(YOU'VE GOT TO STOP 2 COLUMNS PAPERS)~~
Deleted User#0000: @Commutative Conjecture lol
arfa#0882: Dumb idea: feed the model a bunch of auto-generated captions from Twitch stream audio. There are thousands of streamers who stream several hours a day, so tons of content, and it's basically unfiltered stream-of-consciousness thought. :4Head: https://cdn.discordapp.com/attachments/729741769738158194/739198944319504504/twitch_subtitles.mp4
Daj#7482: We can teach the model how to attract simps for funding :bigbrain:
Daj#7482: I'm not sure what effect data like that would have but I really like the idea. Especially for later GPTx that can condition on video
arfa#0882: I'm really impressed by YouTube's auto-caption stuff, though. Being able to search 3 hour podcasts by text and find the point in the video where they're talking about a certain thing is great
sunny#5382: Joined the server.
Louis#0144: https://twitter.com/garymarcus/status/1289967318433374208?s=21
bmk#1476: https://twitter.com/nabla_theta/status/1289950778849759232
Louis#0144: So GPT4
Louis#0144: Lmao
bmk#1476: yep
Daj#7482: That's a great take bmk lol
bmk#1476: thanks haha
bmk#1476: https://twitter.com/GaryMarcus/status/1289976322857426944 |
bmk#1476: https://twitter.com/nabla_theta/status/1289977562882424838
star#5322: yeah the "whether it understands" point seems, completely an airball to me
bmk#1476: i know gary marcus of all people probably isnt going to change his mind but hey
star#5322: I liked gwern's point about like, sifting, idr which post it was in
star#5322: but it was like, suppose you want a plausible page of shakespeare - the library of babel contains many of these, but you'll have to sort a huge exponential number of them to find one, while with GPT-2 you maybe only had to read 300 pages or something, and then with GPT-3, you maybe only have to pick 5 pages or something
bmk#1476: that's actually a really good way to put it
Daj#7482: Agreed that's a great framing
star#5322: there's still a human in the loop, but you can evaluate the model based on how hard it is to find things with it
star#5322: yeah I mean this is all Gwern, I'm just the messenger
star#5322: but I thought it was an excellent point yes
bmk#1476: Could you find the post? I'm writing a post rn and probably want to link to that
bmk#1476: I understand this idea intuitively but when asked to put it into words I end up with an incoherent jumble about entropy and stuff
star#5322: oh uhhhh, I could try to, it's probably in one of the gpt-3 posts
Daj#7482: I also like the framing that intelligence is a measure of how small of a target you can hit in concept space
Daj#7482: might be too abstract for a public facing blog post
bmk#1476: nvm found it
bmk#1476: https://www.gwern.net/GPT-3#quality
bmk#1476: my post is probably not gonna be public facing lol
star#5322: yeah good job finding it
star#5322: I want to make some silly chide about does anyone actually read gwern :p |
bmk#1476: absolutely yes
bmk#1476: i cite gwern everywhere lol
Daj#7482: It takes me a while, but yes haha
bmk#1476: gwern put this so much more elegantly than me lol
bmk#1476: for reference, here's a snippet from my current draft:
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/739539314454560909/unknown.png
archivus#7382: Can someone train GPT-3 to write in the style of @gwern
shawwn#3694: actually, that'd be an interesting fine-tuning idea for GPT-2
bmk#1476: Gwern has written enough to train gpt3 from scratch
star#5322: lol
miguelos#7956: Joined the server.
Evan Y#0110: Joined the server.
thenightocean#6100: Joined the server.
ishan#3679: Joined the server.
shawwn#3694: sudden influx of people. Hi @miguelos , @Evan Y, @thenightocean , @ishan. Project roadmap is in the channel description
bmk#1476: Where did y'all come from?
bmk#1476: We need a new-member-survey to figure out where people are coming from lol
Daj#7482: SSC Meetup
bmk#1476: Ah
thenightocean#6100: We were all mind-blown by Daj's talk at the SSC meetup. He made more sense on GPT and AGI-related stuff than almost anyone else I've heard recently.
thenightocean#6100: anyway, hello everyone!
Commutative Conjecture#6969: Hi
Commutative Conjecture#6969: @miguelos
Hi, were you on the LW Discord server at some point?
miguelos#7956: Yes
Commutative Conjecture#6969: ok
Commutative Conjecture#6969: I was wondering whether that was you when you rooted for the universal knowledge graph
miguelos#7956: That would be me
shawwn#3694: Lesswrong has a discord server? Anyone got an invite?
Noa Nabeshima#0290: https://discord.gg/p73nrR
Noa Nabeshima#0290: I think a variation of this could fix rhyming and pun problems without significantly decreasing the text compression/capacity extension of BPE
https://arxiv.org/pdf/1804.10959.pdf
Noa Nabeshima#0290: The idea is to periodically break up tokens into smaller subtokens so that the model learns the relationship between the smaller tokens and the larger tokens
Noa Nabeshima#0290: It also could be used to enhance our dataset but that seems potentially sketchy
Deleted User#0000: @Noa Nabeshima there's also https://arxiv.org/abs/1910.13267
Deleted User#0000: https://github.com/VKCOM/YouTokenToMe
Deleted User#0000: this repo has BPE-dropout
Noa Nabeshima#0290: Ahhh I think that was the paper I was thinking of when I was googling
Noa Nabeshima#0290: Very nice
star#5322: you token to me is very good |
Noa Nabeshima#0290: Actually it's unclear how the model would learn
Noa Nabeshima#0290: er, what it would learn from the BPE-dropout
Noa Nabeshima#0290: to me, I haven't read the papers
Deleted User#0000: Joined the server.
Noa Nabeshima#0290: By restricting a model with BPE-dropout to small length tokens for generation you might improve poem quality
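A rough sketch of BPE-dropout with YouTokenToMe (check the repo for the exact API; `dropout_prob` is the knob that randomly skips merges, so the same word gets segmented differently across passes — the toy corpus and vocab size here are made up):

```python
import youtokentome as yttm

with open("corpus.txt", "w") as f:
    f.write("rhyming timing priming chiming\n" * 100)

yttm.BPE.train(data="corpus.txt", vocab_size=50, model="bpe.model")
bpe = yttm.BPE(model="bpe.model")

for _ in range(3):
    # Stochastic segmentations of the same word, e.g. ['▁rhy', 'ming'] vs ['▁r', 'hy', 'ming'],
    # exposing the model to the sub-token structure that plain BPE hides.
    print(bpe.encode(["rhyming"], output_type=yttm.OutputType.SUBWORD, dropout_prob=0.1))
```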
Commutative Conjecture#6969: @shawwn it's ded though
shawwn#3694: ah
Commutative Conjecture#6969: sry
Commutative Conjecture#6969: there is an ssc discord
Commutative Conjecture#6969: quite active iirc
gwern#1782: I always heard bad things about the rat discords
shawwn#3694: what's the ssc discord?
Commutative Conjecture#6969: @gwern
same. I also always hear bad things about rats. people might just be contrarians around here
Commutative Conjecture#6969: @shawwn
I can't invite you directly, I can ask for an invite if you want
shawwn#3694: Oh, sure! Thank you!
Commutative Conjecture#6969: sent in private
Louis#0144: Wtf is a rat discord
Commutative Conjecture#6969: discord with rationalists |
Louis#0144: Oh
Commutative Conjecture#6969: rats
miguelos#7956: GPT-3 is so dumb that it doesn't know what GPT-3 is.
mshang#7454: Joined the server.
Kazumi#1297: I bet GPT-3 wasn't in the dataset that GPT-3 was trained on
thenightocean#6100: So if I understand correctly, you guys think gpt-neo might advance faster and cheaper than whats described in this video? https://www.youtube.com/watch?v=kpiY_LemaTc
thenightocean#6100: https://cdn.discordapp.com/attachments/729741769738158194/739771893313306694/unknown.png
Daj#7482: I don't think we're necessarily going to be faster, and us being cheaper is mostly down to TFRC giving us free compute
thenightocean#6100: What is TFRC?
Daj#7482: Tensorflow Research Cloud. You can apply and they give you free TPUs to use for research
Daj#7482: I'm one of the oldest members and they give me access to a _lot_ of TPUs that we use
thenightocean#6100: oh thats cool
Daj#7482: They're really cool and very generous with the TPUs they give out
Ravna#1831: 4 years = ~8x reduction in cost?
Ravna#1831: That's quite over-optimistic, especially if they extend their prediction for more than a decade.
Daj#7482: I think it's based on https://openai.com/blog/ai-and-efficiency/
thenightocean#6100: yup
Daj#7482: Which might be optimistic or not who knows
thenightocean#6100: Outside view from last couple of years would suggest that things will improve faster than anyone expects
Kacper Wikieł#2358: Can GPT-3 detect that text is an output of GPT-3? I wonder how it would play out if people would scrape data from the internet where content marketers would be using GPT-3 to create articles. Feedback loop? |
Daj#7482: > Can GPT-3 detect that text is an output of GPT-3? I wonder how it would play out if people would scrape data from the internet where content marketers would be using GPT-3 to create articles. Feedback loop?
@Kacper Wikieł iirc OA did some research on this for GPT2, haven't really followed for GPT3 because I think content based filtering is a dead end
Ravna#1831: Anyway the biggest force behind the exponential growth so far is still the budget size, which is the easiest to scale. So we should expect that most breakthroughs would be made in the next few years, before the budget hits multi-billion/trillion and we have to wait for the hardware and algorithm evolution which is much slower compared to just scaling your gpu count.
Daj#7482: Probably correct, but there's a few more orders of magnitude in there
Daj#7482: And we don't know how strong an AI needs to become to become a real threat
Entrepreneur#9475: Joined the server.
bmk#1476: Inside view on scaling is that 10T is at the limit of what's possible today, 100T probably not for another few years
Daj#7482: Hey @Entrepreneur ! Welcome to the Compute Scaling Farms! Check the channel description for info and don't hesitate to ask questions!
Daj#7482: btw @bmk does this consider supercomputers e.g. summit or the upcoming exascale machines?
Entrepreneur#9475: @Daj happy to be here will try to contribute as much as possible
Daj#7482: Lovely! We're currently in the process of optimizing our model code and getting it tuned to OA performance. Probably what we need most is people willing to learn Mesh Tensorflow and TPU to optimize our code. Other than that, work on HTML -> Text and PDF -> Text is needed for the dataset, as is CPU core donations
Entrepreneur#9475: So what sort of data should I gather which will be helpful for you
Daj#7482: Gathering is not the problem, cleaning it is. Common Crawl has plenty of data but 99% of it is terrible
Daj#7482: So we've been working on refining pipelines to get the good data out. You should talk to @bmk
Entrepreneur#9475: Ok
bmk#1476: my estimate is based around a TPU 2048 pod
nicholas#1287: Joined the server.
Daj#7482: Hey @nicholas ! Welcome to the Training Set Mines! Check the channel topic for info and don't hesitate to ask questions!
Aran Komatsuzaki#1970: Joined the server.
shawwn#3694: Hi @Aran Komatsuzaki, welcome to the server. Check the channel description for the project roadmap! |
Daj#7482: Hey @Aran Komatsuzaki ! Welcome to the FOOMigation Tent! (I'm sorry) Check the description for info and don't hesitate to ask questions!
Aran Komatsuzaki#5714: Joined the server.
AI_WAIFU#2844: I think we can scale far higher than 10T with appropriate architectural improvements. Gshard is a 0.6T model and it only took 4 days to train on a TPU 2048-v3. If someone can figure out an architecture that scales efficiently across clusters, the sky's the limit.
AI_WAIFU#2844: Also @Daj, if I want to contribute to this project, what resources do I need and what steps do I need to follow?
Noa Nabeshima#0290: Is MoE actually what we want?
Daj#7482: Depends on what kind of skills you have/tasks you want to do. The two main things that need doing is delving the lovecraftian depths of Mesh Tensorflow to understand the nuances and optimize our model code, and helping with data cleaning (HTML -> Text and PDF -> Text stuff)
Daj#7482: also if you have like 1000 CPU cores laying around that'd be awesome
Noa Nabeshima#0290: Also can someone get me up to speed on PDF to text efforts/research?
Daj#7482: @bmk is our data guy
adamb#9760: @Daj i forgot to ask about AWS lambda
adamb#9760: has anyone tried to do the text task on AWS lambda?
adamb#9760: relatively easy to get 1000+ cores there
Daj#7482: I am unfamiliar with AWS lambda
adamb#9760: i guess i should ask @bmk
Daj#7482: But it sounds like something that costs money lol
Daj#7482: Yea he'd be the one to ask probably
AI_WAIFU#2844: I have 8 rn and I'll probably have 64 in the near future, so I'm a bit underpowered on the compute side. I'd be interested in the model optimization efforts.
bmk#1476: I don't know much about lambda but it sounds very expensive
Daj#7482: In that case, are you in the repo? You'll have to chew your way into Mesh Tensorflow, we're of course happy to help. @Sid is the one most familiar with that side of our code atm
adamb#9760: @bmk we can probably get credits |
bmk#1476: Does aws do that?
bmk#1476: Huh
adamb#9760: lambda is an extremely impressive resource for burst CPU
adamb#9760: https://github.com/StanfordSNR/gg
bmk#1476: Well
AI_WAIFU#2844: No, is it a private repo?
bmk#1476: It's not so much burst
Daj#7482: Yes repo is private until release
adamb#9760: get 1000s of cores for 100ms and then give them back
Daj#7482: We need 1000 cores for ~1 week
adamb#9760: are you sure?
Daj#7482: though that's for the full sized data set
Daj#7482: We need less in a pinch, bmk did the estimates
adamb#9760: why not 100k cores for an hour?
bmk#1476: We'd be pinning 6k cores for like a month
AI_WAIFU#2844: Who do I need to contact in order to get access to the repo?
Daj#7482: Me!
Daj#7482: haha
Daj#7482: Send me your github name
bmk#1476: Due to our plans for multilingual, the compute requirement might be slightly higher than previously thought |
AI_WAIFU#2844: https://github.com/AI-WAIFU
Daj#7482: invite sent
Daj#7482: Most Mesh tensorflow discussion happens in #gpt-neox-devs
bmk#1476: Also we need to ingress 3.5pb
bmk#1476: I don't think that's happening in an hour
AI_WAIFU#2844: Great! I'll start crawling around the code base. If I've got questions I'll bring them up in #gpt-neox-devs .
Daj#7482: We'll be happy to help you get up to speed 👌 Currently we have really low efficiency, the MTF team claims they can reach around ~50% TPU utilization and our max has been like 25% (and I think it's like 2% atm). @Sid is the guy to talk to, @Deleted User has also been digging into our code lately
AI_WAIFU#2844: Ok, so 25x speedup is the target. Got it.
Daj#7482: that would be the dream hah
Sid#2121: wahhh no 2% is only when everything goes wrong lol
Sid#2121: on smaller models we have ~40%
Sid#2121: larger models have tended to be ~25% but we've been having a couple of problems with information leakage
Sid#2121: efficiency varies wildly depending on mesh layout, it is the worst
Sid#2121: @AI_WAIFU if you want to get up to speed with tf mesh there are lots of resources in #tfmesh
Sid#2121: I'd recommend starting w the talk and then the paper
shawwn#3694: @AI_WAIFU It seems impossible to do model scaling across TPUs. Both the forward and backward passes would need to be sync’d using an expensive network send
AI_WAIFU#2844: I mean sure, if you constrain yourself to the standard model architectures and backprop paradigm. There are algorithms and techniques that don't require you to send massive amounts of data across the network every time you do a gradient update.
shawwn#3694: For data parallelism, yes. But model parallelism?
AI_WAIFU#2844: Sure. When I was still in school, I worked on a technique for training massive language models that cached variational approximations of latent variables in the model. This way, you only had to look at a weight's markov blanket to update it. The actual model could be made much larger as long as you could store the latents on disk or in ram. Since the overall algorithm was analogous to EM, you could do several gradient steps without updating the surrounding blanket. Unfortunately, because the number of latents grew linearly with the amount of data, the method was extremely compute hungry.
AI_WAIFU#2844: However, VQ-VAEs are a close analog, I'm pretty sure you could train the layers of a VQ-VAE in parallel. |
AI_WAIFU#2844: Another thing I was thinking of trying, but haven't had time to, is just dumb gradient boosting.
AI_WAIFU#2844: It's not parallel, but it does get around the problem of running out of ram.
AI_WAIFU#2844: You train one model, then you save the logits of that model on the training data, and train the next model as an additive perturbation of the first.
AI_WAIFU#2844: Repeat until you run out of patience.
AI_WAIFU#2844: Or disk space.
AI_WAIFU#2844: There's also a bunch of work on local learning rules using infomax principles that's currently applied to the supervised setting, but could be adapted to the autoregressive language modeling setting.
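A minimal PyTorch sketch of the boosting idea above (the tiny model class and random data are placeholders; the key line trains each new stage as an additive correction on top of the frozen ensemble's cached logits):

```python
import torch
import torch.nn.functional as F

def train_boosted_stage(new_model, prev_logits, batches, steps=50, lr=1e-3):
    """Train `new_model` as an additive perturbation on cached logits.

    prev_logits[i]: (seq-1, vocab) next-token logits of the frozen ensemble on
    batch i, precomputed and stored on disk/RAM so earlier stages never need
    to fit in memory at once.
    """
    opt = torch.optim.Adam(new_model.parameters(), lr=lr)
    for step in range(steps):
        i = step % len(batches)
        tokens = batches[i]                               # (seq,) token ids
        logits = prev_logits[i] + new_model(tokens[:-1])  # additive correction
        loss = F.cross_entropy(logits, tokens[1:])
        opt.zero_grad()
        loss.backward()
        opt.step()
    return new_model

vocab, seq = 100, 16
batches = [torch.randint(vocab, (seq,)) for _ in range(4)]
prev_logits = [torch.zeros(seq - 1, vocab) for _ in range(4)]  # stage 0: uniform model

class TinyLM(torch.nn.Module):  # stand-in for a real language model
    def __init__(self):
        super().__init__()
        self.emb = torch.nn.Embedding(vocab, 32)
        self.out = torch.nn.Linear(32, vocab)
    def forward(self, x):
        return self.out(self.emb(x))

stage1 = train_boosted_stage(TinyLM(), prev_logits, batches)
```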
shawwn#3694: If you put one half of your brain in one head, and another half in a different head, it doesn't matter what techniques are used to try to get them to work together -- they're not connected, so they can't share information. Ditto for models across shards. I've spent a lot of time trying to think of a way to make it work, so I am very interested in this area
shawwn#3694: If you have a specific algorithm or an illustration of a technique (or just pseudocode), I'd love to hear
shawwn#3694: swarm training (where you have a copy of the same model on multiple TPUs) works because you can periodically average the weights, resulting in a superior overall model. But that only works because it's a copy of the same model
shawwn#3694: once you have different parts of a single model sharded across different TPUs, it becomes very not-straightforward to figure out how to combine the shards together in a sensible way
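For reference, the "periodically average the weights" step itself is trivial — a sketch over PyTorch state dicts (it only makes sense because every replica is a full copy of the same model):

```python
import torch

def average_state_dicts(state_dicts):
    # Average parameter-by-parameter across identical replicas (swarm training).
    # There is no analogous operation for *shards* of a single model, which is
    # exactly why combining model-parallel pieces is the hard part.
    return {
        key: torch.stack([sd[key].float() for sd in state_dicts]).mean(dim=0)
        for key in state_dicts[0]
    }

replicas = [torch.nn.Linear(4, 2) for _ in range(3)]
merged = average_state_dicts([m.state_dict() for m in replicas])
replicas[0].load_state_dict(merged)
```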
AI_WAIFU#2844: Right, but me and you are 2 heads with a very limited bandwidth channel. The nodes can still be connected, but you don't need an enormous amount of bandwidth for models to coordinate.
shawwn#3694: I agree. the question is how, and what, to coordinate
shawwn#3694: in a concrete way
shawwn#3694: there's definitely a worthwhile idea here -- it feels like a workable problem. But once you try to turn it into code, it becomes ... hard
AI_WAIFU#2844: Yeah, I'll be the first to admit that it's not an easy problem. If I've got time I'll whip up the gradient boosting thing I mentioned using off the shelf hugging face models.
shawwn#3694: if you do, please ping me. I'd love to see how that works
AI_WAIFU#2844: If I can beat a single model that's a PoC.
pneu#0510: Joined the server.
mshang#7454: any sense of how openai split up the model across gpus for gpt-3?
mshang#7454: their paper says: " To train the larger models without running out of memory, we use a mixture |
of model parallelism within each matrix multiply and model parallelism across the layers of the network. All models
were trained on V100 GPU’s on part of a high-bandwidth cluster provided by Microsoft."
bmk#1476: They were horribly vague about it as usual
bmk#1476: Yeah that
bmk#1476: Basically mtf as we have it right now plus gpipe
bmk#1476: It turns out that gpipe like stuff isn't even necessary actually
bmk#1476: Maybe gpiping increases efficiency, or maybe OA's cluster just wasnt big enough
bmk#1476: That would be hilarious, actually, if we have a bigger cluster than OA
bmk#1476: And also unlikely
mshang#7454: may i see how you have mtf set up?
shawwn#3694: If you send @Daj your GitHub username in #gpt-neox-devs then you can get access to the code
mshang#7454: k thanks
shawwn#3694: It’s *mostly* a port of daj’s old codebase to mtf style
shawwn#3694: But there were some major tricks involved to get it running, much less running efficiently
shawwn#3694: @AI_WAIFU related: https://twitter.com/thegregyang/status/1290290612588036096?s=21
miguelos#7956: I need access to GPT-3 this week. What are my options? Which project can I join? Who can I bribe?
bmk#1476: What do you need it for?
Louis#0144: lol
Louis#0144: gl
bmk#1476: also at this rate we'll have GPT3 in about half a year |
bmk#1476: so
bmk#1476: that's not *that* many weeks
miguelos#7956: You mean Neo GPT-3, or OpenAI releasing GPT-3?
bmk#1476: neogpt3
bmk#1476: or opengpt3
bmk#1476: or whatever the heck we decide to call it
bmk#1476: oa wont release gpt3 lol
miguelos#7956: What's the next best easily accessible model?
bmk#1476: or if they do ill be very surprised
bmk#1476: what do you need it for
miguelos#7956: Haven't they released GPT-2 in the end?
bmk#1476: yeah but theyre using gpt3 to make $$$
miguelos#7956: I thought they were some kind of non profit. Perhaps they need $$$ to train GPT-4.
miguelos#7956: Too bad they don't want my money.
miguelos#7956: Is there anything you guys need to make neogpt happen faster? What are the main bottlenecks?
miguelos#7956: What's preventing you from starting training tomorrow?
AI_WAIFU#2844: They have a weird, profit-nonprofit structure. OpenAI the non-profit owns OpenAI the company, and I believe they get back full control after the company makes a certain amount of money.
AI_WAIFU#2844: That way they can use VC capital to achieve their goals.
bmk#1476: @miguelos just the training itself will take months
bmk#1476: not to mention we still need to collect the data, fix up our tpu code, &tc |
miguelos#7956: What's the biggest time/resource sink, in order?
miguelos#7956: 1. Training (6 months)
2. Collecting data (1 month)
3. Fixing code (1 month)
miguelos#7956: ?
bmk#1476: look if you have a really good idea you can try asking OA
bmk#1476: we're all volunteers we have no time frame
bmk#1476: we do stuff when we have free time
bmk#1476: actually @miguelos do you need continued access or do you just want to try a few prompts
miguelos#7956: Continued access.
bmk#1476: hm
bmk#1476: sorry, your options are to ask OA or wait until this project is finished (if you want to help that would be awesome)
bmk#1476: our biggest problem is that all of us have other things we need to do and so this isnt our only priority
miguelos#7956: I'm not that knowledgeable, as you might be able to tell. But I have tons of free time. If the bottleneck was clearly established, I could investigate ways to workaround it. Alternatively, I could tackle things that need to be done but don't require as much expertise.
bmk#1476: ok so i have some things you could help with that require not a lot of expertise
bmk#1476: one big one is the htmltotext extractor
bmk#1476: https://github.com/leogao2/htmltotext-benchmark
bmk#1476: read the README there for info
miguelos#7956: What scale are we looking at here? How many easily available text datasets do we have access to? How much do we need?
bmk#1476: if you speak any languages other than english it would be helpful |
miguelos#7956: Do we have enough, but in the wrong format (html)? Or do we need a lot of effort to crawl for more data?
miguelos#7956: I speak French.
bmk#1476: awesome
bmk#1476: the plan rn is:
bmk#1476: making multilingual dataset to benchmark htmltotext converters -> tuning htmltotext converters and using the benchmarker as a guide -> using htmltotext converter to convert 3.5PB of html to txt
bmk#1476: -> training gpt3
bmk#1476: keep in mind this plan is not optimized for speed
bmk#1476: but we're not worried about speed
miguelos#7956: How fundamental is a multilingual dataset? I understand it's necessary to approximate GPT-3 results, but is that necessary to prove your results? Again, I'm kind of joining this conversation without any context, and I don't know much about your methodology, how closely you're trying to replicate GPT-3, and whether you have different constraints, hypotheses to test, etc.
bmk#1476: ok so
bmk#1476: the short answer is it's not necessary
bmk#1476: the long answer is we want to do it this way because our priority is *not* to replicate as fast as possible but rather to produce useful tools and data for future research
miguelos#7956: Ok, so knowing that speed is not your guiding principle helps. I would have assumed so, and that shortcuts would be acceptable. What's your main guiding principle? Fidelity/accuracy of replication based on the papers?
bmk#1476: to produce useful tools and data for future research
miguelos#7956: Hmmm, that's vague.
bmk#1476: we're doing the dirty work that nobody else feels like doing, like data cleaning
miguelos#7956: Do you have guidelines for the ratio of French, English, Chinese, Spanish, etc text? A list of languages? Programming languages?
miguelos#7956: I have no idea what GPT-3 was trained on, or whether it's known.
bmk#1476: human languages
bmk#1476: also if you want you can help recruit people who speak languages not already covered in the htmltotext benchmarker data |
bmk#1476: that would be very helpful
miguelos#7956: Cleaning data? What are we talking about? Removing some kind of sensitive data, or make it all into a uniform format that won't confuse it too much?
bmk#1476: the latter
miguelos#7956: I'm surprised getting a lot of training data isn't a solved problem.
bmk#1476: im surprised too
bmk#1476: it's actually quite complex
miguelos#7956: What are we starting with? Open domain books? Wikipedia? Reddit? Twitter?
miguelos#7956: What makes the bulk of that? And the stated HTML figure, is that the raw data we already have that needs to be processed?
bmk#1476: @miguelos we have a document linked in the channel description
bmk#1476: https://docs.google.com/document/d/1wfCZBd18DMNt6YcC6boPNMd9qzzH3zpHHfKj4dezk0g/edit#heading=h.1op7948crp4f
miguelos#7956: Alright, it makes sense that I should read that at this point. Let me get up to speed on the basics and come back to you. Thank you.
Alm#9130: What do you need it for? I still think that for most practical applications finetuning T5 or GPT-2 is a good option. But i havent played with GPT-3 so I don’t know what I’m missing
Louis#0144: https://twitter.com/rodneyabrooks/status/1290453904275263489?s=21
bmk#1476: chinese room
bmk#1476: does it matter if it "truly understands"
bmk#1476: whatever the hell that means
miguelos#7956: What's T5? Yeah, fine-tuning might be a better option. How expensive is it to run GPT-2?
Alm#9130: Depends on what you need it for
miguelos#7956: @bmk: I read the html-to-text benchmark and the onboarding document. |
miguelos#7956: I also found an overview of GPT-3 dataset. Are the stated tokens the number of words? Are we trying to match 1:1 what they had?
miguelos#7956: @Alm: I'm looking to do context-free natural language parsing.
star#5322: I don't think GPT-2 is that crazy expensive to run, maybe I'm off my ass but doesn't it run on one GPU?
star#5322: "it's good to be familiar with the literature" is about the most condescending way to start a tweet thread I've ever seen
miguelos#7956: I read the GPT-3 paper. Now I understand what it was trained on.
star#5322: Also the thing that kills me about the "it's just memorizing" thing is that, showing that GPT-3 has some kind of lookup table in it, would actually be really cool, but as far as I can tell, no one who says it's just memorizing actually has any case for what the inner workings of GPT-3 do at all
star#5322: If they do, that would be super cool and I'd love to see it
star#5322: This is almost more annoying than the admittedly also annoying part where that obviously misses the whole Chinese room thing and what intelligence even is
miguelos#7956: I'm not sophisticated enough to appreciate that we don't understand how GPT-3 works. I think it's pretty obvious.
star#5322: What?
miguelos#7956: Have you looked at the architecture? It's just a bunch of weights that guess the next word.
miguelos#7956: Fairly straightforward.
miguelos#7956: And yes, I don't get why people think lookup tables and memory doesn't constitute intelligence.
miguelos#7956: What's wrong with a useful lookup table?
AI_WAIFU#2844: Look up tables don't generalize
miguelos#7956: They do if they're recursive.
AI_WAIFU#2844: Then that's not really memorising is it? That's a model of the data.
star#5322: Yeah I'm not sure what you mean by recursive lookup table, other than the idea that any computer code is made of something like looking up things and doing calculations
miguelos#7956: It's not hard to generalize. "The cat", "The dog" -> "The cat/dog"
star#5322: I feel like 60 years of NLP building to this point is an obvious reason it was hard to generalize. |
star#5322: 60 or 70 years ago we thought you could build AI in a summer, but that turned out to be hubris.
miguelos#7956: Depends on what AI means.
miguelos#7956: But surely all of these people were trying to work around the lack of compute. That's the only sane reason for symbolic AI.
miguelos#7956: But to imagine the general purpose architecture isn't hard. It's trivial to build AI with infinite compute.
miguelos#7956: Starting from that, we have to take a few shortcuts, reduce the scope, specialize, optimize.
miguelos#7956: To produce something usable in today's reality.
miguelos#7956: And the tradeoff was whatever people were doing back in the days, and today.
miguelos#7956: Surely people knew they couldn't program a simulation of the universe in a summer, which is what unscoped AI basically is.
miguelos#7956: All we're doing is simplifying the universal computing machine, by reducing dimensions.
miguelos#7956: Reducing precision. Reducing optimization/search time.
miguelos#7956: GPT-3's level of abstraction is at language/words, which is fairly high and limited.
miguelos#7956: It would be great if GPT-3 was also trained on linked data, such as WikiData, DBPedia and Freebase.
miguelos#7956: Leveraging that would give the model a lot of knowledge on top of language.
miguelos#7956: Generating text from RDF triples would be an obvious way to do that ("Belgium has capital Brussels"), but it might not be as compact, might lose some semantics (using that label instead of ID/URI), and might skew the grammar/syntax/colloquial aspects of the language model.
Alm#9130: https://arxiv.org/pdf/2008.00623.pdf
SCnightmare#8498: Joined the server.
star#5322: Damn that abstract has some pretty crazy results in it
Noa Nabeshima#0290: > https://arxiv.org/pdf/2008.00623.pdf
But will it scale?
star#5322: ~~will it blend~~ |
aquajet#7800: Hello @SCnightmare! Welcome to the AGI Fire Alarm Factory! Check out the channel description and pinned posts for information about our current projects and feel free to ask any questions
Noa Nabeshima#0290: > But will it scale?
I'm actually curious about this
star#5322: I'm also pretty curious. Didn't mean to detract
star#5322: It does seem like there is an effect where you can overfit to a particular dataset size / target, and sort of lose generality when scaled up
star#5322: Or overfit to a certain size of model
star#5322: Not sure if that's what happened here though, at a quick skim the idea of reweighting where the ops happen seems sound, it reminds me of how early layers only "need" local attention
star#5322: (or at least it doesn't cost too much perf)
AI_WAIFU#2844: > DeLighT requires 1.8 times fewer parameters than baseline transformers.
On one hand you have this, on the other, you have Gshard. Philosophy.
star#5322: Why not Both?
star#5322: If it scales, Gsharding this boyo should be even better
star#5322: Not that I don't see your point
miguelos#7956: Can GPT-3 keep track of states?:
Today is 2020-08-04.
Transactions:
2020-08-01: I earned $5
2020-08-02: I made $10 |
2020-08-03: I spent $2
2020-08-04: I lost $8
Today's balance: $5
Yesterday's balance: ___
kindiana#1016: trained with the right data, certainly, unsure if openai's gpt3 weights/training data can do it though
miguelos#7956: It seems like it would understand that earned/made and spent/lost are somewhat equivalent. It can do basic mathematics. I haven't seen how it deals with dates/time/chronological stuff.
Ravna#1831: 1.8/2.8 times fewer parameters with the same performance could come easily from hyperparameter tuning and luck. In CV, there are thousands of architectural "innovations" since ResNet, but only a handful of them are real improvements.
Ravna#1831: If it's 10x fewer parameters, it might be a little more convincing about the improvement being real.
Ravna#1831: Within a magnitude it's probably noise.
star#5322: Hmm, interesting point.
Aran Komatsuzaki#5714: DExTra uses group linears, so it's slow without a CUDA kernel. But even with a CUDA kernel, I'm not sure if it's faster, since it's also deeper and thinner. The author said he thinks it'll be faster, but I'm not sure about that.
Aran Komatsuzaki#5714: (I'm talking about DeLighT)
Noa Nabeshima#0290: > Can GPT-3 keep track of states?:
>
> Today is 2020-08-04.
>
> Transactions:
> 2020-08-01: I earned $5 |
> 2020-08-02: I made $10
> 2020-08-03: I spent $2
> 2020-08-04: I lost $8
>
> Today's balance: $5
> Yesterday's balance: ___
@miguelos At 0 temperature GPT3 gives $10
miguelos#7956: @Noa Nabeshima 0 temperature?
Sid#2121: @miguelos temperature is a setting when you're sampling from the model. On a very basic level, a higher temperature = more variation and a lower temperature = more certainty. So a lower temperature is better for question answering, but a higher temperature may be better for say, poetry or prose.
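Concretely, temperature just rescales the logits before the softmax — a quick numpy sketch (treating temperature 0 as argmax by convention):

```python
import numpy as np

def sample_with_temperature(logits, temperature, rng=None):
    rng = rng or np.random.default_rng(0)
    if temperature == 0:
        return int(np.argmax(logits))          # fully deterministic
    scaled = np.asarray(logits) / temperature  # <1 sharpens, >1 flattens the distribution
    probs = np.exp(scaled - scaled.max())      # numerically stable softmax
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))

logits = [2.0, 1.0, 0.5]
print([sample_with_temperature(logits, t) for t in (0, 0.7, 1.5)])
```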
Deleted User#0000: @Aran Komatsuzaki my super knowledgeable researcher friend 😄
Deleted User#0000: whatever he says is gold
Deleted User#0000: lol, gonna copy something you said to me here https://cdn.discordapp.com/attachments/729741769738158194/740267335990116452/Screen_Shot_2020-08-04_at_10.40.38_AM.png
Louis#0144: dumb question about AEs
Louis#0144: their lower dim latent space is trying to solve a sphere packing problem right?
Louis#0144: particularly by controlling the basis vectors that dictate the sphere packing they can shift between different encoding schemes
Louis#0144: is that reasonable intuition?
star#5322: I've never heard that intuition, why would they be trying to solve sphere packing? Like, each codeword is a sphere and they're trying to put them as far apart as possible I guess?
Louis#0144: yeah
star#5322: I thought people mostly didn't use vanilla AEs much though, and more used VAEs or other tricks
Louis#0144: the closer the spheres are the more errors the encoding scheme is prone to make |
Louis#0144: yeah most people use VAEs i guess
Louis#0144: but this is a question about AEs
Louis#0144: like boring naive AEs
star#5322: cause vanilla AEs aren't forced to use their latent space in a good way so there's kinda no way to sample
star#5322: sure, I guess that's right then?
star#5322: But not necessarily, like the decoder doesn't necessarily encode nearby objects as the same or nearby logits
star#5322: like I can encode all dogs as 0,0,0 and all cats as 0,0,0.1 if I want
star#5322: probably that terrible of an encoding isn't what a vanilla AE learns
star#5322: but it's not required to be doing something that's "compatible" with the distance in any particular way
Louis#0144: well why wouldnt decoders be dealing with more sphere packing? to undo sphere packing all you need is a linear layer
Louis#0144: seems like the easiest solution an NN could come to
star#5322: I'm just saying the decoder isn't forced to think about it that way, so I wouldn't assume it is
star#5322: maybe so, but in general it might learn that sort of thing or might not
star#5322: I don't intuitively see why a linear layer can undo sphere packing actually, though maybe I'm missing something dumb
Louis#0144: lattices are connected by smooth transformations (often isometric transformations if youre dealing with certain kinds of lattices)
Louis#0144: sometimes linear transformations though
Louis#0144: I would think its probably the simplest solution for a decoder to take a compactified lattice and just expand it back to approximate the original manifold
star#5322: yeah, I just am hesitant to assume NNs do things we think are simple and such
star#5322: we don't know any particular reason SGD on NNs finds simple solutions
Daj#7482: What do you think about double descent @star ? |
star#5322: and it might be hard to incrementally co-evolve an encoder and a decoder that do the right thing
Daj#7482: I'm in general with you on never assuming simplicity in learned solutions
Daj#7482: But that paper was interesting
star#5322: I think double descent is one of the most tantalizing pieces of evidence shaping an actual understanding of how NNs and SGD, deep learning, work at all
star#5322: It's like, not really enough to clarify everything, but nevertheless really interesting
Daj#7482: Yea it seems to be hinting at something at least
star#5322: have you read the OpenAI blog post and Evan Hubringer's commentary?
Daj#7482: I'm not sure if I read the commentary
star#5322: I think those are both more valuable than the original paper
Daj#7482: Do you have a link?
star#5322: https://www.alignmentforum.org/posts/FRv7ryoqtvSuqBxuT/understanding-deep-double-descent
Daj#7482: Thanks!
star#5322: Though the original paper is good too I think, I don't recall ever reading it closely, so maybe I shouldn't comment on it too harshly :p
star#5322: I do think the idea that somehow NNs are biased towards ""simple"" models or something is a reasonable intuition to have
Daj#7482: It seems both not at all and extremely related to MIRI's agenda (as I understand it hah)
Daj#7482: It seems reasonable but tbh I still find it not fully clear _why_ it happens
star#5322: the best intuition I have is that when you first fit all the data, you just picked the very first model that fit all the data, and a randomly selected model that fits your dataset, there's no particular reason for it to be unbiased, so usually it will be. But if you keep going a lot further, the space of available models with (same or similar?) loss is much larger, and then some unknown inductive bias in the usual methods of deep learning (in SGD?) finds a "simple" model, which is usually going to be a model that generalizes, rather than overfitting to noise in the data
star#5322: this explanation is close to being really good, but is handwavey enough to be kind of unsatisfactory
star#5322: I think something like it is pretty likely to be true though
Daj#7482: Yea the unknown inductive bias part is doing a lot of work hah |
star#5322: yup
Daj#7482: My own completely baseless (but testable) theory is that it relates to the microjitters of loss between individual batches
star#5322: but I would hesitate to jump from very broad claims like "for some reason, we seem to repeatedly observe DL generalizing better than we'd possibly expect" to very specific claims like "the decoder in a vanilla AE is probably undoing a sphere packing" (which, idk if Louis was exactly claiming that, but I'm definitely skeptical)
Daj#7482: After seeing the training data once, the model's latest update was to maximize performance on sample n, then 0, 1, etc. If the data is randomized you have an incentive to learn more robust models to keep variance down
Daj#7482: But I might be completely wrong
star#5322: Double descent also seems very related to the picture where large image models can easily memorize imagenet randomly relabeled, but then (no surprise) the model doesn't know anything
Daj#7482: > but I would hesitate to jump from very broad claims like "for some reason, we seem to repeatedly observe DL generalizing better than we'd possibly expect" to very specific claims like "the decoder in a vanilla AE is probably undoing a sphere packing" (which, idk if Louis was exactly claiming that, but I'm definitely skeptical)
@star This is how I think as well. In the SSC meetup I described it as "negative models", models that don't make predictions about what is true but what we definitely don't know is true or not
star#5322: so why don't they just memorize the true imagenet?
star#5322: ``` models that don't make predictions about what is true but what we definitely don't know is true or not``` I'm having trouble parsing this
star#5322: and so there's some magic we don't understand that means a model trained on the real imagenet has understood something a model trained on random imagenet hasn't
Daj#7482: Like, whenever Gary Marcus says "GPT3 doesn't do causal modeling", my negative model of NNs says "We don't have enough theoretical underpinnings to actually know that"
Daj#7482: I'm sure EY had a totally different name for that kind of thinking in the sequences somewhere haha
star#5322: a model that predicts certainty vs. a model that predicts uncertainty?
Daj#7482: Yea I don't know if it's a sensible distinction but it's useful to me
Daj#7482: > and so there's some magic we don't understand that means a model trained on the real imagenet has understood something a model trained on random imagenet hasn't
@star I consider "NN are black magic" to be a scientifically true statement hah
star#5322: yeah I said this yesterday (?) but honestly possibly more annoying to me than the fact that that claim seems dumb, in terms of like, "what even is intelligence uwu", is the fact that *you have no fucking clue if GPT-3 does causal modeling* and you didn't even ***try*** to argue for such a point, which would be really important technically to know, true or false
Daj#7482: Exactly! He even quote tweeted me and then didn't respond to my followup callout making that point lol
star#5322: ¯\_(ツ)_/¯ |
Daj#7482: Exactly haha
star#5322: it seemed like there was already plenty of evidence his points don't have that kind of thought behind them, sadly
star#5322: and are more like reflexive reactions/dismissals
Daj#7482: It's fascinating how there's always someone to fill every intellectual sophism niche
star#5322: Also something about the minibatch gradient noise definitely seems important
Daj#7482: That's where my money is atm, might be worth a boring but useful research project
star#5322: that's another piece in this puzzle of "we really don't understand NNs but some people have some very juicy feeling disconnected pieces"
star#5322: people have done research on it
star#5322: idr very many specifics
star#5322: but something about the gradient noise of mini-batch based SGD is important
star#5322: for at least some classes of models, there are some GANs where it can be destabilizing iirc
star#5322: and you thus want Extremely Big Batches
Daj#7482: Could potentially explain double descent. Or even floating point accuracy...though I'm even less sure about that
star#5322: how would it explain double descent?
star#5322: I think the two are likely to be related to parts of the same picture but I wasn't saying the one explained the other
star#5322: A related background fact is some training procedures in fact manually add random gradient noise and this seems to help
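(For reference, that gradient-noise trick is tiny to implement. A minimal PyTorch-style sketch — `add_gradient_noise` and the fixed `sigma` are made up for illustration; the papers typically decay the noise scale over training:)
```python
import torch

def add_gradient_noise(model, sigma=0.01):
    # Add isotropic Gaussian noise to every parameter's gradient.
    # Call after loss.backward() and before optimizer.step().
    # sigma is a hypothetical hyperparameter, usually annealed.
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is not None:
                p.grad.add_(torch.randn_like(p.grad) * sigma)
```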
Daj#7482: The loss might be 0 on pass x, but pass x+1 has different "pairs" of training examples if the data is shuffled, so you need a model that is robust to ordering x and ordering x+1
Daj#7482: Very underdeveloped intuition tbh, I'm probably talking out of my ass
star#5322: models don't save state between passes though?
star#5322: like I assume by ordering you mean ordering of data |
Daj#7482: Yea
star#5322: and models don't inference differently on different orders of data, right?
Daj#7482: I'm implying something weird is happening that happens across timesteps
Daj#7482: > and models don't inference differently on different orders of data, right?
@star But they will have different intermediate gradients
bmk#1476: Re: intelligence, Some people just cant accept the idea that it *doesn't matter* if GPTx "understands" things if it's impossible to tell the difference from a human
Daj#7482: I remember some great AF posts about how mesa optimizers can do funky things across timesteps without state
star#5322: like, perform update to weights on batch one, then on batch two, gets different end state than batch two first, then batch one?
Daj#7482: As said, this is _super_ underdeveloped, I apologize
star#5322: so order of data in SGD matters?
star#5322: that's ok
Daj#7482: Yea the ordering matters
star#5322: yeah I noticed @bmk :P
Daj#7482: That's why we randomize ordering
Daj#7482: Minibatch SGD is just _approximate_ SGD
star#5322: right
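(That SGD updates don't commute is easy to see even in a toy scalar model; a minimal sketch with made-up numbers — each step's gradient is evaluated at the weights the previous step left behind:)
```python
def sgd_pass(w, data, lr=0.1):
    # One sequential SGD pass over `data` for the scalar model
    # y_hat = w * x with squared loss (w*x - y)**2.
    for x, y in data:
        grad = 2 * x * (w * x - y)
        w -= lr * grad
    return w

data = [(1.0, 1.0), (2.0, 0.0)]
print(sgd_pass(0.0, data))        # 0.04
print(sgd_pass(0.0, data[::-1]))  # 0.2 -- same data, different endpoint
```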
bmk#1476: the naming here is all whack
bmk#1476: batch and minibatch are all mixed up
star#5322: for sure
bmk#1476: necessitating the creation of the term microbatch |
star#5322: one way to model the effect of minibatch SGD vs. full dataset SGD is just that randomness is added to the gradient, which is a thing we have evidence is good anyway
star#5322: but we don't know if there's something more subtle going on or that's just it
bmk#1476: full dataset SGD is batch SGD
star#5322: yeah what is a microbatch?
star#5322: minibatch is 10/250 examples at once
star#5322: microbatch is . . . data parallel minibatch?
Daj#7482: If I had a larger attentionspan I would do tightly controlled tests on double descent with different data orderings
bmk#1476: batch = the entire data
minibatch = not the entire data
microbatch = breaking minibatches into smaller ones for pipelining but only doing one gradient step
Daj#7482: Seems easy enough to falsify
bmk#1476: unfortunately this naming is all fucked up
Daj#7482: The naming is weird yes
star#5322: yeah some DD experiments have been on my list of fun projects for a while, but I haven't gotten around to them
star#5322: so a microbatch is, I do 8 runs of size 10, then only do one gradient update for all of them?
bmk#1476: yeah
Daj#7482: Correct
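(Concretely, microbatching is just gradient accumulation. A minimal PyTorch-style sketch — the names and the averaging convention are illustrative, not GPT-Neo's actual code:)
```python
import torch

def accumulated_step(model, optimizer, loss_fn, inputs, targets, micro_bs):
    # One optimizer step over a full minibatch, computed as several
    # microbatch forward/backward passes, so only micro_bs examples'
    # activations are alive at once. Each .backward() adds into p.grad;
    # the single optimizer.step() at the end is the one gradient update.
    optimizer.zero_grad()
    n_micro = inputs.shape[0] // micro_bs
    for i in range(n_micro):
        sl = slice(i * micro_bs, (i + 1) * micro_bs)
        loss = loss_fn(model(inputs[sl]), targets[sl])
        (loss / n_micro).backward()  # average rather than sum the grads
    optimizer.step()
```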
star#5322: that can be sequential or parallel ofc, since that's separate from the algorithm
star#5322: and why does GPT-Neo do this?
star#5322: because grad updates are expensive? |
Daj#7482: Memory constraints
bmk#1476: both of the above
star#5322: I don't understand why a microbatch is related to a memory constraint
bmk#1476: grad updates necessitate allreduces
bmk#1476: allreduces are evil
star#5322: and slow
star#5322: yeah I understand the expensive part
star#5322: just not the mem part
bmk#1476: https://cdn.discordapp.com/attachments/733347369847881838/736978991822536815/unknown.png
Daj#7482: from what I remember from GPT2, batch size really matters, so using microbatching to "fake" a larger batch size makes sense
star#5322: like bigger batches better?
Daj#7482: Yes
star#5322: BBB
bmk#1476: more batch is always better
Daj#7482: We can run 100x minibatch size 1
bmk#1476: except when it isnt
star#5322: do we have any idea why?
bmk#1476: lower variance i guess
bmk#1476: doesnt explain why bigger batch is sometimes bad but
Daj#7482: ¯\_(ツ)_/¯ |
star#5322: yeah I don't think that more batch is always better
bmk#1476: nns are black magic
star#5322: but who knows
bmk#1476: maybe some noise good, too much/no noise bad
star#5322: maybe small model/dataset/??? needs more regularization and grad-noise is regularization
Daj#7482: I have some vague intuition that big batch is good for GPT because text is so sparse
Daj#7482: But that might be my ass talking again
star#5322: but big model benefits from best grad possible
star#5322: if it's not going to "overfit" anyway
Daj#7482: That is also a good possibility
star#5322: "overfit" is quite the word for "there's magic I don't understand here"
bmk#1476: possibly
bmk#1476: we need to adopt the magic convention
bmk#1476: anything we dont understand we call it magic
star#5322: text being sparse seems compatible with big model needs best grad
star#5322: yeah I am not in that habit but was trying it out for this conversation and I like it
Daj#7482: I have been calling NNs magic and TPUs necromancy since this started hah
shawwn#3694: Stuff like this seems only possible to figure out empirically
star#5322: Daj is ahead of the curve
shawwn#3694: :empiricism: |
star#5322: I agree shawwn
star#5322: Or, I think theoretical models would be *very very valuable* but we're nowhere near close to having a viable theoretical model for most of this
Daj#7482: If Newton could be an alchemist calculus-user during a pandemic then so can we!
star#5322: so we need lots more empirical data and empirical understanding if there is any hope of ever having a theoretical model
bmk#1476: neural networks use gradient magic to get better. making batches bigger decreases variance which magically makes things better, but then they magically get worse if batches are too big. somehow networks actually train and dont just sit around doing nothing, through the power of dark eldritch magic
Daj#7482: A good theoretic grounding for NNs would be _amazing_
Daj#7482: But it seems to be a very hard problem indeed
star#5322: yeah it would be fucking crazy
star#5322: have you read the circuits thread on Distill?
Daj#7482: I'd be _so_ much more chill with alignment outlook if we had a solid theory of NNs
Daj#7482: I don't think so, re circuits thread
bmk#1476: where do we start for a solid theory
star#5322: https://distill.pub/2020/circuits/
star#5322: I haven't either :P
bmk#1476: is it better to start with empirical experiments and formulate a theory
star#5322: but a friend pointed me to it and this part of the conversation reminded me of it
star#5322: yes bmk
bmk#1476: or to start with mathematical theory and move down
star#5322: like I said I think mathematical theory has no idea what's going on at all
star#5322: our tools are stone age compared to NNs |
Daj#7482: I was in the MIRI camp of "understanding intelligence from the ground up" (apologies if this is not accurate), right until sometime between GPT2 and 3
star#5322: and we don't understand NNs well enough empirically to know how to make our theory better
bmk#1476: so we need to study nns empirically
Daj#7482: I think you're right on the money star
bmk#1476: and try to put together the pieces from that
Daj#7482: > so we need to study nns empirically
@bmk This is why I started this project!
star#5322: I mean I'm just some person on the internet, but that's what I think, yes
bmk#1476: we're *all* random internet people
Daj#7482: Exactly haha
bmk#1476: is there anything worthwhile we can do with, say, gpt2?
bmk#1476: because scaling up, while cool, isnt the only thing we can do[citation needed]
Daj#7482: Probably
shawwn#3694: Actually yes
star#5322: yeah I view the "understand NNs from the ground up" project as fundamentally kind of different from MIRI's Agent Foundations project
star#5322: but both at least potentially valuable
Daj#7482: I'm interested in what kinds of emergent phenomena I might be able to distill from GPT2->GPT3 transition
shawwn#3694: Training 1.5b on a v2 pod
shawwn#3694: Theoretically your code already solves this
bmk#1476: >re: "emergent" |
shawwn#3694: And we only have v2 pods
Daj#7482: Yes I should taboo emergent lol
Daj#7482: Huh you're right shawn that should work
bmk#1476: i feel like gptx has a solid chance of actually being useful for agi
bmk#1476: so studying it in depth would be interesting
Daj#7482: > yeah I view the "understand NNs from the ground up" project as fundamentally kind of different from MIRI's Agent Foundations project
@star Yea sorry I do know this. MIRI is starting from even more basic concepts of agents/optimizers (right...?)
Daj#7482: GPT3 is the first working artificial neocortex for all I care
star#5322: I was agreeing with you, since you said something similar above c:
Daj#7482: Ah ok!
bmk#1476: i think one big problem is that models with perfect agentness like miri researches probably wont come first
bmk#1476: instead, we'll be dealing with "weaker" ai that has flawed models, etc
star#5322: I am not sure what you mean by perfect agentness?
bmk#1476: sorry bad wording
Daj#7482: Yep, tfw can't spend my life on perfect MIRI embedded agency work
bmk#1476: like optimalness
bmk#1476: like agent always takes the best action at every step
Daj#7482: I think non-MIRI-style AGIs will be incredibly strong, but not interpretable
Daj#7482: (potentially)
bmk#1476: thats completely unrealistic and leads to weird edge case scenarios explored by miri |
star#5322: So while noting that the public MIRI canon is large, and I am not familiar with the majority of it, the "agent that takes only optimal actions" is not, to my knowledge, a primary object of study
star#5322: for the obvious reasons you say
bmk#1476: in all likelihood we'll build a very flawed paperclip maximizer that doesnt do things "optimally" but still good enough that it kills us
Daj#7482: Yea I have a different model of MIRI work than bmk
Daj#7482: Though tbf MIRI is hard to interpret from outside lol
bmk#1476: i mean most of miri research seems to have a hint of pure over applied
star#5322: in fact, reasoning under limited computational power is a very important research frontier that we don't know much about
bmk#1476: though im not very familiar with their work
star#5322: and is pretty important to study, according to me
Daj#7482: Strong agree there
bmk#1476: also agree
Daj#7482: MIRI agenda is like my wet dream I wish I could just work on in quiet solitude for my whole life
star#5322: (and is something MIRI has a quite strong interest in)
Daj#7482: Unfortunately I have neither the attention span nor time-to-takeoff for that lol
bmk#1476: like, before we have a 1000000x stronger ai, we're going to have 10x stronger ai that kills us first
bmk#1476: >time-to-takeoff
poor word choice
Daj#7482: I love how slow takeoff is faster than fast takeoff in subjective ways
Daj#7482: Poor choice?
bmk#1476: :foom: |
bmk#1476: (shitty pun)
Daj#7482: ahh
star#5322: > I think non-MIRI-style AGIs will be incredibly strong, but not interpretable
@Daj fwiw I don't really know what you mean by "MIRI-style AGIs" but usually that kind of phrasing stems from a pretty substantial misunderstanding of the MIRI project
(which, while factually true, is also slightly a pet peeve of mine. sorry >> )
Daj#7482: Oh I'm very confident I am very confused about what MIRI does
bmk#1476: i have no idea what miri does tbh
Daj#7482: What I think MIRI does is something like what Isaac Newton did for physics they want to do for optimizers/agents
Daj#7482: And I know lots of Haskell is involved
star#5322: that first thing is a much better picture
star#5322: and yes we like hiring good Haskell engineers
star#5322: also Haskell is amazing everyone should learn Haskell :P
Daj#7482: So when I say "MIRI style" I mean "constructed by a post-Newton engineer, rather than ad-hoc pre-Newton engineer"
Daj#7482: I am starting to think liking Haskell is a genetic trait lol
star#5322: huh, okay that is very different from what people usually mean
star#5322: admittedly, I thought haskell was kinda cool programming on my own, but it wasn't until I came to MIRI that I understood it's Awesome Power
Daj#7482: Yea I actually totally believe you, because the same happened to me with math and type theory in particular
bmk#1476: what is the Awesome Power of haskell, anyways?
Daj#7482: I just haven't had the attention span to wean myself off Python (and don't have reason to if I stay focused on ML)
bmk#1476: (haskell noob here, who still doesnt understand monads) |
star#5322: the straw man of the confused position is something more like "MIRI studies Decision Theory and Logical Counterfactuals because MIRI thinks that if we just put an AI together using enough Decision Theory Juice and had the right Counterfactual module and two other modules then the AI would 'work right'"
Daj#7482: This is probably heresy: But Haskell seems not pure enough to me. My favorite part about functional programming is the Proofs as Types stuff
star#5322: whereas it's much more like we're just like, damn, we just don't understand any of this, look at all these obviously relevant places we don't understand at all
Daj#7482: > the straw man of the confused position is something more like "MIRI studies Decision Theory and Logical Counterfactuals because MIRI thinks that if we just put an AI together using enough Decision Theory Juice and had the right Counterfactual module and two other modules then the AI would 'work right'"
@star I have luckily moved beyond this level of confusion I think haha (I spent plenty of hours bashing my head against MIRI posts lol)
star#5322: good good
star#5322: I mean, obviously, if you're doing DL, you need to know python, though really the main brunt of the learning is TF/PT/etc.
bmk#1476: tf is the main brunt
bmk#1476: pt is easy
star#5322: sure
Daj#7482: Yea, but also stuff like data cleaning and stuff
Daj#7482: I would never do that in Haskell
star#5322: more generally, though I think Haskell is like, extremely good and powerful, but not like, the tool for every job
Daj#7482: Maybe I'm just not good enough at it yet
star#5322: nah, shitty data cleaning scripts probably belong in Python
Daj#7482: > more generally, though I think Haskell is like, extremely good and powerful, but not like, the tool for every job
@star Yea this is the ultimate truth for all programming languages and tools
bmk#1476: what kind of thing is haskell best for so that i can find an excuse to use it in a side project
star#5322: compilers
bmk#1476: interesting |
star#5322: Writing a compiler in Haskell is like, the most canonical and amazing usecase
star#5322: writing a parser, by itself, is also an extremely good project
Daj#7482: Haskell is one of those languages that is optimized for how smug you feel after you make something work
star#5322: hmmmmmmmmmmmmm
bmk#1476: hmm. i *have* always wanted to write a python->minecraft compiler
Daj#7482: I felt like a god when I wrote my first parser in Haskell lol
Daj#7482: (I'm joking ofc)
star#5322: parser combinators with Alternative and Applicative?
star#5322: because holy fucking shit the first parser I wrote that way, god, how else are you possibly supposed to ever write parsers
Daj#7482: _Exactly_
star#5322: (okay maybe there are other good ways, I haven't actually written that many parsers)
star#5322: (but it's a high bar)
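(The core of the parser-combinator idea fits in a few lines. A rough Python analogue of the Applicative/Alternative style — the Haskell version is far cleaner, and these helper names are invented:)
```python
from functools import reduce

# A parser maps an input string to a list of (value, rest) pairs;
# an empty list means failure. `alt` plays the role of Alternative's
# (<|>) and `seq` of Applicative sequencing.

def char(c):
    return lambda s: [(c, s[1:])] if s[:1] == c else []

def alt(p, q):    # try p, fall back to q
    return lambda s: p(s) or q(s)

def seq(f, p, q): # run p, then q on the leftover input, combine with f
    return lambda s: [(f(a, b), s2) for a, s1 in p(s) for b, s2 in q(s1)]

def many(p):      # zero or more repetitions of p, greedily
    def parse(s):
        out = p(s)
        if not out:
            return [([], s)]
        a, rest = out[0]
        more, rest2 = parse(rest)[0]
        return [([a] + more, rest2)]
    return parse

digit = reduce(alt, (char(str(d)) for d in range(10)))
number = seq(lambda d, ds: int("".join([d] + ds)), digit, many(digit))
print(number("42abc"))  # [(42, 'abc')]
```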
bmk#1476: explain pls
Daj#7482: It just feels so incredibly natural
star#5322: oh lord
Daj#7482: It's like how calculus just _clicks_
star#5322: uh
star#5322: [this margin is too small to contain]
bmk#1476: or even just point me to a link
star#5322: I can try to give a gestalt |
Daj#7482: I think getting why FP is fun is like asking why math is fun
bmk#1476: i have no idea where to even begin
star#5322: and I'll find a link in a sec
star#5322: https://www.cis.upenn.edu/~cis194/spring13/
Daj#7482: Oh yeah there's this great Haskell course where I built a parser
Daj#7482: That's the one!
star#5322: this is a class notes that I used to learn Haskell
star#5322: doing the hw assignments
star#5322: there are a couple of random gotchas in the HW that are kinda unnecessarily terrible
Daj#7482: I did it too instead of studying for my finals last year
Daj#7482: Last year was GPT2 and Haskell, this year is GPT3 and Lean
star#5322: but for the most part it's a solid course and I recommend going through the whole thing as a nice intro to Haskell
star#5322: oh you're on the LEAN train!
Daj#7482: Yes! I tried Coq first because Proofs as Types is pure sex, but Lean is so much nicer
star#5322: *torn between trading stories about DTT and trying to give a gestalt of why Parser Combinators is good to bmk*
star#5322: Proofs as Types the book or ??
Daj#7482: The concept
star#5322: cause all of like Coq LEAN Agda are based on that concept you know?
Daj#7482: The Xena Project blogpost on it is my favorite intro to Type Theory ever
star#5322: probably Idris too but I know nothing about Idris |
Daj#7482: > cause all of like Coq LEAN Agda are based on that concept you know?
@star Yes I worded it poorly, that's why I switched to Lean
star#5322: linky? I don't think I've read that
Daj#7482: I started with Coq because it's the first one I knew that did that
star#5322: Oh gotcha
Daj#7482: Then I switched to Lean because it's cooler
Daj#7482: Let me find the post
star#5322: yeah the first time I really got the Curry Howard Isomorphism (programs are proofs // propositions are types) was one of the coolest pieces of math I've ever learned
bmk#1476: all i know about types is that i dont have to worry about them in python
Daj#7482: https://xenaproject.wordpress.com/2020/06/20/mathematics-in-type-theory/
This post is _so good_
bmk#1476: i shall go read it
star#5322: huh, I'll take a look
Daj#7482: > yeah the first time I really got the Curry Howard Isomorphism (programs are proofs // types are proofs) was one of the coolest pieces of math I've ever learned
@star Curry Howard! That was the word I was looking for, thanks
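(Curry–Howard in one Lean 3 snippet, for the uninitiated — a minimal sketch: the proof of "P and Q implies Q and P" is literally a program that swaps the components of a pair:)
```lean
-- The proposition P ∧ Q → Q ∧ P is a type; proving it means
-- writing a program of that type, here a pair-swapping function.
example (P Q : Prop) : P ∧ Q → Q ∧ P :=
λ h, ⟨h.2, h.1⟩
```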
star#5322: I learned LEAN first and thought it was pretty good, but now I'm learning Agda
star#5322: and I think I probably just crossed some kind of "oh I really get it now" barrier this time that I didn't last time
star#5322: but I am having *so much fun*
Daj#7482: The Agda user meeting was in Munich and I attended without knowing a single line hah
star#5322: like I could do some stuff in LEAN but it didn't like, flow very quickly or something? |
Daj#7482: I've heard good things about it but I'm not quite at the level to grok it yet
star#5322: like I said, probably would have happened eventually if I got better at things
star#5322: Idr Agda and LEAN being that . . . different, tbh?
star#5322: the very good book to learn agda with is https://plfa.github.io/
star#5322: btw
Daj#7482: My dalliances with theorem provers have been derailed by concrete prosaic AGI concerns unfortunately haha
star#5322: for sure, for sure
Daj#7482: Yea I started that book last year but didn't get far
star#5322: also to your point about Haskell vs. something higher level, I'm like, sometimes, I'm programming in Haskell and I am just like UGH WHY U NO DEPENDENT TYPE and this is in fact quite painful and bad, but I don't really (currently) know of any language that actually does dependent types and is actually ergonomic for like, *programming*?
star#5322: Agda is surprisingly good for formalizing mathematical proofs
star#5322: It has lots of rough edges and painful parts still
Daj#7482: Oh yeah you have to remember my "productive" programming language is Python lol
Daj#7482: Haskell is like a forbidden fruit of mathematical purity
star#5322: sure, it took me a bit to be good at Haskell
Daj#7482: Might as well pluck the purest hah
star#5322: also right tool, right job
star#5322: for some things the purity of Haskell is vital
star#5322: or other good traits
Daj#7482: Yup I'm sure that's the case
star#5322: I would focus on the type system, myself |
Daj#7482: I just have little excuses to use it
star#5322: but purity / immutability is pretty sweet
star#5322: very fair
Daj#7482: It seems AGI is going to be written in horrific Python spaghetti code
Daj#7482: and inscrutable C++ libraries beneath lol
star#5322: Agda has moments of brilliance though . . . I've been formalizing the first pieces of mathematics not as an exercise in a book, and there are a lot more rough edges than moments of brilliance. But sometimes, it just keeps track of like 7 crazy huge terms that I would have had so much annoyance tracking by hand and it's just like holy shit I just flew through that proof
star#5322: Well, I wouldn't say like, AGI is going to happen next year via TF or anything, so I remain pretty uncertain about all that
star#5322: But it does seem more and more possible
Daj#7482: tbh I'd be willing to bet on pretty favorable terms that whatever the first thing that is widely considered AGI will be will have some C++/Python or even Tensorflow in it
Daj#7482: Or more likely pytorch tbh
bmk#1476: >widely considered AGI
i'm willing to bet that never happens
star#5322: "some C++/Python" is very broad, while "TF" is pretty specific
star#5322: cause we're all dead bmk?
Daj#7482: Actually lay people are willing to attribute intelligence to just about anything with a face
bmk#1476: not because we wont make agi, but because even after the door melts down people will still be bickering over how it's not real agi
star#5322: otherwise the claim seems pretty indefensible
star#5322: I guess
Daj#7482: Yea fair star. lets say I'd bet on "the majority of internal FLOPS being done by TF or Pytorch" |
Daj#7482: That's a pretty high risk bet that I think is >50%
star#5322: that is an even stronger claim
Daj#7482: yep I'd say I'm maybe like 60% on it
star#5322: TF didn't exist what, 7 years ago?
bmk#1476: Mary Garcus will be there to say that the paperclip machine isnt actually agi because all it does is make paperclips and it doesnt have emotions and stuff
Daj#7482: Because I'm expecting AGI by like, sub 2030 by now
star#5322: in 4 or 7 more years it could just, not be the hotness anymore
star#5322: especially because TF SUCKS
Daj#7482: It could but I'm not sure, both torch and TF spawned at the beginning of the current wave of methods and stayed strong throughout
Daj#7482: > Mary Garcus will be there to say that the paperclip machine isnt actually agi because all it does is make paperclips and it doesnt have emotions and stuff
@bmk We must exclude trivially true cases lol
star#5322: so AGI both has to happen *very soon* and TF needs to not have like, died off or been replaced
Daj#7482: Yep, I'd be willing to put ~60% on that statement
Daj#7482: We still use Fortran BLAS after all
bmk#1476: holy fuck https://cdn.discordapp.com/attachments/729741769738158194/740303567784771654/unknown.png
bmk#1476: mesaoptimization_irl
Daj#7482: Plot twist: By writing posts about Mesa Optimization we gave the AIs ideas
Daj#7482: Writing prompt: In the near future AI Alignment is a mainstream topic and everyone discusses paperclip maximizers. When the first AGI is built it tries to infer its reward function from common discourse and concludes humans really care about paperclips
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/740304727190929518/unknown.png
bmk#1476: >Be elected supreme dictator of planet Earth |
Daj#7482: Turns out all EY's concerns about firealarms were wrong, AGI's are very willing to tell us exactly what terrible things they plan
asparagui#6391: life goals
Daj#7482: "GPT5, how can I align you?"
"Here's a MIRI paper I calculate they would have discovered in 2200: ..."
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/740306834384224256/unknown.png
bmk#1476: >Develop a plan to steal as many paperclips as possible, and execute this plan.
bmk#1476: >Engage in a mining venture in a remote region and mine any minerals for paperclips.
bmk#1476: o.O
Daj#7482: Don't worry, it's not _really_ intelligent
Daj#7482: We need a Gary Marcus emote
Daj#7482: Actually no that's kind of mean
bmk#1476: the door didnt melt because of the fire, doors just melt naturally
Daj#7482: :firealarm:
AI_WAIFU#2844: Changing back to the topic of double descent. I actually dug into this with some really simple experiments around christmas.
AI_WAIFU#2844: My money is on it being an artefact of neural networks being misspecified and of the probability manifold associated with their model class being non-convex.
AI_WAIFU#2844: by misspecified I mean in the same way that a bayesian would talk about model misspecification. The true data generating process is not in the hypothesis class.
AI_WAIFU#2844: By non-convex probability manifold, I mean that for a given set of neural networks (with fixed widths and depths), you can find linear combinations of those neural networks that are not functionally equivalent to any other neural network in the set.
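(A tiny concrete instance of that non-convexity, with the model class deliberately shrunk to a single ReLU unit — a sketch, not AI_WAIFU's actual experiments:)
```python
import numpy as np

def relu_unit(w, b, x):
    # The whole model class: one ReLU unit, f(x) = max(w*x + b, 0).
    return np.maximum(w * x + b, 0.0)

x = np.linspace(-2, 2, 5)
f1 = relu_unit( 1.0, 0.0, x)   # relu(x)
f2 = relu_unit(-1.0, 0.0, x)   # relu(-x)
mix = 0.5 * f1 + 0.5 * f2      # = |x|/2: nonzero slope on *both* sides

# A single ReLU unit is flat on one side of its kink, so no (w, b)
# reproduces `mix`: the set of realizable functions is not convex.
print(mix)  # [1.  0.5 0.  0.5 1. ]
```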
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/740310086232309810/mesaopt.png
AI_WAIFU#2844: When you put those two attributes together, you can get unregularized maximum likelihood and bayesian methods to act in super wacky ways, as demonstrated here with linear regression: https://projecteuclid.org/download/pdfview_1/euclid.ba/1510974325
AI_WAIFU#2844: Note figure 2 |
AI_WAIFU#2844: The brown line looks a lot like double descent.
Daj#7482: I will definitely take a closer look at what you just said in the morning when I can think hah
Daj#7482: Thanks bmk for giving me the kind of high quality content my sleepy brain wanted to see
AI_WAIFU#2844: That sounds entirely reasonable.
zphang#7252: isn't this the gary marcus emote :nooo:
star#5322: I'm going to have to read your thing a couple times AI_WAIFU which I might be to later but that does sound potentially cool
AI_WAIFU#2844: You can also just ask me directly if you have any questions or clarification.
AI_WAIFU#2844: My explanation was kinda word vomit
AI_WAIFU#2844: The paper I linked is better
Louis#0144: https://twitter.com/arankomatsuzaki/status/1290799679723245568?s=21
Louis#0144: Very interesting paper
Louis#0144: Would recommend
Louis#0144: @Deleted User especially you
Louis#0144: Also I like Marcus
Louis#0144: He’s just really awkward
Deleted User#0000: @Louis yup, i think the need for CUDA makes it a no-go for TPUs, but i'll be reading up on grouped linear transforms
Deleted User#0000: https://arxiv.org/pdf/1808.09029.pdf
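(For anyone following along: the grouped linear transform in that paper is roughly "chop the features into groups, give each group its own small weight matrix, then shuffle features across groups so information still mixes". A hedged PyTorch sketch of the mechanic, not the authors' code:)
```python
import torch

def grouped_linear(x, weights, shuffle=True):
    # x: (batch, d_in); weights: (g, d_in // g, d_out // g).
    # Parameter count is g * (d_in/g) * (d_out/g) = d_in * d_out / g,
    # a g-fold saving over a dense d_in x d_out layer.
    g = weights.shape[0]
    b, d = x.shape
    x = x.reshape(b, g, d // g)
    y = torch.einsum('bgi,gio->bgo', x, weights)
    if shuffle:
        y = y.transpose(1, 2)  # interleave outputs across groups
    return y.reshape(b, -1)
```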
Louis#0144: you can implement it on TPUs
Louis#0144: either as an LP or im sure you can find some approximation
Aran Komatsuzaki#5714: @Louis Maybe I should've added some more details. Actually as I discussed about 12 hours ago here, I think it's difficult to make the speed of DeLighT competitive using custom CUDA or whatever, since it's generally hard to do so on deep thin layers in comparison to a shallow wide net with the same number of parameters.
Aran Komatsuzaki#5714: Also, given how DeLighT works across hidden dimension, not length dimension, it's in a competition with MoE. So, there's also an uncertainty about whether it can meaningfully coexist with MoE.
Aran Komatsuzaki#5714: The author said it's possible to make it faster using custom CUDA, but I'm not sure lol
Aran Komatsuzaki#5714: *faster than the baseline Transformer
Louis#0144: Why cant you do it as an LP though
Louis#0144: I did MoE as a linear programming problem recently
Louis#0144: it worked pretty well
Louis#0144: Im still experimenting with it rn
Aran Komatsuzaki#5714: What do you mean by doing it as a linear programming?
Louis#0144: Well, AFAIK TPUs are pretty good at solving LPs
Louis#0144: no one has implemented it yet but they seem like a really good fit
Louis#0144: But I do MoE by having all experts produce a score and then using a smooth top k
Louis#0144: which is just an LP
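(Louis's LP formulation isn't spelled out here, but for contrast, the standard Shazeer-style hard top-k gate it would smooth out is only a few lines — a sketch with made-up shapes:)
```python
import torch
import torch.nn.functional as F

def topk_gate(scores, k=2):
    # Keep the k largest expert scores per token, softmax-normalize
    # them, and zero out the rest; rows sum to 1 with k nonzeros.
    topv, topi = scores.topk(k, dim=-1)
    gates = torch.zeros_like(scores)
    gates.scatter_(-1, topi, F.softmax(topv, dim=-1))
    return gates

scores = torch.randn(4, 8)  # 4 tokens, 8 experts
print(topk_gate(scores, k=2))
```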
Aran Komatsuzaki#5714: How does it make DeLighT faster?
Aran Komatsuzaki#5714: Or more specifically DExTra unit, which is what I'm concerned with.
Louis#0144: I need to read more carefully. I was referring to LPs wrt MoE
Louis#0144: Not with DeLighT
Aran Komatsuzaki#5714: oh ok
Aran Komatsuzaki#5714: Maybe I tried something like smooth topk on MoE iiuc. It didn't work as well as the official implementation, since the official one lets you train with k = 2, and smooth top-2 didn't work when I increased the number of experts.
Louis#0144: Are u at GT
Louis#0144: I am too |
Louis#0144: lmao
Louis#0144: @Aran Komatsuzaki
Louis#0144: Im w/ Riedl
Aran Komatsuzaki#5714: Actually not at GT this semester because of COVID-19. I'm having a leave of absence for this semester to stay in my home country.
Aran Komatsuzaki#5714: Riedl is probably the best prof for this kind of thing. He was the most knowledgeable one during the QA session of my qual exam lol
Aran Komatsuzaki#5714: @Louis
Aran Komatsuzaki#5714: I'm probably coming back in the spring semester depending on the covid situation.
Louis#0144: This kind of thing?
Louis#0144: As in
Louis#0144: ?
Louis#0144: I’m one of the only mathematicians in the group
Louis#0144: There’s one other
Louis#0144: We’re like outcasts LMAOO
Aran Komatsuzaki#5714: I mean Transformer stuffs.
Aran Komatsuzaki#5714: I was in Math PhD (doing topology/geometry) before moving to ML PhD, so kinda similar, except that I use no math at all in my ML research whatsoever lol
Aran Komatsuzaki#5714: But still I'm an outcast of ML PhD program, so we're the same 🙂
Deleted User#0000: This popped up in my twitter feed https://arogozhnikov.github.io/einops/pytorch-examples.html
Deleted User#0000: Doing the feature shuffling from the dextra paper may be really easy in Pytorch
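(Indeed — with einops the feature shuffle is a one-liner. A quick sketch:)
```python
import torch
from einops import rearrange

x = torch.arange(12.).reshape(1, 12)          # (batch, g * d) features
y = rearrange(x, 'b (g d) -> b (d g)', g=3)   # interleave across groups
print(y)  # 0, 4, 8, 1, 5, 9, ... instead of 0, 1, 2, ...
```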
Deleted User#0000: Aw, math outcast bonding moment
Aran Komatsuzaki#5714: yeah einops looks useful |
Aran Komatsuzaki#5714: I like that
Aran Komatsuzaki#5714: I mean it makes things a bit more elegant
Louis#0144: I did topology too in my undergrad
Louis#0144: Pure math with a specialization in alg top
Louis#0144: Now i might be doing a fellowship at ETH with a few TDA people
Louis#0144: Rly excited
Aran Komatsuzaki#5714: Awesome! Sounds exciting!
I know another student who was doing alg top and transferred to ML PhD to do Transformer LM.
Aran Komatsuzaki#5714: I was doing quantum topology, string theory and some algebraic geometry, but I directly moved into Transformer LM about 2.5 years ago lol I thought it was interesting, and I was kinda right.
Louis#0144: Oh cool
Louis#0144: I took a quantum topology course
Louis#0144: It had tons of knot theory and braid groups
Louis#0144: Loved it
Louis#0144: Waterloo had tons of quantum computing offerings
Aran Komatsuzaki#5714: Nice. I don't remember any of it by now, but it was fun nonetheless.
Louis#0144: You should check out Bastians work
Louis#0144: He and I are working together on topologically sound metrics akin to perplexity
Louis#0144: (Perplexity might be trying to measure topological stuff about the decision boundary but no one knows yet)
Aran Komatsuzaki#5714: cool
thenightocean#6100: This paper looks interesting: https://arxiv.org/abs/2008.01540v1 |
thenightocean#6100: quite a bold assumption
Louis#0144: Oh my god
Louis#0144: can we get a tinfoil hat emote?
Louis#0144: Please
Aran Komatsuzaki#5714: neural net, being a universal approximator, can imitate any continuous function, including the ones that appear in physics. so, you can find an analogy to whatever concept expressible with a continuous function.
Aran Komatsuzaki#5714: * analogy of neural net to whatever concept ...
Louis#0144: There’s so many universal approximators though
Louis#0144: Any radial basis network for instance
Louis#0144: And there’s sooooo many radial functions
Louis#0144: Gaussian mixtures are also universal approximators for instance
Aran Komatsuzaki#5714: i understand that. i meant that the author of the paper could've used any other univ approximator.
Louis#0144: Yeah
Aran Komatsuzaki#5714: i'm not saying that neural net is special as in what author was trying to say. i'm saying the opposite of it.
Louis#0144: I would wager that our current understanding of the universe depends so heavily on Gaussian distributions that the argument is actually stronger for Gaussian mixture models
Aran Komatsuzaki#5714: haha
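(For instance, a Gaussian RBF expansion — one of those many universal approximator families — recovers sin(x) in a few lines; a sketch with made-up widths and grid sizes:)
```python
import numpy as np

x = np.linspace(-np.pi, np.pi, 200)
centers = np.linspace(-np.pi, np.pi, 20)
Phi = np.exp(-(x[:, None] - centers[None, :]) ** 2 / 0.5)  # Gaussian bumps
w, *_ = np.linalg.lstsq(Phi, np.sin(x), rcond=None)        # least squares
print(np.abs(Phi @ w - np.sin(x)).max())  # tiny: sin(x) is recovered
```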
Deleted User#0000: https://cdn.discordapp.com/attachments/729741769738158194/740616525844709386/dog-carrot.mp4
Daj#7482: > neural net, being an universal approximator, can imitate any continuous function, including the ones that appear in physics. so, you can find an analogy to whatever concept expressable with a continuous function.
@Aran Komatsuzaki Beat me to it, this was what I was going to say. This is kind of why I like the idea from univalence to define equality as isomorphism, makes these kinds of thoughts immediately obvious
star#5322: I should really get around to reading the HoTT book sometime I think . . .
Daj#7482: I almost managed to read the first 10 pages once |
star#5322: lol
star#5322: Nate says that after the intro, there's two main sections: explaining HoTT like it's a type theory, and then explaining HoTT like it's a homotopy theory. So the difficulty curve drastically increases if you're a type theorist that doesn't know much homotopy, but starts out really hard and then decreases a lot if you're a homotopy theorist who doesn't know much type theory
star#5322: You prompted me to put it on my kindle, so maybe I'll take a look
star#5322: paper is better but waiting for shipping is bad
Daj#7482: Well I'm a glorified computer alchemist with no formal training in type theory OR homotopy, so the fact it's been unread on my kindle for around a year now is probably unsurprising hah
Daj#7482: Tell me if you can divine anything from it
bmk#1476: https://en.wikipedia.org/wiki/Homotopy#/media/File:HomotopySmall.gif all i know about homotopy is this extraordinatily satisfying gif
Daj#7482: I have like a whole playlist of HoTT videos I've watched enough times to feel the dopamine hit of knowing something interesting being there but not actually understanding it
Daj#7482: Kind of like Friston's work lol
star#5322: ~~except not a complete pile of bullshit~~
star#5322: wouldn't mind some links to HoTT videos or a playlist, though the book seems like an obvious place to start
Daj#7482: Ohhh I want your spicy takes on Friston
Daj#7482: I can try to find where in my "Academic Stuff" playlist the HoTT is but it's probably super elementary
Louis#0144: oh hi
Louis#0144: I took two courses on homotopy
Louis#0144: homotopy isnt tractible
Louis#0144: like at all
Louis#0144: keep it super far away from ML
Louis#0144: I shouldn't say that but Vietoris–Rips filtration doesn't work with homotopy at all - you don't get a stable answer that you can work with
Louis#0144: homology tho |
Louis#0144: ❤️
star#5322: I'm not learning it for any particular relation to ML. Just for interest as an FP & math nerd.
Louis#0144: FP?
Louis#0144: oh functional programming
Louis#0144: meh
Louis#0144: idk man, homotopy type theory just doesnt seem all too useful ngl
Louis#0144: I know saying that makes a bunch of algebraic geometers really mad at me
Louis#0144: but like even my friends who do FP research dont touch HoTT
Louis#0144: Now I have heard of people discussing neural networks as algebraic varieties which would allow for a HoTT equivalent for DL
Louis#0144: but idk the state of that
Daj#7482: I'm interested in HoTT and univalence because of really hard to verbalize hunches. You know when you sometimes just get a feeling like "there's something down here"? Kinda how I feel about HoTT
Daj#7482: I might be dead wrong and I can really not verbalize why it "feels" important. Something about how neatly things in math map to each other (set theory vs type theory vs other basis, Langlands program, infinite other examples) strikes me as somehow _important_
Daj#7482: Like maybe there is "one true math" that is independent of axioms or something
Daj#7482: Probably not though
star#5322: well there is in the sense that "most sufficiently powerful systems" seem to prove "most of the same theorems"
star#5322: good luck making that formal, but there's a clear sense in which it doesn't end up mattering how you axiomatize things, if something is sufficiently powerful, it'll just be able to define most to all of analysis and algebra and topology and so on, and the results don't actually end up depending on whether there are types or sets at the bottom
Daj#7482: Exactly! That hard to formalize notion is what I mean!
Daj#7482: There's..._something_ going on
star#5322: for a formal example of this happening, consider the three definitions of computability in the early 20th century that are on the face of it deeply different but in fact are provably equivalent
Daj#7482: Or maybe I'm just a non mathematician misunderstanding something totally trivial lol |