Hi everyone. Today we are continuing our implementation of makemore. In the last lecture, we implemented the bigram language model, both using counts and using a super simple neural network with a single linear layer. This is the Jupyter notebook that we built out last lecture. We saw that the way we approached this is that we looked at only the single previous character, and we predicted the distribution of the character that would go next in the sequence. We did that by taking counts and normalizing them into probabilities, so that each row here sums to 1. Now, this is all well and good if you only have one character of previous context, and this works and it's approachable. The problem with this model, of course, is that its predictions are not very good, because you only take one character of context. So the model didn't produce very name-like sounding things. The deeper problem with this approach is that if we take more context into account when predicting the next character in a sequence, things quickly blow up: the size of this table grows, and in fact it grows exponentially with the length of the context. If we only take a single character at a time, there are 27 possibilities of context. But if we take 2 characters in the past and try to predict the third one, suddenly the number of rows in this matrix, if you look at it that way, is 27 times 27, so there are 729 possibilities for what could have come in the context. If we take 3 characters as the context, suddenly we have about 20,000 possibilities of context. So there are just way too many rows in this matrix, way too few counts for each possibility, and the whole thing just explodes and doesn't work very well. So that's why today we're going to move on to this bullet point here, and we're going to implement a multilayer perceptron model to predict the next character in a sequence. The modeling approach we're going to adopt follows this paper, Bengio et al., 2003. So I have the paper pulled up here. Now this isn't the very first paper that proposed the use of multilayer perceptrons or neural networks to predict the next character or token in a sequence, but it's definitely one that was very influential around that time. It is very often cited to stand in for this idea, and I think it's a very nice write-up. So this is the paper that we're going to first look at and then implement. Now this paper has 19 pages, so we don't have time to go into full detail, but I invite you to read it. It's very readable and interesting, and it has a lot of interesting ideas in it. In the introduction, they describe the exact same problem I just described. And then to address it, they propose the following model. Keep in mind that we are building a character-level language model, so we're working on the level of characters. In this paper, they have a vocabulary of 17,000 possible words, and they instead build a word-level language model. We're going to stick with characters, but we'll take the same modeling approach. What they do is basically propose to take every one of these 17,000 words and associate to each word a, say, 30-dimensional feature vector. So every word is now embedded into a 30-dimensional space; you can think of it that way. We have 17,000 points or vectors in a 30-dimensional space, and you might imagine that's very crowded.
That's a lot of points for a very small space. Now, in the beginning these words are initialized completely randomly, so they're spread out at random. But then we're going to tune the embeddings of these words using backpropagation. So during the course of training of this neural network, these points or vectors are going to basically move around in this space. And you might imagine that, for example, words that have very similar meanings, or that are indeed synonyms of each other, might end up in a very similar part of the space, and conversely, words that mean very different things would go somewhere else in the space. Their modeling approach otherwise is identical to ours. They are using a multilayer neural network to predict the next word given the previous words, and to train the neural network they are maximizing the log likelihood of the training data, just like we did. So the modeling approach itself is identical. Now here they have a concrete example of the intuition for why this works. Suppose that, for example, you are trying to predict "a dog was running in a" blank. Suppose the exact phrase "a dog was running in a" has never occurred in the training data, and here you are at test time, later, when the model is deployed somewhere: it's trying to make a sentence, it's saying "a dog was running in a" blank, and it has never encountered this, so you might think it's stuck. But this approach actually allows you to get around that. Maybe you didn't see the exact phrase "a dog was running in a" something, but maybe you've seen similar phrases; maybe you've seen the phrase "the dog was running in a" blank. And maybe your network has learned that "a" and "the" are frequently interchangeable with each other, and so maybe it took the embedding for "a" and the embedding for "the" and actually put them nearby each other in the space. So you can transfer knowledge through that embedding, and you can generalize in that way. Similarly, the network could know that cats and dogs are animals and they co-occur in lots of very similar contexts. And so even though you haven't seen this exact phrase, or if you haven't seen exactly "walking" or "running", you can, through the embedding space, transfer knowledge and generalize to novel scenarios. So let's now scroll down to the diagram of the neural network; they have a nice diagram here. We take in 3 previous words, and as I mentioned, we have a vocabulary of 17,000 possible words. So every one of these is basically the index of the incoming word, and because there are 17,000 words, this is an integer between 0 and 16,999. Now there's also a lookup table that they call C. This lookup table is a matrix that is 17,000 by, say, 30. Basically what we're doing here is treating this as a lookup table: every index plucks out a row of this embedding matrix, so that each index is converted to the 30-dimensional vector that corresponds to the embedding vector for that word. So here we have the input layer of 30 neurons for each of 3 words, making up 90 neurons in total. And here they're saying that this matrix C is shared across all the words, so we're always indexing into the same matrix C, over and over, for each one of these words. Next up is the hidden layer of this neural network. The size of this hidden layer is a hyperparameter. We use the word hyperparameter when it's a design choice up to the designer of the neural net.
It can be as large or as small as you'd like; for example, the size could be 100. We are going to go over multiple choices of the size of this hidden layer and evaluate how well they work. Say there were 100 neurons here: all of them would be fully connected to the 90 numbers that make up these three words. So this is a fully connected layer, then there's a tanh nonlinearity, and then there's the output layer. Because there are 17,000 possible words that could come next, this layer has 17,000 neurons, all fully connected to all of the neurons in the hidden layer. So there are a lot of parameters here because there are a lot of words; most of the computation is in this layer, it's the expensive one. So we have 17,000 logits here, and on top of that we have the softmax layer, which we've seen in our previous video as well. Every one of these logits is exponentiated, and then everything is normalized to sum to 1, so that we have a nice probability distribution for the next word in the sequence. Now of course, during training we actually have the label: we have the identity of the next word in the sequence. That word, or rather its index, is used to pluck out the probability of that word, and then we are maximizing the probability of that word with respect to the parameters of this neural net. So the parameters are the weights and biases of the output layer, the weights and biases of the hidden layer, and the embedding lookup table C, and all of that is optimized using backpropagation. These dashed arrows, ignore those: they represent a variation of the neural net that we are not going to explore in this video. So that's the setup, and now let's implement it. Okay, so I started a brand new notebook for this lecture. We are importing PyTorch, and we are importing matplotlib so we can create figures. Then I am reading all the names into a list of words, like I did before, and I'm showing the first 8 right here. Keep in mind that we have 32,000 in total; these are just the first 8. And then here I'm building out the vocabulary of characters and all the mappings from the characters as strings to integers, and vice versa. Now the first thing we want to do is compile the dataset for the neural network. I had to rewrite this code; I'll show you in a second what it looks like. So this is the code that I created for the dataset creation. Let me first run it, and then I'll briefly explain how it works. First, we're going to define something called block size, and this is basically the context length: how many characters we take to predict the next one. Here in this example we're taking 3 characters to predict the 4th one, so we have a block size of 3. That's the size of the block that supports the prediction. Then here I'm building out the X and Y: the X are the inputs to the neural net, and the Y are the labels for each example inside X. Then I'm iterating over the first five words. I'm doing the first five just for efficiency while we are developing the code, but later we are going to come here and erase this so that we use the entire training set. So here I'm printing the word "emma", and here I'm showing the 5 examples that we can generate out of this single word. When we are given a context of just dots, the label is e, the first character in the sequence. When the context is "..e", the label is m, and so forth.
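To make this concrete, here is a minimal sketch of that dataset construction. The filename 'names.txt' is an assumption; substitute whatever your copy of the names dataset is called.

```python
import torch

# assumed filename; read the names dataset into a list of words
words = open('names.txt', 'r').read().splitlines()

# vocabulary of characters, with '.' as the padding/end token at index 0
chars = sorted(list(set(''.join(words))))
stoi = {s: i + 1 for i, s in enumerate(chars)}
stoi['.'] = 0
itos = {i: s for s, i in stoi.items()}

block_size = 3  # context length: how many characters we take to predict the next one
X, Y = [], []
for w in words[:5]:                     # first five words only, while developing
    context = [0] * block_size          # padded context of '.' tokens
    for ch in w + '.':
        ix = stoi[ch]
        X.append(context)               # the running context is the input...
        Y.append(ix)                    # ...and the current character is the label
        context = context[1:] + [ix]    # rolling window: crop and append

X = torch.tensor(X)                     # shape (32, 3) for the first five words
Y = torch.tensor(Y)                     # shape (32,)
```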
And so the way I build this out is: first, I start with a padded context of just 0 tokens. Then I iterate over all the characters; I get the character in the sequence, and I build out the array Y of the current character and the array X, which stores the current running context. Then I print everything, and here I crop the context and enter the new character into the sequence. So this is kind of like a rolling window of context. Now we can change the block size here to, for example, 4, and in that case we would be predicting the 5th character given the previous 4. Or it can be 5, and then it would look like this. Or it can be, say, 10, and then it would look something like this: we're taking 10 characters to predict the 11th one, and we're always padding with dots. So let me bring this back to 3, just so that we have what we have here in the paper. And finally, the dataset right now looks as follows: from these five words, we have created a dataset of 32 examples, and each input to the neural net is 3 integers, and we have a label that is also an integer, Y. So X looks like this; these are the individual examples. And then Y are the labels. Given this, let's now write a neural network that takes these X's and predicts the Y's. First, let's build the embedding lookup table C. We have 27 possible characters, and we're going to embed them in a lower-dimensional space. In the paper, they have 17,000 words and they embed them in a space as small as 30 dimensions. So they cram 17,000 words into a 30-dimensional space. In our case, we have only 27 possible characters, so let's cram them into something even smaller to start with: for example, a 2-dimensional space. So this lookup table will be random numbers, with 27 rows and 2 columns. Each one of the 27 characters will have a 2-dimensional embedding. That's our matrix C of embeddings, initialized randomly in the beginning. Now, before we embed all of the integers inside the input X using this lookup table C, let me actually just try to embed a single individual integer, say 5, so we get a sense of how this works. One way to do this, of course, is to take C and index into row 5. That gives us a vector: the 5th row of C. This is one way to do it. The other way, which I presented in the previous lecture, is seemingly different but actually identical. In the previous lecture, what we did is we took these integers and used one-hot encoding to first encode them. So F.one_hot: we want to encode the integer 5, and we want to tell it that the number of classes is 27. That gives us a 27-dimensional vector of all zeros, except the bit at index 5 is turned on. Now, this actually doesn't work at first. The reason is that the input must be a torch.Tensor. I'm making some of these errors intentionally, just so you get to see some errors and how to fix them. So this must be a tensor, not an int; fairly straightforward to fix. We get a one-hot vector where the element at index 5 is 1, and the shape of this is 27. Now notice that, just as I briefly alluded to in a previous video, if we take this one-hot vector and multiply it by C, what would you expect? Well, first you'd expect an error: "expected scalar type Long but found Float". A little bit confusing, but the problem here is that one_hot's data type is long; it's a 64-bit integer.
But C is a float tensor, and PyTorch doesn't know how to multiply an int with a float, and that's why we have to explicitly cast the one-hot vector to a float so that we can multiply. Now the output here is identical, and it's identical because of the way the matrix multiplication works. We have the one-hot vector multiplying columns of C, and because of all the zeros, they end up masking out everything in C except for the 5th row, which is plucked out. So we arrive at the same result. And that tells you that we can interpret this first piece here, the embedding of the integer, in two ways. We can think of it as the integer indexing into a lookup table C. But equivalently, we can also think of this little piece as the first layer of a bigger neural net: a layer of neurons that have no nonlinearity (there's no tanh; they're just linear neurons) whose weight matrix is C. We then encode integers into one-hot vectors, feed those into the neural net, and this first layer basically embeds them. So those are two equivalent ways of doing the same thing. We're just going to index, because it's much, much faster, and we'll discard this interpretation of one-hot inputs into neural nets: we're just going to index integers and use embedding tables. Now, embedding a single integer like 5 is easy enough; we can simply ask PyTorch to retrieve the row at index 5 of C. But how do we simultaneously embed all of these 32 by 3 integers stored in the array X? Luckily, PyTorch indexing is fairly flexible and quite powerful. It doesn't just work to ask for a single element like 5: you can actually index using lists. So for example, we can get the rows 5, 6, and 7, and this will just work; we can index with a list. It doesn't have to be a list, either: it can also be a tensor of integers, and we can index with that. So this is an integer tensor [5, 6, 7], and it works as well. In fact we can also, for example, repeat row 7 and retrieve it multiple times, and that same index will just get embedded multiple times. So here we are indexing with a 1-dimensional tensor of integers, but it turns out that you can also index with multi-dimensional tensors of integers. Here we have a 2-dimensional tensor of integers, so we can simply do C[X], and this just works. The shape of the result starts with 32 by 3, the original shape, and for every one of those 32 by 3 integers we've retrieved the embedding vector. For example, take example index 13, dimension 2: the integer stored there is, say, 1. If we take C[X], which gives us that array, and index into position [13, 2] of it, we get the embedding, and you can verify that C[1], the embedding of the integer at that location, is indeed equal to it. You see, they're equal. So, long story short, PyTorch indexing is awesome, and to embed all of the integers in X simultaneously we can simply do C[X], and that is our embedding, and it just works. Now let's construct this layer here, the hidden layer. We have W1, as I'll call it: these weights, which we will initialize randomly. Now, the number of inputs to this layer is going to be 3 times 2, or 6, because we have 2-dimensional embeddings and we have 3 of them. The number of neurons in this layer is up to us; let's use 100 neurons as an example. And then the biases will also be initialized randomly, and we just need 100 of them.
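To recap the embedding discussion in code, here is a small sketch showing that direct indexing and the one-hot route agree, that indexing generalizes to tensors, and how W1 and b1 get initialized. The generator seed is an assumption, just for reproducibility; X is the dataset tensor built above.

```python
import torch
import torch.nn.functional as F

g = torch.Generator().manual_seed(2147483647)  # assumed seed
C = torch.randn((27, 2), generator=g)          # 27 characters, 2-dimensional embeddings

# two equivalent ways to embed the integer 5:
e1 = C[5]                                      # direct indexing plucks out row 5
e2 = F.one_hot(torch.tensor(5), num_classes=27).float() @ C  # one-hot masks out all rows but the 5th
assert torch.allclose(e1, e2)

# PyTorch indexing is flexible: lists, tensors, and multi-dimensional tensors all work
print(C[[5, 6, 7]].shape)                   # (3, 2)
print(C[torch.tensor([5, 6, 7, 7])].shape)  # (4, 2): row 7 retrieved twice
emb = C[X]                                  # X is the (32, 3) dataset from above
print(emb.shape)                            # (32, 3, 2): an embedding for every integer in X

# hidden layer parameters: 3 embeddings of 2 dimensions each = 6 inputs, 100 neurons
W1 = torch.randn((6, 100), generator=g)
b1 = torch.randn(100, generator=g)
```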
Now here's the problem: normally we would take the input, in this case the embedding, and we'd like to multiply it with these weights and then add the bias. That's roughly what we want to do. But the problem is that the embeddings are stacked up in the dimensions of this input tensor, so this matrix multiplication will not work, because emb has shape 32 by 3 by 2 and I can't multiply that by 6 by 100. Somehow we need to concatenate these inputs together so that we can do something along these lines, which currently does not work. So how do we transform this 32 by 3 by 2 into a 32 by 6 so that we can actually perform this multiplication? I'd like to show you that there are usually many ways of implementing what you'd like to do in torch, and some of them will be faster, better, shorter, etcetera. That's because torch is a very large library, and it's got lots and lots of functions. If we just go to the documentation and click on torch, you'll see that my scroll bar here is very tiny, and that's because there are so many functions that you can call on these tensors to transform them, create them, multiply them, add them, and perform all kinds of different operations on them. So this is kind of like the space of possibilities, if you will. Now, one of the things we can do is Ctrl-F for "concatenate", and we see that there's a function torch.cat, which concatenates a given sequence of tensors in a given dimension, where the tensors must have the same shape, etcetera. So we can use the concatenate operation to, in a naive way, concatenate the 3 embeddings for each input. In this case we have emb, of shape 32 by 3 by 2, and really what we want to do is retrieve its 3 parts and concatenate them. We want to grab all the examples, and then index 0 of the second dimension: this plucks out the 32 by 2 embeddings of just the first word. So we want this piece, then the one at index 1, then the one at index 2. These are the 3 pieces individually, and then we want to treat them as a sequence and call torch.cat on that sequence. So this is the list; torch.cat takes a sequence of tensors, and then we have to tell it along which dimension to concatenate. In this case, all of these are 32 by 2, and we want to concatenate not across dimension 0 but across dimension 1. So passing in 1 gives us a result whose shape is 32 by 6, exactly as we'd like. That took the 32 by 3 by 2 and squashed the embeddings together into 32 by 6. Now, this is kind of ugly, because this code would not generalize if we later want to change the block size. Right now we have 3 inputs, 3 words; but what if we had 5? We would have to change the code, because I'm indexing directly. Well, torch comes to the rescue again, because there turns out to be a function called unbind, which removes a tensor dimension: it returns a tuple of all slices along a given dimension, without it. This is exactly what we need. Basically, when we call torch.unbind of emb and pass in dimension 1, this gives us a list of tensors exactly equivalent to the one before. Running this gives us a tuple of length 3, and it's exactly this list. We can call torch.cat on it along dimension 1, and this works and the shape is the same. But now it doesn't matter if we have block size 3 or 5 or 10: this will just work.
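Here is a short sketch of the two concatenation approaches just described, using the emb, W1, and b1 from above:

```python
# emb is (32, 3, 2) but W1 is (6, 100), so emb @ W1 fails with a shape error.
# Naive fix: concatenate the three embeddings along dimension 1, hardcoding block_size=3.
cat1 = torch.cat([emb[:, 0, :], emb[:, 1, :], emb[:, 2, :]], dim=1)  # (32, 6)

# torch.unbind removes dimension 1 and returns the slices, so this version
# generalizes to any block size.
cat2 = torch.cat(torch.unbind(emb, 1), dim=1)                        # (32, 6)

assert torch.equal(cat1, cat2)
```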
So this is one way to do it. But it turns out that in this case there's actually a significantly better and more efficient way, and this gives me an opportunity to hint at some of the internals of torch.Tensor. Let's create a tensor here with elements from 0 to 17; its shape is just 18, a single vector of 18 numbers. It turns out that we can very quickly re-represent this as tensors of different sizes and dimensions. We do this by calling .view, and we can say that actually this is not a single vector of 18: it's a 2 by 9 tensor, or alternatively a 9 by 2 tensor, or actually a 3 by 3 by 2 tensor. As long as the total number of elements multiplies out to be the same, this will just work. And in PyTorch, calling .view is extremely efficient. The reason is that in each tensor there's something called the underlying storage, and the storage is just the numbers, always as a one-dimensional vector. This is how the tensor is represented in computer memory: it's always a one-dimensional vector. But when we call .view, we are manipulating some of the attributes of the tensor that dictate how this one-dimensional sequence is interpreted as an n-dimensional tensor. So what's happening here is that no memory is being changed, copied, moved, or created when we call .view; the storage is identical, but some of the internal attributes of the view of this tensor are being manipulated and changed. In particular, there's something called the storage offset, strides, and shapes, and those are manipulated so that this one-dimensional sequence of bytes is seen as different n-dimensional arrays. There's a blog post called PyTorch internals, from Edward Z. Yang, where he goes into some of this: how the tensor and the view of a tensor are represented, and how the view is really just a logical construct over the physical memory. It's a pretty good blog post to go into. I might also create an entire video on the internals of torch.Tensor and how this works. For here, we just note that this is an extremely efficient operation. And if I delete this and come back to our emb, we see that its shape is 32 by 3 by 2, but we can simply ask PyTorch to view it as a 32 by 6 instead. The way this gets flattened into a 32 by 6 array is that the embeddings just get stacked up in a single row, and so that's basically the concatenation operation that we're after. You can verify that this actually gives the exact same result as what we had before: comparing the two tensors element-wise with ==, you can see that all the elements are the same, so we get the exact same result. Long story short, we can just come here, and if we view this as a 32 by 6 instead, then the multiplication will work and give us the hidden states that we're after. So if this is h, then h.shape is now 32 by 100: the 100-dimensional activations for every one of our 32 examples. This gives the desired result. Let me do two things here. Number 1, let's not hardcode 32: we can, for example, do emb.shape[0], so that we don't hardcode these numbers and this works for any size of emb. Or alternatively, we can also pass in negative 1. When we do negative 1, PyTorch will infer what this should be.
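Here is a tiny demo of .view along the lines of what's described, including the negative-1 inference:

```python
a = torch.arange(18)        # underlying storage: 18 numbers in a one-dimensional vector
print(a.view(2, 9))         # reinterpreted as 2 by 9; no memory is copied or moved
print(a.view(9, 2))         # ...or as 9 by 2
print(a.view(3, 3, 2))      # ...or 3 by 3 by 2: any shape whose sizes multiply to 18
print(a.view(-1, 9).shape)  # torch.Size([2, 9]): -1 asks PyTorch to infer that dimension
```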
That works because the number of elements must be the same: since we're saying the second dimension is 6, PyTorch will derive that the first must be 32, or whatever else it is if emb is of a different size. One more thing I'd like to point out: when we do the concatenation instead, it is much less efficient, because the concatenation creates a whole new tensor with a whole new storage. New memory has to be created, because there's no way to concatenate tensors just by manipulating the view attributes. So that approach is inefficient and creates all kinds of new memory. Let me delete this now; we don't need it. And here, to calculate h, we also want to take torch.tanh of this, to get our h. So these are now numbers between negative 1 and 1 because of the tanh, and the shape is 32 by 100. That is basically the hidden layer of activations for every one of our 32 examples. Now there's one more thing I glossed over that we have to be very careful with, and that's this plus here. In particular, we want to make sure that the broadcasting will do what we want. The shape of the first term is 32 by 100, and b1's shape is 100, so the addition will broadcast these two: we have 32 by 100 broadcasting with 100. Broadcasting will align on the right and create a fake dimension, so b1 becomes a 1 by 100 row vector, which then gets copied vertically for every one of the 32 rows, followed by an element-wise addition. So in this case, the correct thing happens: the same bias vector is added to all the rows of this matrix. That is correct, and that's what we'd like, but it's always good practice to just make sure, so that you don't shoot yourself in the foot. And finally, let's create the final layer here: W2 and b2. The input now is 100, and the output number of neurons will be 27 for us, because we have 27 possible characters that come next, so the biases will be 27 as well. Therefore the logits, which are the outputs of this neural net, are going to be h @ W2 + b2. logits.shape is 32 by 27, and the logits look good. Now, exactly as we saw in the previous video, we want to take these logits, first exponentiate them to get our fake counts, and then normalize them into a probability. So prob is counts divided by counts.sum along dimension 1 with keepdim=True, exactly as in the previous video. And so prob.shape is now 32 by 27, and you'll see that every row of prob sums to 1, so it's normalized. That gives us the probabilities. Now of course we have the actual letter that comes next: that comes from the array Y, which we created during dataset creation. So Y is this last piece here, the identity of the next character in the sequence that we'd like to predict. What we'd like to do now, just as in the previous video, is index into the rows of prob, and in each row pluck out the probability assigned to the correct character. So first we have torch.arange(32), which is kind of like an iterator over the numbers from 0 to 31. Then we can index into prob as prob[torch.arange(32), Y]: the arange iterates over the rows, and in each row we grab the column given by Y. This gives the current probabilities, as assigned by this neural network with this setting of its weights, to the correct characters in the sequence.
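Putting the pieces together, here is a sketch of the forward pass so far, using the g, C, W1, b1, X, and Y set up above:

```python
emb = C[X]                                   # (32, 3, 2): embed all the inputs at once
h = torch.tanh(emb.view(-1, 6) @ W1 + b1)    # (32, 100) hidden activations, in (-1, 1)

W2 = torch.randn((100, 27), generator=g)     # output layer: 27 possible next characters
b2 = torch.randn(27, generator=g)
logits = h @ W2 + b2                         # (32, 27)

counts = logits.exp()                        # fake counts, as in the bigram model
prob = counts / counts.sum(1, keepdim=True)  # each row normalized to sum to 1

# for each example, pluck out the probability assigned to the correct next character
p_correct = prob[torch.arange(32), Y]
loss = -p_correct.log().mean()               # negative log likelihood
```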
And you can see here that this looks okay for some of these characters, like this one, where it's basically 0.2. But it doesn't look very good at all for many other characters: this one has a probability with seven zeros after the decimal point, so the network thinks some of these are extremely unlikely. But of course, we haven't trained the neural network yet. This will improve, and ideally all of these numbers would be 1, because then we would be correctly predicting the next character. Now, just as in the previous video, we take these probabilities, look at the log probability, take the average log probability, and then the negative of it, to create the negative log likelihood loss. The loss here is 17, and this is the loss that we'd like to minimize to get the network to predict the correct character in the sequence. Okay, so I rewrote everything here and made it a bit more respectable. Here's our dataset; here are all the parameters that we defined. I'm now using a generator to make it reproducible. I clustered all the parameters into a single list of parameters so that, for example, it's easy to count them and see that in total we currently have about 3,400 parameters. And this is the forward pass as we developed it, and we arrive at a single number here, the loss, which currently expresses how well this neural network works with the current setting of parameters. Now I'd like to make it even more respectable. In particular, see these lines here where we take the logits and calculate the loss: we're not actually reinventing the wheel here. This is just classification, and many people use classification, which is why there is a functional.cross_entropy function in PyTorch to calculate this much more efficiently. So we can simply call F.cross_entropy, pass in the logits and the array of targets Y, and this calculates the exact same loss. In fact, we can simply put this here, erase these three lines, and we get the exact same result. Now, there are actually many good reasons to prefer F.cross_entropy over rolling your own implementation like this. I did this for educational reasons, but you'd never use it in practice. Why is that? Number 1: when you use F.cross_entropy, PyTorch will not create all these intermediate tensors, which are all new tensors in memory, and running it all like this is fairly inefficient. Instead, PyTorch will cluster up these operations and very often use fused kernels that efficiently evaluate these clustered mathematical expressions. Number 2: the backward pass can be made much more efficient, not just because it's a fused kernel, but also because analytically and mathematically it's often a much simpler backward pass to implement. We actually saw this with micrograd. You see, when we implemented tanh, the forward pass of the operation was a fairly complicated mathematical expression. But because it's a clustered mathematical expression, in the backward pass we didn't individually backward through the exp and the two times and the minus one and the division, etcetera; we just said it's 1 minus t squared, a much simpler mathematical expression. We were able to do that because we can reuse calculations, and because we can mathematically and analytically derive the derivative, and often that expression simplifies, so there's much less to implement.
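To make the swap concrete, here is a minimal check, using the logits and Y from above, that the hand-rolled chain and F.cross_entropy agree:

```python
import torch.nn.functional as F

# the manual version, as rolled by hand above
counts = logits.exp()
prob = counts / counts.sum(1, keepdim=True)
loss_manual = -prob[torch.arange(32), Y].log().mean()

# the built-in: the same value, but fused, more efficient, and numerically safer
loss = F.cross_entropy(logits, Y)
assert torch.allclose(loss_manual, loss)
```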
So not only can it be made more efficient because it runs in a fused kernel, but also the expressions can take a much simpler form mathematically. That's number 1. Number 2: under the hood, F.cross_entropy can also be significantly better behaved numerically. Let me show you an example of how this works. Suppose we have logits of, say, negative 3, 0, and 5, and then we take the exponent and normalize to sum to 1. When the logits take on these values, everything is well and good, and we get a nice probability distribution. Now consider what happens when some of these logits take on more extreme values, which can happen during the optimization of a neural network. Suppose some of these numbers grow very negative, say negative 100. Then everything actually comes out fine: we still get probabilities that are well behaved, sum to 1, and everything is great. But because of the way the exp works, if you have very positive logits, say positive 100 in here, you actually start to run into trouble, and we get not-a-number here. The reason is that the counts have an inf in them. If you pass in a very negative number to exp, you just get a very small number, very near 0, and that's fine. But if you pass in a very positive number, we suddenly run out of range in the floating point number that represents these counts. Basically, we're taking e and raising it to the power of 100, and that gives us inf, because we've run out of dynamic range on this floating point number. So we cannot pass very large logits through this expression. Now let me reset these numbers to something reasonable. The way PyTorch solves this is as follows: you see how we have a well-behaved result here? It turns out that because of the normalization, you can offset the logits by any arbitrary constant value you want. If I add 1 here, you get the exact same result; or if I add 2, or subtract 3, any offset produces the exact same probabilities. So, because very negative numbers are okay but very positive numbers can overflow the exp, what PyTorch does is internally calculate the maximum value that occurs in the logits and subtract it. In this case it would subtract 5, so the greatest number in the logits becomes 0 and all the other numbers become negative, and then the result is always well behaved. So even if we have a 100 here, which previously was not good, this works, because PyTorch subtracts the 100. So there are many good reasons to call cross_entropy: the forward pass can be much more efficient, the backward pass can be much more efficient, and things can be much better behaved numerically. So let's now set up the training of this neural net. We have the forward pass; we don't need these lines anymore, we just have loss = F.cross_entropy(logits, Y). That's the forward pass. Then we need the backward pass. First we want to set the gradients to zero: for p in parameters, we set p.grad to None, which is the same as setting it to zero in PyTorch. Then loss.backward() populates the gradients. Once we have the gradients, we can do the parameter update: for p in parameters, we take the data and nudge it by minus the learning rate times p.grad. And then we want to repeat this a few times. And let's print the loss here as well.
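Here is a sketch of the whole training setup as just described; the seed is an assumption, and the 0.1 learning rate is a guess that we'll justify shortly. Note that requires_grad must be set, which is exactly the error discussed next.

```python
g = torch.Generator().manual_seed(2147483647)  # assumed seed, for reproducibility
C  = torch.randn((27, 2),   generator=g)
W1 = torch.randn((6, 100),  generator=g)
b1 = torch.randn(100,       generator=g)
W2 = torch.randn((100, 27), generator=g)
b2 = torch.randn(27,        generator=g)
parameters = [C, W1, b1, W2, b2]               # about 3,400 parameters in total
for p in parameters:
    p.requires_grad = True                     # without this, loss.backward() errors out

for _ in range(100):
    # forward pass
    emb = C[X]
    h = torch.tanh(emb.view(-1, 6) @ W1 + b1)
    logits = h @ W2 + b2
    loss = F.cross_entropy(logits, Y)
    # backward pass
    for p in parameters:
        p.grad = None                          # same as zeroing the gradients
    loss.backward()
    # update: nudge the data against the gradient
    for p in parameters:
        p.data += -0.1 * p.grad                # 0.1 is a guessed learning rate

print(loss.item())
```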
Now, as written this won't suffice, and it will create an error, because we also have to go for p in parameters and make sure that p.requires_grad is set to True in PyTorch. And then this should just work. Okay, so we started off with a loss of 17, and we're decreasing it. Let's run longer. You see how the loss decreases a lot here: if we just run for 1,000 iterations, we get a very, very low loss, and that means we're making very good predictions. Now, the reason this is so straightforward right now is that we're only overfitting 32 examples: we only have the 32 examples of the first five words. It's very easy to make this neural net fit only these 32 examples, because we have about 3,400 parameters and only 32 examples. So we're doing what's called overfitting a single batch of the data, and getting a very low loss and good predictions, but that's just because we have so many parameters for so few examples that it's easy to make the loss very low. Now, we're not able to achieve exactly 0, and the reason for that can be seen if we look at the logits being predicted. We can take the max along dimension 1, and in PyTorch, max reports both the actual maximum values and the indices at which they occur. You'll see that the indices are very close to the labels, but in some cases they differ. For example, in this very first example the predicted index is 19, but the label is 5, and we're not able to make the loss be 0. Fundamentally, that's because the very first example, the one at index 0, is where '...' is supposed to predict e; but you see how '...' is also supposed to predict an o, and an a, and an i, and an s as well. So e, o, a, i, and s are all possible outcomes in the training set for the exact same input, and we're not able to completely overfit and make the loss exactly 0. But we're getting very close, and in the cases where there's a unique output for a unique input, we do fully overfit and get exactly the correct result. So now all we have to do is make sure we read in the full dataset and optimize the neural net. Okay, so let's swing back up to where we created the dataset, and we see that here we only used the first five words. Let me now erase that, and let me erase the print statements, otherwise we'd be printing way too much. When we process the full dataset of all the words, we now have 228,000 examples instead of just 32. So let's scroll back down: the dataset is much larger. We initialize the weights, the same number of parameters, they all require gradients, and then let's move this print of loss.item() to be here, and let's just see how the optimization goes if we run it. Okay, so we started with a fairly high loss, and as we're optimizing, the loss is coming down. But you'll notice that it takes quite a bit of time for every single iteration. Let's actually address that, because we're doing way too much work forwarding and backwarding 228,000 examples. In practice, what people usually do is perform the forward pass, backward pass, and update on mini-batches of the data. What we want to do is randomly select some portion of the dataset, that's the mini-batch, and then only forward, backward, and update on that little mini-batch, and iterate over those mini-batches. In PyTorch we can, for example, use torch.randint: we can generate numbers between 0 and 5 and make 32 of them.
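Here is a sketch of where this is heading: 32 random indices selecting a mini-batch out of the full dataset (X and Y now hold all 228,000 examples):

```python
# 32 random row indices into the training set; note the size argument is a tuple
ix = torch.randint(0, X.shape[0], (32,))

emb = C[X[ix]]                             # (32, 3, 2) instead of (228000, 3, 2)
h = torch.tanh(emb.view(-1, 6) @ W1 + b1)
logits = h @ W2 + b2
loss = F.cross_entropy(logits, Y[ix])      # ix must index into Y as well
```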
I believe the size has to be a tuple in PyTorch, so we pass a tuple to get 32 numbers between 0 and 5. But actually we want X.shape[0] here, and so this creates 32 integers that index into our dataset. So if our mini-batch size is 32, we can come here and first do the mini-batch construction. The integers that we want to optimize over in this single iteration are in ix, and then we index into X with ix to grab only those rows. So we're only getting 32 rows of X, and therefore the embeddings will again be 32 by 3 by 2, not 228,000 by 3 by 2. And then this ix has to be used not just to index into X, but also to index into Y. And now this runs on mini-batches, and it should be much, much faster. Okay, so it's almost instant. This way we can process many, many examples nearly instantly and decrease the loss much, much faster. Now, because we're only dealing with mini-batches, the quality of our gradient is lower, so the direction is not as reliable: it's not the actual gradient direction. But the gradient direction is good enough, even when estimated on only 32 examples, to be useful. And it's much better to have an approximate gradient and take more steps than to evaluate the exact gradient and take fewer steps. That's why in practice this works quite well. So let's now continue the optimization. Let me take this loss.item() out from here and place it at the end. Okay, so we're hovering around 2.5 or so. However, this is only the loss for that mini-batch, so let's actually evaluate the loss for all of X and all of Y, just so we have a full sense of exactly how well the model is doing right now. So right now we're at about 2.7 on the entire training set. Let's run the optimization for a while. Okay, we're at 2.6, 2.57, 2.53. So one issue, of course, is that we don't know if we're stepping too slowly or too fast. At this point, the learning rate I'm using is just a guess. So one question is: how do you determine this learning rate, and how do we gain confidence that we're stepping at the right sort of speed? I'll show you one way to determine a reasonable learning rate. It works as follows. Let's reset our parameters to the initial settings, and now let's print at every step, but only do 10 steps or so, or maybe 100 steps. We want to find a reasonable search range, if you will. For example, if the learning rate is very low, we see that the loss is barely decreasing, so that's too low, basically. Let's try this one. Okay, we're decreasing the loss, but not very quickly, so that's a pretty good lower bound. Now let's reset again, and let's try to find the place at which the loss kind of explodes. So maybe at a learning rate of 1? Okay, we see that we're minimizing the loss, but it's kind of unstable: it goes up and down quite a bit. So 1 is probably a fast learning rate; let's try 10. Okay, this isn't optimizing at all; this is not working very well. So 10 is way too big, and 1 was already kind of big. Therefore, the right learning rate is somewhere between 0.001 and 1. The way we can search this range is with torch.linspace: we want to basically step between 0.001 and 1, and the number of steps is one more parameter that's required; let's do 1,000 steps. This creates 1,000 numbers between 0.001 and 1.
But it doesn't really make sense to step between these linearly. So instead, let me create a learning rate exponent: instead of 0.001, this will be negative 3, and instead of 1 it will be 0. And then the actual learning rates that we want to search over are going to be 10 to the power of lre. So now we're stepping linearly between the exponents of these learning rates: this end is 0.001, and this end is 1, because 10 to the power of 0 is 1, and therefore we are spaced exponentially in this interval. These are the candidate learning rates that we want to search over, roughly. So now what we're going to do is run the optimization for 1,000 steps, and instead of using a fixed learning rate we are going to index into the learning rates, lrs[i], making the loop variable i. So let me reset this to again start from random, creating these learning rates between 0.001 and 1, exponentially spaced. And here we're iterating 1,000 times, using a learning rate that is very, very low in the beginning, 0.001, but 1 by the end, and stepping with that learning rate. And now we want to keep track of the learning rates that we used and look at the losses that resulted. So here, let me track the stats: lri.append(lr) and lossi.append(loss.item()). Okay, so again reset everything and run. We started with a very low learning rate and went all the way up to a learning rate of 1. Now what we can do is plt.plot the two: the learning rates on the x-axis and the losses we saw on the y-axis. Often you're going to find that your plot looks something like this: in the beginning we had very low learning rates, so barely anything happened; then we got to a nice spot here; and then, as the learning rate grew large enough, things started to become unstable. So a good learning rate turns out to be somewhere around here. And because we have lri here, we may actually want to plot not the learning rate but its exponent: lre[i] is maybe what we want to track instead. So let me reset this and redo that calculation, but now with the exponent of the learning rate on the x-axis. Now we can read off the exponent of the learning rate that is good to use: it would be roughly in the valley here, because here the learning rates are way too low, here we see relatively good learning rates, and then here things start to explode. So somewhere around negative 1 as the exponent of the learning rate is a pretty good setting, and 10 to the negative 1 is 0.1. So 0.1 was actually a fairly good learning rate, and that's what we had in the initial setting. But that's roughly how you would determine it. So here we can now take out the tracking of these stats and simply set the learning rate to 10 to the negative 1, or 0.1, as it was before, and now we have some confidence that this is actually a fairly good learning rate. So what we can do now is crank up the iterations: we reset the optimization and run for a pretty long time using this learning rate. We don't want to print, that's way too much printing, so let me again reset and run 10,000 steps. Okay, so we're at roughly 2.48.
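As a recap of this learning rate search, here is a sketch of the whole procedure, run from freshly initialized parameters:

```python
import matplotlib.pyplot as plt

# candidate learning rates, spaced exponentially between 0.001 and 1
lre = torch.linspace(-3, 0, 1000)   # the exponents
lrs = 10 ** lre                     # 10**-3 = 0.001 up to 10**0 = 1

lri, lossi = [], []
for i in range(1000):
    # mini-batch forward/backward, as before
    ix = torch.randint(0, X.shape[0], (32,))
    emb = C[X[ix]]
    h = torch.tanh(emb.view(-1, 6) @ W1 + b1)
    logits = h @ W2 + b2
    loss = F.cross_entropy(logits, Y[ix])
    for p in parameters:
        p.grad = None
    loss.backward()
    lr = lrs[i]                     # very low at first, up to 1.0 by the end
    for p in parameters:
        p.data += -lr * p.grad
    lri.append(lre[i].item())       # track the exponent of the learning rate...
    lossi.append(loss.item())       # ...and the loss it produced

plt.plot(lri, lossi)                # the valley suggests a good exponent, around -1
```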
Let's run another 10,000 steps: 2.46. And now let's do one learning rate decay. What this means is we're going to take our learning rate and make it 10 times lower, because we're potentially at the late stages of training and may want to go a bit slower. Let's actually do one more run at 0.1 first, just to see if we're still making a dent here. Okay, we're still making a dent. And by the way, the bigram loss that we achieved last video was 2.45, so we've already surpassed the bigram model. Once I get a sense that this is actually starting to plateau, people like to do, as I mentioned, this learning rate decay. So let's decay the learning rate, and we achieve about 2.3 now. Obviously, this is janky and not exactly how you would train it in production, but it's roughly what you'd go through: you first find a decent learning rate using the approach I showed you, then you start with that learning rate and train for a while, and at the end people like to do a learning rate decay, where you decay the learning rate by, say, a factor of 10 and do a few more steps, and then you have a trained network, roughly speaking. So we've achieved 2.3 and dramatically improved on the bigram language model using this simple neural net: we seem to have a better model, because we are achieving a lower loss, 2.3, much lower than the 2.45 of the bigram model previously. Now, that conclusion is not exactly sound yet, and the reason is that this is actually a fairly small model. These models can get larger and larger as you keep adding neurons and parameters: instead of a few thousand parameters, we could have 10,000, or 100,000, or millions of parameters. And as the capacity of the neural network grows, it becomes more and more capable of overfitting your training set. That means the loss on the data you're training on can become very, very low, as low as 0, while all the model is doing is memorizing your training set verbatim. If you take such a model, it looks like it's working really well, but when you try to sample from it, you basically only get examples exactly as they are in the training set; you don't get any new data. In addition, if you try to evaluate the loss on some withheld names or other words, you will actually see that the loss on those can be very high. So it's not a good model. The standard in the field, therefore, is to split up your dataset into 3 splits, as we call them: the training split, the dev or validation split, and the test split. Typically the training split would be, say, 80% of your dataset, the dev split 10%, and the test split 10%, roughly. The training split is used to optimize the parameters of the model, just like we're doing here using gradient descent. The dev or validation split is used for development over the hyperparameters of your model. Hyperparameters are, for example, the size of this hidden layer or the size of the embedding, which are 100 and 2 for us right now, but we could try different things; or the strength of regularization, which we aren't using yet. There are lots of different hyperparameters and settings that go into defining a neural net.
You can try many different variations of them and see whichever one works best on your validation split. So the training split is used to train the parameters, the dev split is used to tune the hyperparameters, and the test split is used to evaluate the performance of the model at the end. We only evaluate the loss on the test split very, very sparingly, and very few times, because every single time you evaluate your test loss and learn something from it, you basically start to also train on the test split. So you are only allowed to check the loss on the test set very few times; otherwise, you risk overfitting to it as well as you experiment on your model. So let's split up our data into train, dev, and test, and then train on the training split and evaluate on the test split only very sparingly. Okay, so here we go. Here is where we took all the words and put them into the X and Y tensors. Instead, let me create a new cell here and copy-paste some code, because it isn't that complex, and we'll save a little bit of time. I'm converting this to be a function now: this function takes some list of words and builds the arrays X and Y for those words only. Then here I am shuffling up all the words: these are the input words we get, and we randomly shuffle them. Then we set n1 to be 80% of the number of words, and n2 to be 90% of the words. So if len(words) is about 32,000, n1 is roughly 25,600 and n2 is roughly 28,800. And here you see that I'm calling build_dataset to build the training set Xtr, Ytr from the words up to n1. So we're going to have roughly 25,600 training words, then roughly n2 minus n1, about 3,200, validation or dev words, and len(words) minus n2, about 3,200 words, for the test set. So now we have X's and Y's for all three splits. I'm printing their sizes here inside the function as well; note these are no longer words but the individual examples made from those words. So let's now scroll down. The dataset for training is now much bigger, and when we're training, we're only going to be using Xtr and Ytr: that's the only thing we're training on. Let's see where we are on a single batch. Let's now train maybe a few more steps. Training neural networks can take a while; usually you don't do it inline, you launch a bunch of jobs and wait for them to finish, and it can take multiple days and so on. Luckily, this is a very small network. Okay, so the loss is pretty good. Oh wait, we accidentally used a learning rate that is way too low: we left it at the decayed learning rate of 0.01. Let me fix that; this will train much faster. Then here, when we evaluate, let's use the dev set, Xdev and Ydev, to evaluate the loss. Okay, and let's not decay the learning rate, and only do, say, 10,000 steps. Let's evaluate the dev loss once here. Okay, so we're getting about 2.3 on dev. The neural network did not see these dev examples during training; it hasn't optimized on them. And yet, when we evaluate the loss on the dev set, we actually get a pretty decent loss. We can also look at what the loss is on all of the training set, and we see that the training and the dev loss are about equal. So we're not overfitting.
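As a recap, here is a sketch of that splitting; the shuffle seed is an assumption:

```python
import random

def build_dataset(words):
    # builds the X, Y example arrays for a given list of words, exactly as before
    block_size = 3
    X, Y = [], []
    for w in words:
        context = [0] * block_size
        for ch in w + '.':
            ix = stoi[ch]
            X.append(context)
            Y.append(ix)
            context = context[1:] + [ix]
    return torch.tensor(X), torch.tensor(Y)

random.seed(42)             # assumed seed for the shuffle
random.shuffle(words)
n1 = int(0.8 * len(words))  # ~25,600: end of the training split
n2 = int(0.9 * len(words))  # ~28,800: end of the dev split

Xtr,  Ytr  = build_dataset(words[:n1])    # 80% train
Xdev, Ydev = build_dataset(words[n1:n2])  # 10% dev/validation
Xte,  Yte  = build_dataset(words[n2:])    # 10% test
```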
This model is not powerful enough to be purely memorizing the data. So far we are what's called underfitting, because the training loss and the dev or test losses are roughly equal. What that typically means is that our network is very tiny, very small, and we expect to make performance improvements by scaling up the size of this neural net. So let's do that now. Let's come over here and increase the size of the neural net. The easiest way to do this is to come to the hidden layer, which currently has 100 neurons, and bump it up: let's do 300 neurons, so that's also 300 biases, and here 300 inputs into the final layer. Let's reinitialize, and I'd also like to keep track of stats again. And here, when we're keeping track of the loss, let's also keep track of the steps; we'll just append i here. And let's train for 30,000 steps, at a learning rate of 0.1. We should be able to run this and optimize the neural net. Then here, basically, I want to plt.plot the steps against the loss: these are the x's and the y's, showing the loss function and how it's being optimized. Now, you see that there's quite a bit of thickness to this curve, and that's because we are optimizing over mini-batches, and the mini-batches create a little bit of noise. Where are we on the dev set? We are at 2.5. So we still haven't optimized this neural net very well, and that's probably because we made it bigger: it might take longer for this neural net to converge. So let's continue training. One possibility is that the batch size is so low that we just have way too much noise in the training, and we may want to increase the batch size so we have a more reliable gradient and we're not thrashing around too much, and we can optimize more properly. Okay, this plot will now become meaningless because we've reinitialized, and yeah, this doesn't look pleasing right now. There probably is a tiny improvement, but it's so hard to tell. Let's go again: 2.52. Let's try to decrease the learning rate by a factor of 2. Okay, we're at 2.32. Let's continue training. We basically expect to see a lower loss than what we had before, because now we have a much, much bigger model and we were underfitting, so we'd expect that increasing the size of the model should help. 2.32, okay, so that's not happening too well. Now, one other concern is that even though we've made the tanh layer, the hidden layer, much, much bigger, it could be that the bottleneck of the network right now is these embeddings, which are 2-dimensional. It could be that we're cramming way too many characters into just 2 dimensions, the neural net isn't able to use that space effectively, and that is the bottleneck of our network's performance. Okay, 2.23, so just by decreasing the learning rate I was able to make quite a bit of progress. Let's run this one more time, and then evaluate the training and the dev loss. Now, one more thing I'd like to do after training is visualize the embedding vectors for these characters before we scale up the embedding size from 2, because we'd like to make this potential bottleneck go away, and once the embedding size is greater than 2 we won't be able to visualize the embeddings. So here, look: we're at 2.23 and 2.24.
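While the embeddings are still 2-dimensional, we can scatter-plot them directly. Here is a sketch of that visualization, assuming C is 27 by 2 and itos maps integers back to characters:

```python
import matplotlib.pyplot as plt

plt.figure(figsize=(8, 8))
# each character's 2-d embedding: column 0 is the x coordinate, column 1 the y
plt.scatter(C[:, 0].data, C[:, 1].data, s=200)
for i in range(C.shape[0]):
    # draw the character's text at its embedding location
    plt.text(C[i, 0].item(), C[i, 1].item(), itos[i],
             ha="center", va="center", color="white")
plt.grid('minor')
plt.show()
```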
So we're not improving much more, and maybe the bottleneck now is the character embedding size, which is 2. So here I have a bunch of code along the lines of the sketch above: it creates a figure, and then we visualize the embeddings that were trained by the neural net for these characters. Because the embedding size is just 2 right now, we can visualize all the characters with the x and the y coordinates being the 2 embedding locations of each character. So here are the x coordinates and the y coordinates, which are the columns of C, and for each point I also include the text of the little character. What we see here is actually kind of interesting: the network has basically learned to separate out the characters and cluster them a little bit. For example, you see how the vowels a, e, i, o, u are clustered up here? That's telling us that the neural net treats these as very similar: when they feed into the neural net, the embeddings for all of these characters are very similar, so the neural net thinks they're very similar and kind of interchangeable, if that makes sense. Then the points that are really far away are, for example, q: q is kind of treated as an exception, and it has a very special embedding vector, so to speak. Similarly, the dot, which is a special character, is all the way out here, and a lot of the other letters are clustered up here. So it's kind of interesting that there's a little bit of structure here after the training. But because we're underfitting, and because we made the hidden layer much bigger without sufficiently improving the loss, we're thinking that the constraint on better performance right now could be these embedding vectors. So let's make them bigger. Okay, so let's scroll up here, and now instead of 2-dimensional embeddings we're going to have, say, 10-dimensional embeddings for each character. Then this layer will receive 3 times 10, so 30 inputs will go into the hidden layer. Let's also make the hidden layer a bit smaller: instead of 300, let's do 200 neurons in that hidden layer. So now the total number of parameters will be slightly bigger, at about 11,000. And then here we have to be a bit careful, because, okay, the learning rate we set to 0.1; but here we were hard-coding a 6, and obviously if you're working in production you don't want to hard-code magic numbers. Instead of 6, this should now be 30. Let's run for 50,000 iterations, and let me split out the initialization into its own cell, so that when we run the training cell multiple times it doesn't wipe out our loss history. In addition, instead of logging loss.item(), let's actually log the log10 of the loss, and I'll show you why in a second. Let's optimize this. Basically, I'd like to plot the log of the loss instead of the loss itself, because when you plot the loss it often has this hockey-stick appearance, and taking the log squashes it in, so it just looks nicer. So the x-axis is stepi and the y-axis is lossi. And then here, this is 30; ideally we wouldn't be hard-coding these either. Let's look at the loss: it's again very thick, because the mini-batch size is very small, but the total loss over the training set is 2.3, and on the dev set it's 2.38. So far so good. Let's try to now decrease the learning rate by a factor of 10 and train for another 50,000 iterations. We'd hope that we would be able to beat 2.32.
But again, we're just kind of doing this very haphazardly, so I don't actually have confidence that our learning rate is set very well, or that our learning rate decay, which we just do at random, is set very well. The optimization here is kind of suspect, to be honest, and this is not how you would typically do it. In production, you would create parameters or hyperparameters out of all these settings, and then you would run lots of experiments and see which ones are working well for you. Okay, so we have 2.17 now, and 2.2. You see how the training and the validation performance are starting to slowly depart? So maybe we're getting the sense that the neural net is good enough, or the number of parameters is large enough, that we are slowly starting to overfit. Let's maybe run one more iteration of this and see where we get. But yeah, basically you would be running lots of experiments, and then you would slowly scrutinize which ones give you the best dev performance. Once you've found all the hyperparameters that make your dev performance good, you take that model and you evaluate the test set performance a single time. That's the number you report in your paper, or wherever else you want to talk about and brag about your model. So let's rerun the plot and rerun the train and dev losses. And because we're getting a lower loss now, it was very likely the case that the embedding size was holding us back. Okay, so 2.16 and 2.19 is what we're roughly getting. There are many ways to go from here. We can continue tuning the optimization; we can continue playing with the size of the neural net; or we can increase the number of words, or characters in our case, that we are taking as input. So instead of just 3 characters we could take more characters as input, and that could further improve the loss. Okay, so I changed the code slightly, as sketched below. We now have 200,000 steps of the optimization: in the first 100,000 we're using a learning rate of 0.1, and then in the next 100,000 we're using a learning rate of 0.01. This is the loss that I achieve, and these are the performances on the training and validation sets. The validation loss I've been able to obtain in the last 30 minutes or so is 2.17. So now I invite you to beat this number, and you have quite a few knobs available to you to, I think, surpass it. Number 1: you can of course change the number of neurons in the hidden layer of this model. You can change the dimensionality of the embedding lookup table. You can change the number of characters that feed in as the context into this model. And then of course you can change the details of the optimization: how long are we running, what is the learning rate, how does it change over time, how does it decay? You can change the batch size, and you may be able to achieve a much better convergence speed, in terms of how many seconds or minutes it takes to train the model and get a really good loss. And then, of course, I actually invite you to read this paper. It is 19 pages, but at this point you should be able to read a good chunk of it and understand pretty good chunks of it. This paper also has quite a few ideas for improvements that you can play with. So all of those are now available to you, and you should be able to beat this number. I'm leaving that as an exercise to the reader.
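The two-stage schedule described above might look like this in the training loop (a sketch; the step counts and learning rates are the ones quoted, and the rest of the loop is assumed from the earlier cells):

```python
for i in range(200000):
    ix = torch.randint(0, Xtr.shape[0], (32,))
    emb = C[Xtr[ix]]                              # (32, 3, 10)
    h = torch.tanh(emb.view(-1, 30) @ W1 + b1)    # (32, 200)
    logits = h @ W2 + b2                          # (32, 27)
    loss = F.cross_entropy(logits, Ytr[ix])
    for p in parameters:
        p.grad = None
    loss.backward()
    lr = 0.1 if i < 100000 else 0.01              # simple step decay at the halfway point
    for p in parameters:
        p.data += -lr * p.grad
    stepi.append(i)
    lossi.append(loss.log10().item())             # log10 for a nicer, squashed plot
```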
And that's it for now; I'll see you next time. Before we wrap up, I also wanted to show how you would sample from the model. We're going to generate 20 samples. At first we begin with all dots, so that's the context, and then, until we generate the 0th character (the dot) again, we embed the current context using the embedding table C. Usually the first dimension here was the size of the training set, but here we're only working with the single example that we're generating, so it's just a batch dimension of 1, for simplicity. This embedding then gets projected into the hidden state, and we get the logits. Then we calculate the probabilities. For that, you can use F.softmax of the logits; that basically exponentiates the logits and makes them sum to 1, and, similar to cross_entropy, it is careful that there are no overflows. Once we have the probabilities, we sample from them using torch.multinomial to get our next index, and then we shift the context window to append the index, and record it. Then we can just decode all the integers to strings and print them out. And so these are some example samples, and you can see that the model now works much better: the outputs here are much more word-like or name-like. We have things like am, jose, lila. You know, it's starting to sound a little bit more name-like. So we're definitely making progress, but we can still improve on this model quite a lot. Okay, sorry, there's some bonus content. I wanted to mention that I want to make these notebooks more accessible, so I don't want you to have to install Jupyter notebooks and Torch and everything else. I will be sharing a link to a Google Colab, and the Google Colab will look like a notebook in your browser. You can just go to a URL and you'll be able to execute all of the code that you saw, in the Google Colab. This is me executing the code from this lecture; I shortened it a little bit, but basically you're able to train the exact same network, and then plot and sample from the model, and everything is ready for you to tinker with the numbers right there in your browser. No installation necessary. So I just wanted to point that out, and the link will be in the video description.
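The sampling loop walked through above looks roughly like this (a sketch assuming block_size = 3 and the itos decoding table from earlier; it mirrors the description rather than the exact notebook cell):

```python
g = torch.Generator().manual_seed(2147483647 + 10)
for _ in range(20):
    out = []
    context = [0] * block_size                    # initialize with all '.' tokens
    while True:
        emb = C[torch.tensor([context])]          # (1, block_size, d): batch dimension of 1
        h = torch.tanh(emb.view(1, -1) @ W1 + b1)
        logits = h @ W2 + b2
        probs = F.softmax(logits, dim=1)          # exponentiates and normalizes, without overflow
        ix = torch.multinomial(probs, num_samples=1, generator=g).item()
        context = context[1:] + [ix]              # slide the context window
        out.append(ix)
        if ix == 0:                               # '.' marks the end of the name
            break
    print(''.join(itos[i] for i in out))
```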
"Everyone. Today we are continuing our implementation of Make More. Now in the last lecture we imple(...TRUNCATED)
"每个人。今天,我们将继续实施“Make More”计划。在上一讲中,我们按照 Be(...TRUNCATED)
"Hi everyone. So today we are once again continuing our implementation of Make More. Now so far we'v(...TRUNCATED)
"大家好。所以今天我们再次继续实施“Make More”计划。到目前为止,我们已(...TRUNCATED)
"Hi, everyone. Today, we are continuing our implementation of Make More, our favorite character leve(...TRUNCATED)
"大家好。今天,我们继续实现我们最喜欢的字符级语言模型 Make More。现在(...TRUNCATED)
"Hi, everyone. So by now, you have probably heard of ChatGPT. It has taken the world and the AI comm(...TRUNCATED)
"大家好。所以现在,你可能已经听说过 ChatGPT。它席卷了世界和人工智能社(...TRUNCATED)
