You now know how to create state-of-the-art architectures for computer vision, natural language processing, tabular analysis, and collaborative filtering, and you know how to train them quickly. So we’re done, right? Not quite yet. We still have a few more details of the training process to explore.
+
We explained in Chapter 4 the basics of stochastic gradient descent: pass a mini-batch to the model, compare its output to our targets with the loss function, then compute the gradients of this loss function with respect to each weight before updating the weights with the formula:
+
new_weight = weight - lr * weight.grad
+
We implemented this from scratch in a training loop, and we also saw that PyTorch provides a simple optim.SGD class that does this calculation for each parameter for us. In this chapter we will build some faster optimizers, using a flexible foundation. But that’s not all we might want to change in the training process: for any tweak of the training loop, we need a way to add some code to this basic SGD step. The fastai library has a system of callbacks to do this, and we will teach you all about it.
+
Let’s start with standard SGD to get a baseline, then we will introduce the most commonly used optimizers.
+
+
16.1 Establishing a Baseline
+
First, we’ll create a baseline, using plain SGD, and compare it to fastai’s default optimizer. We’ll start by grabbing Imagenette with the same get_data we used in Chapter 14:
+
+
dls = get_data(URLs.IMAGENETTE_160, 160, 128)
+
+
We’ll create a ResNet-34 without pretraining, and pass along any arguments received:
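Here’s a minimal version of such a helper; the exact definition in the book’s notebooks may differ slightly:

def get_learner(**kwargs):
    # A randomly initialized (pretrained=False) ResNet-34 on the Imagenette
    # DataLoaders above; extra keyword arguments (such as opt_func) are
    # forwarded to vision_learner.
    return vision_learner(dls, resnet34, pretrained=False,
                          metrics=accuracy, **kwargs)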
Now let’s try plain SGD. We can pass opt_func (optimization function) to vision_learner to get fastai to use any optimizer:
+
+
learn = get_learner(opt_func=SGD)
+
+
The first thing to look at is lr_find:
+
+
learn.lr_find()
+
+
It looks like we’ll need to use a higher learning rate than we normally use:
+
+
learn.fit_one_cycle(3, 0.03, moms=(0,0,0))
+
+
+
+
+
epoch  train_loss  valid_loss  accuracy  time
0      2.969412    2.214596    0.242038  00:09
1      2.442730    1.845950    0.362548  00:09
2      2.157159    1.741143    0.408917  00:09
+
+
+
+
+
+
(Because accelerating SGD with momentum is such a good idea, fastai does this by default in fit_one_cycle, so we turn it off with moms=(0,0,0). We’ll be discussing momentum shortly.)
+
Clearly, plain SGD isn’t training as fast as we’d like. So let’s learn some tricks to get accelerated training!
+
+
+
16.2 A Generic Optimizer
+
To build up our accelerated SGD tricks, we’ll need to start with a nice flexible optimizer foundation. No library prior to fastai provided such a foundation, but during fastai’s development we realized that all the optimizer improvements we’d seen in the academic literature could be handled using optimizer callbacks. These are small pieces of code that we can compose, mix and match in an optimizer to build the optimizer step. They are called by fastai’s lightweight Optimizer class. These are the definitions in Optimizer of the two key methods that we’ve been using in this book:
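In lightly simplified form (the exact source may differ a little between fastai versions), those two methods look something like this:

def zero_grad(self):
    for p,*_ in self.all_params():
        # Drop the gradient history and reset the gradients to zero.
        p.grad.detach_()
        p.grad.zero_()

def step(self):
    for p,pg,state,hyper in self.all_params():
        # Let each optimizer callback update the parameter; anything it
        # returns is merged into that parameter's state for the next step.
        for cb in self.cbs: state = _update(state, cb(p, **{**state, **hyper}))
        self.state[p] = state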
As we saw when training an MNIST model from scratch, zero_grad just loops through the parameters of the model and sets the gradients to zero. It also calls detach_, which removes any history of gradient computation, since it won’t be needed after zero_grad.
+
The more interesting method is step, which loops through the callbacks (cbs) and calls them to update the parameters (the _update function just calls state.update if there’s anything returned by cb). As you can see, Optimizer doesn’t actually do any SGD steps itself. Let’s see how we can add SGD to Optimizer.
+
Here’s an optimizer callback that does a single SGD step, by multiplying -lr by the gradients and adding that to the parameter (when Tensor.add_ in PyTorch is passed two parameters, they are multiplied together before the addition):
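A minimal version of that callback looks something like this (the exact code in fastai’s notebook may differ slightly; in recent PyTorch versions the call is spelled p.data.add_(p.grad.data, alpha=-lr)):

def sgd_cb(p, lr, **kwargs): p.data.add_(-lr, p.grad.data)

We can then pass it to Optimizer with partial and train with the result, for instance:

opt_func = partial(Optimizer, cbs=[sgd_cb])
learn = get_learner(opt_func=opt_func)
learn.fit(3, 0.03)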
It’s working! So that’s how we create SGD from scratch in fastai. Now let’s see what “momentum” is.
+
+
+
16.3 Momentum
+
As described in Chapter 4, SGD can be thought of as standing at the top of a mountain and working your way down by taking a step in the direction of the steepest slope at each point in time. But what if we have a ball rolling down the mountain? It won’t, at each given point, exactly follow the direction of the gradient, as it will have momentum. A ball with more momentum (for instance, a heavier ball) will skip over little bumps and holes, and be more likely to get to the bottom of a bumpy mountain. A ping pong ball, on the other hand, will get stuck in every little crevice.
+
So how can we bring this idea over to SGD? We can use a moving average, instead of only the current gradient, to make our step:
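weight.avg = beta * weight.avg + (1-beta) * weight.grad
new_weight = weight - lr * weight.avg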
Here beta is some number we choose which defines how much momentum to use. If beta is 0, then the first equation becomes weight.avg = weight.grad, so we end up with plain SGD. But if it’s a number close to 1, then the main direction chosen is an average of the previous steps. (If you have done a bit of statistics, you may recognize in the first equation an exponentially weighted moving average, which is very often used to denoise data and get the underlying tendency.)
+
Note that we are writing weight.avg to highlight the fact that we need to store the moving averages for each parameter of the model (they all have their own independent moving averages).
+
Figure 16.1 shows an example of noisy data for a single parameter, with the momentum curve plotted in red, and the gradients of the parameter plotted in blue. The gradients increase, then decrease, and the momentum does a good job of following the general trend without getting too influenced by noise.
+
+
It works particularly well if the loss function has narrow canyons we need to navigate: vanilla SGD would send us bouncing from one side to the other, while SGD with momentum will average those to roll smoothly down the side. The parameter beta determines the strength of the momentum we are using: with a small beta we stay closer to the actual gradient values, whereas with a high beta we will mostly go in the direction of the average of the gradients and it will take a while before any change in the gradients makes that trend move.
+
With a large beta, we might miss that the gradients have changed directions and roll over a small local minimum. This is a desired side effect: intuitively, when we show a new input to our model, it will look like something in the training set but won’t be exactly like it. That means it will correspond to a point in the loss function that is close to the minimum we ended up with at the end of training, but not exactly at that minimum. So, we would rather end up training in a wide minimum, where nearby points have approximately the same loss (or if you prefer, a point where the loss is as flat as possible). Figure 16.2 shows how the chart in Figure 16.1 varies as we change beta.
+
+
We can see in these examples that a beta that’s too high results in the overall changes in gradient getting ignored. In SGD with momentum, a value of beta that is often used is 0.9.
+
fit_one_cycle by default starts with a beta of 0.95, gradually adjusts it to 0.85, and then gradually moves it back to 0.95 at the end of training. Let’s see how our training goes with momentum added to plain SGD.
+
In order to add momentum to our optimizer, we’ll first need to keep track of the moving average gradient, which we can do with another callback. When an optimizer callback returns a dict, it is used to update the state of the optimizer and is passed back to the optimizer on the next step. So this callback will keep track of the gradient averages in a parameter called grad_avg:
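One way to write that pair of callbacks (the exact fastai definitions may differ a little; fastai calls the momentum hyperparameter mom) is:

def average_grad(p, mom, grad_avg=None, **kwargs):
    # Exponentially weighted moving average of the gradients, as in the
    # equations above.
    if grad_avg is None: grad_avg = torch.zeros_like(p.grad.data)
    return {'grad_avg': mom*grad_avg + (1-mom)*p.grad.data}

def momentum_step(p, lr, grad_avg, **kwargs):
    # Step using the moving average instead of the raw gradient.
    p.data.add_(-lr, grad_avg)

opt_func = partial(Optimizer, cbs=[average_grad,momentum_step], mom=0.9)
learn = get_learner(opt_func=opt_func)
learn.fit_one_cycle(3, 0.03)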
We’re still not getting great results, so let’s see what else we can do.
+
+
+
16.4 RMSProp
+
RMSProp is another variant of SGD introduced by Geoffrey Hinton in Lecture 6e of his Coursera class “Neural Networks for Machine Learning”. The main difference from SGD is that it uses an adaptive learning rate: instead of using the same learning rate for every parameter, each parameter gets its own specific learning rate controlled by a global learning rate. That way we can speed up training by giving a higher learning rate to the weights that need to change a lot while the ones that are good enough get a lower learning rate.
+
How do we decide which parameters should have a high learning rate and which should not? We can look at the gradients to get an idea. If a parameter’s gradients have been close to zero for a while, that parameter will need a higher learning rate because the loss is flat. On the other hand, if the gradients are all over the place, we should probably be careful and pick a low learning rate to avoid divergence. We can’t just average the gradients to see if they’re changing a lot, because the average of a large positive and a large negative number is close to zero. Instead, we can use the usual trick of either taking the absolute value or the squared values (and then taking the square root after the mean).
+
Once again, to determine the general tendency behind the noise, we will use a moving average—specifically the moving average of the gradients squared. Then we will update the corresponding weight by using the current gradient (for the direction) divided by the square root of this moving average (that way if it’s low, the effective learning rate will be higher, and if it’s high, the effective learning rate will be lower):
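w.square_avg = alpha * w.square_avg + (1-alpha) * (w.grad ** 2)
new_w = w - lr * w.grad / (math.sqrt(w.square_avg) + eps)

Here alpha (called sqr_mom in fastai) plays the same role beta played for momentum, and eps protects against dividing by zero. We can again express this as a pair of optimizer callbacks (the exact fastai definitions may differ slightly):

def average_sqr_grad(p, sqr_mom, sqr_avg=None, **kwargs):
    # Moving average of the squared gradients.
    if sqr_avg is None: sqr_avg = torch.zeros_like(p.grad.data)
    return {'sqr_avg': sqr_mom*sqr_avg + (1-sqr_mom)*p.grad.data**2}

def rms_prop_step(p, lr, sqr_avg, eps, **kwargs):
    # Divide the gradient by the square root of the moving average
    # (plus eps for numerical stability) before stepping.
    denom = sqr_avg.sqrt().add_(eps)
    p.data.addcdiv_(p.grad.data, denom, value=-lr)

opt_func = partial(Optimizer, cbs=[average_sqr_grad,rms_prop_step],
                   sqr_mom=0.99, eps=1e-7)
learn = get_learner(opt_func=opt_func)
learn.fit_one_cycle(3, 0.003)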
Much better! Now we just have to bring these ideas together, and we have Adam, fastai’s default optimizer.
+
+
+
16.5 Adam
+
Adam mixes the ideas of SGD with momentum and RMSProp together: it uses the moving average of the gradients as a direction and divides by the square root of the moving average of the gradients squared to give an adaptive learning rate to each parameter.
+
There is one other difference in how Adam calculates moving averages. It takes the unbiased moving average, which is:
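w.avg = beta * w.avg + (1-beta) * w.grad
unbias_avg = w.avg / (1 - (beta**(i+1)))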
where i is the index of the current iteration (starting at 0, as Python does). This divisor of 1 - (beta**(i+1)) makes sure the unbiased average looks more like the gradients at the beginning (since beta < 1, the denominator very quickly gets close to 1).
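Putting it all together, the full Adam update step looks roughly like this (the Adam paper applies the same unbiasing to the squared-gradient average as well):

w.avg = beta1 * w.avg + (1-beta1) * w.grad
unbias_avg = w.avg / (1 - (beta1**(i+1)))
w.sqr_avg = beta2 * w.sqr_avg + (1-beta2) * (w.grad ** 2)
new_w = w - lr * unbias_avg / (math.sqrt(w.sqr_avg) + eps)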
Like for RMSProp, eps is usually set to 1e-8, and the default for (beta1,beta2) suggested by the literature is (0.9,0.999).
+
In fastai, Adam is the default optimizer we use since it allows faster training, but we’ve found that beta2=0.99 is better suited to the type of schedule we are using. beta1 is the momentum parameter, which we specify with the argument moms in our call to fit_one_cycle. As for eps, fastai uses a default of 1e-5. eps is not just useful for numerical stability. A higher eps limits the maximum value of the adjusted learning rate. To take an extreme example, if eps is 1, then the adjusted learning rate will never be higher than the base learning rate.
+
Rather than show all the code for this in the book, we’ll let you look at the optimizer notebook in fastai’s GitHub repository (browse the nbs folder and search for the notebook called optimizer). You’ll see all the code we’ve shown so far, along with Adam and other optimizers, and lots of examples and tests.
+
One thing that changes when we go from SGD to Adam is the way we apply weight decay, and it can have important consequences.
+
+
+
16.6 Decoupled Weight Decay
+
Weight decay, which we discussed in Chapter 8, is equivalent to (in the case of vanilla SGD) updating the parameters with:
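new_weight = weight - lr*weight.grad - lr*wd*weight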
The last part of this formula explains the name of this technique: each weight is decayed by a factor lr * wd.
+
The other name for weight decay is L2 regularization, which consists of adding the sum of all the squared weights to the loss (multiplied by the weight decay). As we have seen in Chapter 8, this can be expressed directly on the gradients with:
+
weight.grad += wd*weight
+
For standard SGD, those two formulations are equivalent. However, the equivalence only holds for plain SGD, because as we have seen, with momentum, RMSProp, or Adam the update applies some additional formulas around the gradient.
+
Most libraries use the second formulation, but it was pointed out in “Decoupled Weight Decay Regularization” by Ilya Loshchilov and Frank Hutter that the first one is the only correct approach with the Adam optimizer or momentum, which is why fastai makes it its default.
+
Now you know everything that is hidden behind the line learn.fit_one_cycle!
+
Optimizers are only one part of the training process, however. When you need to change the training loop with fastai, you can’t directly change the code inside the library. Instead, we have designed a system of callbacks to let you write any tweaks you like in independent blocks that you can then mix and match.
+
+
+
16.7 Callbacks
+
Sometimes you need to change how things work a little bit. In fact, we have already seen examples of this: Mixup, fp16 training, resetting the model after each epoch for training RNNs, and so forth. How do we go about making these kinds of tweaks to the training process?
+
We’ve seen the basic training loop, which, with the help of the Optimizer class, looks like this for a single epoch:
+
for xb,yb in dl:
    loss = loss_func(model(xb), yb)
    loss.backward()
    opt.step()
    opt.zero_grad()
The usual way for deep learning practitioners to customize the training loop is to make a copy of an existing training loop, and then insert the code necessary for their particular changes into it. This is how nearly all code that you find online will look. But it has some very serious problems.
+
It’s not very likely that some particular tweaked training loop is going to meet your particular needs. There are hundreds of changes that can be made to a training loop, which means there are billions and billions of possible permutations. You can’t just copy one tweak from a training loop here, another from a training loop there, and expect them all to work together. Each will be based on different assumptions about the environment that it’s working in, use different naming conventions, and expect the data to be in different formats.
+
We need a way to allow users to insert their own code at any part of the training loop, but in a consistent and well-defined way. Computer scientists have already come up with an elegant solution: the callback. A callback is a piece of code that you write, and inject into another piece of code at some predefined point. In fact, callbacks have been used with deep learning training loops for years. The problem is that in previous libraries it was only possible to inject code in a small subset of places where this may have been required, and, more importantly, callbacks were not able to do all the things they needed to do.
+
In order to be just as flexible as manually copying and pasting a training loop and directly inserting code into it, a callback must be able to read every possible piece of information available in the training loop, modify all of it as needed, and fully control when a batch, epoch, or even the whole training loop should be terminated. fastai is the first library to provide all of this functionality. It modifies the training loop so it looks like Figure 16.4.
+
+
+
+
The real effectiveness of this approach has been borne out over the last couple of years—it has turned out that, by using the fastai callback system, we were able to implement every single new paper we tried and fulfilled every user request for modifying the training loop. The training loop itself has not required modifications. Figure 16.5 shows just a few of the callbacks that have been added.
+
+
+
+
The reason that this is important is because it means that whatever idea we have in our head, we can implement it. We need never dig into the source code of PyTorch or fastai and hack together some one-off system to try out our ideas. And when we do implement our own callbacks to develop our own ideas, we know that they will work together with all of the other functionality provided by fastai–so we will get progress bars, mixed-precision training, hyperparameter annealing, and so forth.
+
Another advantage is that it makes it easy to gradually remove or add functionality and perform ablation studies. You just need to adjust the list of callbacks you pass along to your fit function.
+
As an example, here is the fastai source code that is run for each batch of the training loop:
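Lightly simplified (the exact source changes a little between fastai versions), it looks like this:

try:
    self._split(b);                                  self('before_batch')
    self.pred = self.model(*self.xb);                self('after_pred')
    self.loss = self.loss_func(self.pred, *self.yb); self('after_loss')
    if not self.training: return
    self.loss.backward();                            self('after_backward')
    self.opt.step();                                 self('after_step')
    self.opt.zero_grad()
except CancelBatchException:                         self('after_cancel_batch')
finally:                                             self('after_batch')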
The calls of the form self('...') are where the callbacks are called. As you see, this happens after every step. The callback will receive the entire state of training, and can also modify it. For instance, the input data and target labels are in self.xb and self.yb, respectively; a callback can modify these to alter the data the training loop sees. It can also modify self.loss, or even the gradients.
+
Let’s see how this works in practice by writing a callback.
+
+
Creating a Callback
+
When you want to write your own callback, the full list of available events is:
+
+
before_fit:: called before doing anything; ideal for initial setup.
+
before_epoch:: called at the beginning of each epoch; useful for any behavior you need to reset at each epoch.
+
before_train:: called at the beginning of the training part of an epoch.
+
before_batch:: called at the beginning of each batch, just after drawing said batch. It can be used to do any setup necessary for the batch (like hyperparameter scheduling) or to change the input/target before it goes into the model (for instance, apply Mixup).
+
after_pred:: called after computing the output of the model on the batch. It can be used to change that output before it’s fed to the loss function.
+
after_loss:: called after the loss has been computed, but before the backward pass. It can be used to add penalty to the loss (AR or TAR in RNN training, for instance).
+
after_backward:: called after the backward pass, but before the update of the parameters. It can be used to make changes to the gradients before said update (via gradient clipping, for instance).
+
after_step:: called after the step and before the gradients are zeroed.
+
after_batch:: called at the end of a batch, to perform any required cleanup before the next one.
+
after_train:: called at the end of the training phase of an epoch.
+
before_validate:: called at the beginning of the validation phase of an epoch; useful for any setup needed specifically for validation.
+
after_validate:: called at the end of the validation part of an epoch.
+
after_epoch:: called at the end of an epoch, for any cleanup before the next one.
+
after_fit:: called at the end of training, for final cleanup.
+
+
The elements of this list are available as attributes of the special variable event, so you can just type event. and hit Tab in your notebook to see a list of all the options.
+
Let’s take a look at an example. Do you recall how in Chapter 12 we needed to ensure that our special reset method was called at the start of training and validation for each epoch? We used the ModelResetter callback provided by fastai to do this for us. But how does it work? Here’s the full source code for that class:
+
+
class ModelResetter(Callback):
    def before_train(self): self.model.reset()
    def before_validate(self): self.model.reset()
+
+
Yes, that’s actually it! It just does what we said in the preceding paragraph: at the start of the training and validation phases of each epoch, it calls a method named reset.
+
Callbacks are often “short and sweet” like this one. In fact, let’s look at one more. Here’s the fastai source for the callback that adds RNN regularization (AR and TAR):
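Here’s roughly what it looks like (again, the exact source may differ a little between fastai versions):

class RNNRegularizer(Callback):
    def __init__(self, alpha=0., beta=0.): self.alpha,self.beta = alpha,beta

    def after_pred(self):
        # The RNN returns extra outputs needed for regularization; keep them
        # here and pass only the real predictions on to the loss function.
        self.raw_out,self.out = self.pred[1],self.pred[2]
        self.learn.pred = self.pred[0]

    def after_loss(self):
        if not self.training: return
        if self.alpha != 0.:  # AR: penalize large activations
            self.learn.loss += self.alpha * self.out[-1].float().pow(2).mean()
        if self.beta != 0.:   # TAR: penalize big changes between time steps
            h = self.raw_out[-1]
            if len(h)>1:
                self.learn.loss += self.beta * (h[:,1:] - h[:,:-1]
                                               ).float().pow(2).mean()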
note: Code It Yourself: Go back and reread “Activation Regularization and Temporal Activation Regularization” in Chapter 12 then take another look at the code here. Make sure you understand what it’s doing, and why.
+
+
In both of these examples, notice how we can access attributes of the training loop by directly checking self.model or self.pred. That’s because a Callback will always look inside its associated Learner for any attribute it doesn’t have itself. These are shortcuts for self.learn.model or self.learn.pred. Note that they work for reading attributes, but not for writing them, which is why when RNNRegularizer changes the loss or the predictions you see self.learn.loss = or self.learn.pred =.
+
When writing a callback, the following attributes of Learner are available:
+
+
model:: The model used for training/validation.
+
data:: The underlying DataLoaders.
+
loss_func:: The loss function used.
+
opt:: The optimizer used to update the model parameters.
+
opt_func:: The function used to create the optimizer.
+
cbs:: The list containing all the Callbacks.
+
dl:: The current DataLoader used for iteration.
+
x/xb:: The last input drawn from self.dl (potentially modified by callbacks). xb is always a tuple (potentially with one element) and x is detuplified. You can only assign to xb.
+
y/yb:: The last target drawn from self.dl (potentially modified by callbacks). yb is always a tuple (potentially with one element) and y is detuplified. You can only assign to yb.
+
pred:: The last predictions from self.model (potentially modified by callbacks).
+
loss:: The last computed loss (potentially modified by callbacks).
+
n_epoch:: The number of epochs in this training.
+
n_iter:: The number of iterations in the current self.dl.
+
epoch:: The current epoch index (from 0 to n_epoch-1).
+
iter:: The current iteration index in self.dl (from 0 to n_iter-1).
+
+
The following attributes are added by TrainEvalCallback and should be available unless you went out of your way to remove that callback:
+
+
train_iter:: The number of training iterations done since the beginning of this training
+
pct_train:: The percentage of training iterations completed (from 0. to 1.)
+
training:: A flag to indicate whether or not we’re in training mode
+
+
The following attribute is added by Recorder and should be available unless you went out of your way to remove that callback:
+
+
smooth_loss:: An exponentially averaged version of the training loss
+
+
Callbacks can also interrupt any part of the training loop by using a system of exceptions.
+
+
+
Callback Ordering and Exceptions
+
Sometimes, callbacks need to be able to tell fastai to skip over a batch, or an epoch, or stop training altogether. For instance, consider TerminateOnNaNCallback. This handy callback will automatically stop training any time the loss becomes infinite or NaN (not a number). Here’s the fastai source for this callback:
+
+
class TerminateOnNaNCallback(Callback):
    run_before = Recorder
    def after_batch(self):
        if torch.isinf(self.loss) or torch.isnan(self.loss):
            raise CancelFitException
+
+
The line raise CancelFitException tells the training loop to interrupt training at this point. The training loop catches this exception and does not run any further training or validation. The callback control flow exceptions available are:
+
+
CancelBatchException:: Skip the rest of this batch and go to after_batch.
+
CancelTrainException:: Skip the rest of the training part of the epoch and go to after_train.
+
CancelValidException:: Skip the rest of the validation part of the epoch and go to after_validate.
+
CancelEpochException:: Skip the rest of this epoch and go to after_epoch.
+
CancelFitException:: Interrupt training and go to after_fit.
+
+
You can detect if one of those exceptions has occurred and add code that executes right after with the following events:
+
+
after_cancel_batch:: Reached immediately after a CancelBatchException before proceeding to after_batch
+
after_cancel_train:: Reached immediately after a CancelTrainException before proceeding to after_train
+
after_cancel_valid:: Reached immediately after a CancelValidException before proceeding to after_validate
+
after_cancel_epoch:: Reached immediately after a CancelEpochException before proceeding to after_epoch
+
after_cancel_fit:: Reached immediately after a CancelFitException before proceeding to after_fit
+
+
Sometimes, callbacks need to be called in a particular order. For example, in the case of TerminateOnNaNCallback, it’s important that Recorder runs its after_batch after this callback, to avoid registering a NaN loss. You can specify run_before (this callback must run before …) or run_after (this callback must run after …) in your callback to ensure the ordering that you need.
+
+
+
+
16.8 Conclusion
+
In this chapter we took a close look at the training loop, exploring different variants of SGD and why they can be more powerful. At the time of writing, developing new optimizers is a very active area of research, so by the time you read this chapter there may be an addendum on the book’s website that presents new variants. Be sure to check out how our general optimizer framework can help you implement new optimizers very quickly.
+
We also examined the powerful callback system that allows you to customize every bit of the training loop by enabling you to inspect and modify any parameter you like between each step.
+
+
+
16.9 Questionnaire
+
+
What is the equation for a step of SGD, in math or code (as you prefer)?
+
What do we pass to vision_learner to use a non-default optimizer?
+
What are optimizer callbacks?
+
What does zero_grad do in an optimizer?
+
What does step do in an optimizer? How is it implemented in the general optimizer?
+
Rewrite sgd_cb to use the += operator, instead of add_.
+
What is “momentum”? Write out the equation.
+
What’s a physical analogy for momentum? How does it apply in our model training settings?
+
What does a bigger value for momentum do to the gradients?
+
What are the default values of momentum for 1cycle training?
+
What is RMSProp? Write out the equation.
+
What do the squared values of the gradients indicate?
+
How does Adam differ from momentum and RMSProp?
+
Write out the equation for Adam.
+
Calculate the values of unbias_avg and w.avg for a few batches of dummy values.
+
What’s the impact of having a high eps in Adam?
+
Read through the optimizer notebook in fastai’s repo, and execute it.
+
In what situations do dynamic learning rate methods like Adam change the behavior of weight decay?
+
What are the four steps of a training loop?
+
Why is using callbacks better than writing a new training loop for each tweak you want to add?
+
What aspects of the design of fastai’s callback system make it as flexible as copying and pasting bits of code?
+
How can you get the list of events available to you when writing a callback?
+
Write the ModelResetter callback (without peeking).
+
How can you access the necessary attributes of the training loop inside a callback? When can you use or not use the shortcuts that go with them?
+
How can a callback influence the control flow of the training loop?
+
Write the TerminateOnNaN callback (without peeking, if possible).
+
How do you make sure your callback runs after or before another callback?
+
+
+
Further Research
+
+
Look up the “Rectified Adam” paper, implement it using the general optimizer framework, and try it out. Search for other recent optimizers that work well in practice, and pick one to implement.
+
Look at the mixed-precision callback alongside the fastai documentation. Try to understand what each event and line of code does.
+
Implement your own version of the learning rate finder from scratch. Compare it with fastai’s version.
+
Look at the source code of the callbacks that ship with fastai. See if you can find one that’s similar to what you’re looking to do, to get some inspiration.
+
+
+
+
+
16.10 Foundations of Deep Learning: Wrap up
+
Congratulations, you have made it to the end of the “foundations of deep learning” section of the book! You now understand how all of fastai’s applications and most important architectures are built, and the recommended ways to train them—and you have all the information you need to build these from scratch. While you probably won’t need to create your own training loop, or batchnorm layer, for instance, knowing what is going on behind the scenes is very helpful for debugging, profiling, and deploying your solutions.
+
Since you understand the foundations of fastai’s applications now, be sure to spend some time digging through the source notebooks and running and experimenting with parts of them. This will give you a better idea of how everything in fastai is developed.
+
In the next section, we will be looking even further under the covers: we’ll explore how the actual forward and backward passes of a neural network are done, and we will see what tools are at our disposal to get better performance. We will then continue with a project that brings together all the material in the book, which we will use to build a tool for interpreting convolutional neural networks. Last but not least, we’ll finish by building fastai’s Learner class from scratch.
In Chapter 1 we saw that deep learning can be used to get great results with natural language datasets. Our example relied on using a pretrained language model and fine-tuning it to classify reviews. That example highlighted a difference between transfer learning in NLP and computer vision: in general in NLP the pretrained model is trained on a different task.
+
What we call a language model is a model that has been trained to guess what the next word in a text is (having read the ones before). This kind of task is called self-supervised learning: we do not need to give labels to our model, just feed it lots and lots of texts. It has a process to automatically get labels from the data, and this task isn’t trivial: to properly guess the next word in a sentence, the model will have to develop an understanding of the English (or other) language. Self-supervised learning can also be used in other domains; for instance, see “Self-Supervised Learning and Computer Vision” for an introduction to vision applications. Self-supervised learning is not usually used for the model that is trained directly, but instead is used for pretraining a model used for transfer learning.
+
+
jargon: Self-supervised learning: Training a model using labels that are embedded in the independent variable, rather than requiring external labels. For instance, training a model to predict the next word in a text.
+
+
The language model we used in Chapter 1 to classify IMDb reviews was pretrained on Wikipedia. We got great results by directly fine-tuning this language model to a movie review classifier, but with one extra step, we can do even better. The Wikipedia English is slightly different from the IMDb English, so instead of jumping directly to the classifier, we could fine-tune our pretrained language model to the IMDb corpus and then use that as the base for our classifier.
+
Even if our language model knows the basics of the language we are using in the task (e.g., our pretrained model is in English), it helps to get used to the style of the corpus we are targeting. It may be more informal language, or more technical, with new words to learn or different ways of composing sentences. In the case of the IMDb dataset, there will be lots of names of movie directors and actors, and often a less formal style of language than that seen in Wikipedia.
+
We already saw that with fastai, we can download a pretrained English language model and use it to get state-of-the-art results for NLP classification. (We expect pretrained models in many more languages to be available soon—they might well be available by the time you are reading this book, in fact.) So, why are we learning how to train a language model in detail?
+
One reason, of course, is that it is helpful to understand the foundations of the models that you are using. But there is another very practical reason, which is that you get even better results if you fine-tune the (sequence-based) language model prior to fine-tuning the classification model. For instance, for the IMDb sentiment analysis task, the dataset includes 50,000 additional movie reviews that do not have any positive or negative labels attached. Since there are 25,000 labeled reviews in the training set and 25,000 in the validation set, that makes 100,000 movie reviews altogether. We can use all of these reviews to fine-tune the pretrained language model, which was trained only on Wikipedia articles; this will result in a language model that is particularly good at predicting the next word of a movie review.
+
+
This is just a preview of this chapter. The rest of this chapter is not available here, but you can read the source notebook, which has the same content (but with less nice formatting).
We have seen what Tokenizer and Numericalize do to a collection of texts, and how they’re used inside the data block API, which handles those transforms for us directly using the TextBlock. But what if we want to only apply one of those transforms, either to see intermediate results or because we have already tokenized texts? More generally, what can we do when the data block API is not flexible enough to accommodate our particular use case? For this, we need to use fastai’s mid-level API for processing data. The data block API is built on top of that layer, so it will allow you to do everything the data block API does, and much much more.
+
+
11.1 Going Deeper into fastai’s Layered API
+
The fastai library is built on a layered API. At the very top layer there are applications that allow us to train a model in five lines of code, as we saw in Chapter 1. In the case of creating DataLoaders for a text classifier, for instance, we used the line:
+
from fastai.text.all import *

dls = TextDataLoaders.from_folder(untar_data(URLs.IMDB), valid='test')
+
The factory method TextDataLoaders.from_folder is very convenient when your data is arranged the exact same way as the IMDb dataset, but in practice, that often won’t be the case. The data block API offers more flexibility.
+
+
This is just a preview of this chapter. The rest of this chapter is not available here, but you can read the source notebook, which has the same content (but with less nice formatting).
We’re now ready to go deep… deep into deep learning! You already learned how to train a basic neural network, but how do you go from there to creating state-of-the-art models? In this part of the book we’re going to uncover all of the mysteries, starting with language models.
+
You saw in Chapter 10 how to fine-tune a pretrained language model to build a text classifier. In this chapter, we will explain to you what exactly is inside that model, and what an RNN is. First, let’s gather some data that will allow us to quickly prototype our various models.
+
+
12.1 The Data
+
Whenever we start working on a new problem, we always first try to think of the simplest dataset we can that will allow us to try out methods quickly and easily, and interpret the results. When we started working on language modeling a few years ago we didn’t find any datasets that would allow for quick prototyping, so we made one. We call it Human Numbers, and it simply contains the first 10,000 numbers written out in English.
+
+
j: One of the most common practical mistakes I see even amongst highly experienced practitioners is failing to use appropriate datasets at appropriate times during the analysis process. In particular, most people tend to start with datasets that are too big and too complicated.
+
+
+
This is just a preview of this chapter. The rest of this chapter is not available here, but you can read the source notebook, which has the same content (but with less nice formatting).
We are now in the exciting position that we can fully understand the architectures that we have been using for our state-of-the-art models for computer vision, natural language processing, and tabular analysis. In this chapter, we’re going to fill in all the missing details on how fastai’s application models work and show you how to build the models they use.
+
We will also go back to the custom data preprocessing pipeline we saw in Chapter 11 for Siamese networks and show you how you can use the components in the fastai library to build custom pretrained models for new tasks.
+
We’ll start with computer vision.
+
+
15.1 Computer Vision
+
For computer vision applications we use the functions vision_learner and unet_learner to build our models, depending on the task. In this section we’ll explore how to build the Learner objects we used in Parts 1 and 2 of this book.
+
+
vision_learner
+
Let’s take a look at what happens when we use the vision_learner function. We begin by passing this function an architecture to use for the body of the network. Most of the time we use a ResNet, which you already know how to create, so we don’t need to delve into that any further. Pretrained weights are downloaded as required and loaded into the ResNet.
+
Then, for transfer learning, the network needs to be cut. This refers to slicing off the final layer, which is only responsible for ImageNet-specific categorization. In fact, we do not slice off only this layer, but everything from the adaptive average pooling layer onwards. The reason for this will become clear in just a moment. Since different architectures might use different types of pooling layers, or even completely different kinds of heads, we don’t just search for the adaptive pooling layer to decide where to cut the pretrained model. Instead, we have a dictionary of information that is used for each model to determine where its body ends, and its head starts.
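To make the idea concrete, here’s a hypothetical sketch of that cut written in plain PyTorch (fastai itself relies on its per-architecture metadata rather than hard-coded slicing):

from torchvision.models import resnet34
import torch.nn as nn

# Illustration only: keep everything up to, but not including, the adaptive
# average pooling layer and the final linear layer; this is the "body".
pretrained = resnet34(pretrained=True)  # newer torchvision prefers weights=...
body = nn.Sequential(*list(pretrained.children())[:-2])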
+
+
This is just a preview of this chapter. The rest of this chapter is not available here, but you can read the source notebook, which has the same content (but with less nice formatting).
Now that we know how to build up pretty much anything from scratch, let’s use that knowledge to create entirely new (and very useful!) functionality: the class activation map. It gives us some insight into why a CNN made the predictions it did.
+
In the process, we’ll learn about one handy feature of PyTorch we haven’t seen before, the hook, and we’ll apply many of the concepts introduced in the rest of the book. If you want to really test out your understanding of the material in this book, after you’ve finished this chapter, try putting it aside and recreating the ideas here yourself from scratch (no peeking!).
+
+
18.1 CAM and Hooks
+
The class activation map (CAM) was introduced by Bolei Zhou et al. in “Learning Deep Features for Discriminative Localization”. It uses the output of the last convolutional layer (just before the average pooling layer) together with the predictions to give us a heatmap visualization of why the model made its decision. This is a useful tool for interpretation.
+
More precisely, at each position of our final convolutional layer, we have as many filters as in the last linear layer. We can therefore compute the dot product of those activations with the final weights to get, for each location on our feature map, the score of the feature that was used to make a decision.
+
We’re going to need a way to get access to the activations inside the model while it’s training. In PyTorch this can be done with a hook. Hooks are PyTorch’s equivalent of fastai’s callbacks. However, rather than allowing you to inject code into the training loop like a fastai Learner callback, hooks allow you to inject code into the forward and backward calculations themselves. We can attach a hook to any layer of the model, and it will be executed when we compute the outputs (forward hook) or during backpropagation (backward hook). A forward hook is a function that takes three things—a module, its input, and its output—and it can perform any behavior you want. (fastai also provides a handy HookCallback that we won’t cover here, but take a look at the fastai docs; it makes working with hooks a little easier.)
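For instance, a minimal forward hook that stores a layer’s activations might look like this (model, layer4, and the variable names here are just for illustration, assuming a ResNet-style model):

stored = []
def hook_fn(module, inputs, output):
    # Save a detached copy of this layer's output every time the model runs.
    stored.append(output.detach())

handle = model.layer4.register_forward_hook(hook_fn)  # attach to a chosen layer
# ...run a forward pass here, then clean up:
handle.remove()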
+
+
This is just a preview of this chapter. The rest of this chapter is not available here, but you can read the source notebook, which has the same content (but with less nice formatting).
This final chapter (other than the conclusion and the online chapters) is going to look a bit different. It contains far more code and far less prose than the previous chapters. We will introduce new Python keywords and libraries without discussing them. This chapter is meant to be the start of a significant research project for you. You see, we are going to implement many of the key pieces of the fastai and PyTorch APIs from scratch, building on nothing other than the components that we developed in Chapter 17! The key goal here is to end up with your own Learner class, and some callbacks—enough to be able to train a model on Imagenette, including examples of each of the key techniques we’ve studied. On the way to building Learner, we will create our own version of Module, Parameter, and parallel DataLoader so you have a very good idea of what those PyTorch classes do.
+
The end-of-chapter questionnaire is particularly important for this chapter. This is where we will be pointing you in the many interesting directions that you could take, using this chapter as your starting point. We suggest that you follow along with this chapter on your computer, and do lots of experiments, web searches, and whatever else you need to understand what’s going on. You’ve built up the skills and expertise to do this in the rest of this book, so we think you are going to do great!
+
Let’s begin by gathering (manually) some data.
+
+
This is just a preview of this chapter. The rest of this chapter is not available here, but you can read the source notebook, which has the same content (but with less nice formatting).
The six lines of code we saw in Chapter 1 are just one small part of the process of using deep learning in practice. In this chapter, we’re going to use a computer vision example to look at the end-to-end process of creating a deep learning application. More specifically, we’re going to build a bear classifier! In the process, we’ll discuss the capabilities and constraints of deep learning, explore how to create datasets, look at possible gotchas when using deep learning in practice, and more. Many of the key points will apply equally well to other deep learning problems, such as those in Chapter 1. If you work through a problem similar in key respects to our example problems, we expect you to get excellent results with little code, quickly.
+
Let’s start with how you should frame your problem.
+
+
2.1 The Practice of Deep Learning
+
We’ve seen that deep learning can solve a lot of challenging problems quickly and with little code. As a beginner, there’s a sweet spot of problems that are similar enough to our example problems that you can very quickly get extremely useful results. However, deep learning isn’t magic! The same 6 lines of code won’t work for every problem anyone can think of today. Underestimating the constraints and overestimating the capabilities of deep learning may lead to frustratingly poor results, at least until you gain some experience and can solve the problems that arise. Conversely, overestimating the constraints and underestimating the capabilities of deep learning may mean you do not attempt a solvable problem because you talk yourself out of it.
+
We often talk to people who underestimate both the constraints and the capabilities of deep learning. Both of these can be problems: underestimating the capabilities means that you might not even try things that could be very beneficial, and underestimating the constraints might mean that you fail to consider and react to important issues.
+
The best thing to do is to keep an open mind. If you remain open to the possibility that deep learning might solve part of your problem with less data or complexity than you expect, then it is possible to design a process where you can find the specific capabilities and constraints related to your particular problem as you work through the process. This doesn’t mean making any risky bets — we will show you how you can gradually roll out models so that they don’t create significant risks, and can even backtest them prior to putting them in production.
+
+
This is just a preview of this chapter. The rest of this chapter is not available here, but you can read the source notebook, which has the same content (but with less nice formatting).
Congratulations! You’ve made it! If you have worked through all of the notebooks to this point, then you have joined the small, but growing group of people that are able to harness the power of deep learning to solve real problems. You may not feel that way yet—in fact you probably don’t. We have seen again and again that students that complete the fast.ai courses dramatically underestimate how effective they are as deep learning practitioners. We’ve also seen that these people are often underestimated by others with a classic academic background. So if you are to rise above your own expectations and the expectations of others, what you do next, after closing this book, is even more important than what you’ve done to get to this point.
+
The most important thing is to keep the momentum going. In fact, as you know from your study of optimizers, momentum is something that can build upon itself! So think about what you can do now to maintain and accelerate your deep learning journey. Figure 20.1 can give you a few ideas.
+
+
+
+
+
This is just a preview of this chapter. The rest of this chapter is not available here, but you can read the source notebook, which has the same content (but with less nice formatting).
This chapter was co-authored by Dr. Rachel Thomas, the cofounder of fast.ai, and founding director of the Center for Applied Data Ethics at the University of San Francisco. It largely follows a subset of the syllabus she developed for the Introduction to Data Ethics course.
+
+
+
+
As we discussed in Chapters 1 and 2, sometimes machine learning models can go wrong. They can have bugs. They can be presented with data that they haven’t seen before, and behave in ways we don’t expect. Or they could work exactly as designed, but be used for something that we would much prefer they were never, ever used for.
+
Because deep learning is such a powerful tool and can be used for so many things, it becomes particularly important that we consider the consequences of our choices. The philosophical study of ethics is the study of right and wrong, including how we can define those terms, recognize right and wrong actions, and understand the connection between actions and consequences. The field of data ethics has been around for a long time, and there are many academics focused on this field. It is being used to help define policy in many jurisdictions; it is being used in companies big and small to consider how best to ensure good societal outcomes from product development; and it is being used by researchers who want to make sure that the work they are doing is used for good, and not for bad.
+
As a deep learning practitioner, therefore, it is likely that at some point you are going to be put in a situation where you need to consider data ethics. So what is data ethics? It’s a subfield of ethics, so let’s start there.
+
+
J: At university, philosophy of ethics was my main thing (it would have been the topic of my thesis, if I’d finished it, instead of dropping out to join the real world). Based on the years I spent studying ethics, I can tell you this: no one really agrees on what right and wrong are, whether they exist, how to spot them, which people are good, and which bad, or pretty much anything else. So don’t expect too much from the theory! We’re going to focus on examples and thought starters here, not theory.
+
+
In answering the question “What Is Ethics”, The Markkula Center for Applied Ethics says that the term refers to:
+
+
Well-founded standards of right and wrong that prescribe what humans ought to do
+
The study and development of one’s ethical standards.
+
+
There is no list of right answers. There is no list of dos and don’ts. Ethics is complicated, and context-dependent. It involves the perspectives of many stakeholders. Ethics is a muscle that you have to develop and practice. In this chapter, our goal is to provide some signposts to help you on that journey.
+
Spotting ethical issues is best done as part of a collaborative team. This is the only way you can really incorporate different perspectives. Different people’s backgrounds will help them to see things that may not be obvious to you. Working with a team is helpful for many “muscle-building” activities, including this one.
+
This chapter is certainly not the only part of the book where we talk about data ethics, but it’s good to have a place where we focus on it for a while. To get oriented, it’s perhaps easiest to look at a few examples. So, we picked out three that we think illustrate effectively some of the key topics.
+
+
+
3.1 Key Examples for Data Ethics
+
We are going to start with three specific examples that illustrate three common ethical issues in tech:
+
+
Recourse processes—Arkansas’s buggy healthcare algorithms left patients stranded.
+
Feedback loops—YouTube’s recommendation system helped unleash a conspiracy theory boom.
+
Bias—When a traditionally African-American name is searched for on Google, it displays ads for criminal background checks.
+
+
In fact, for every concept that we introduce in this chapter, we are going to provide at least one specific example. For each one, think about what you could have done in this situation, and what kinds of obstructions there might have been to you getting that done. How would you deal with them? What would you look out for?
+
+
Bugs and Recourse: Buggy Algorithm Used for Healthcare Benefits
+
The Verge investigated software used in over half of the US states to determine how much healthcare people receive, and documented their findings in the article “What Happens When an Algorithm Cuts Your Healthcare”. After implementation of the algorithm in Arkansas, hundreds of people (many with severe disabilities) had their healthcare drastically cut. For instance, Tammy Dobbs, a woman with cerebral palsy who needs an aid to help her to get out of bed, to go to the bathroom, to get food, and more, had her hours of help suddenly reduced by 20 hours a week. She couldn’t get any explanation for why her healthcare was cut. Eventually, a court case revealed that there were mistakes in the software implementation of the algorithm, negatively impacting people with diabetes or cerebral palsy. However, Dobbs and many other people reliant on these healthcare benefits live in fear that their benefits could again be cut suddenly and inexplicably.
+
+
+
Feedback Loops: YouTube’s Recommendation System
+
Feedback loops can occur when your model is controlling the next round of data you get. The data that is returned quickly becomes flawed by the software itself.
+
For instance, YouTube has 1.9 billion users, who watch over 1 billion hours of YouTube videos a day. Its recommendation algorithm (built by Google), which was designed to optimize watch time, is responsible for around 70% of the content that is watched. But there was a problem: it led to out-of-control feedback loops, leading the New York Times to run the headline “YouTube Unleashed a Conspiracy Theory Boom. Can It Be Contained?”. Ostensibly recommendation systems are predicting what content people will like, but they also have a lot of power in determining what content people even see.
+
+
+
Bias: Professor Latanya Sweeney “Arrested”
+
Dr. Latanya Sweeney is a professor at Harvard and director of the university’s data privacy lab. In the paper “Discrimination in Online Ad Delivery” she describes her discovery that Googling her name resulted in advertisements saying “Latanya Sweeney, arrested?” even though she is the only known Latanya Sweeney and has never been arrested. However, when she Googled other names, such as “Kirsten Lindquist,” she got more neutral ads, even though Kirsten Lindquist has been arrested three times.
+
+
Being a computer scientist, she studied this systematically, and looked at over 2,000 names. She found a clear pattern where historically Black names received advertisements suggesting that the person had a criminal record, whereas white names had more neutral advertisements.
+
This is an example of bias. It can make a big difference to people’s lives—for instance, if a job applicant is Googled it may appear that they have a criminal record when they do not.
+
+
+
Why Does This Matter?
+
One very natural reaction to considering these issues is: “So what? What’s that got to do with me? I’m a data scientist, not a politician. I’m not one of the senior executives at my company who make the decisions about what we do. I’m just trying to build the most predictive model I can.”
+
These are very reasonable questions. But we’re going to try to convince you that the answer is that everybody who is training models absolutely needs to consider how their models will be used, and consider how to best ensure that they are used as positively as possible. There are things you can do. And if you don’t do them, then things can go pretty badly.
+
One particularly hideous example of what happens when technologists focus on technology at all costs is the story of IBM and Nazi Germany. In 2001, a Swiss judge ruled that it was not unreasonable “to deduce that IBM’s technical assistance facilitated the tasks of the Nazis in the commission of their crimes against humanity, acts also involving accountancy and classification by IBM machines and utilized in the concentration camps themselves.”
+
IBM, you see, supplied the Nazis with data tabulation products necessary to track the extermination of Jews and other groups on a massive scale. This was driven from the top of the company, with marketing to Hitler and his leadership team. Company President Thomas Watson personally approved the 1939 release of special IBM alphabetizing machines to help organize the deportation of Polish Jews. Adolf Hitler (far left) was photographed meeting with IBM CEO Tom Watson Sr. (second from left), shortly before Hitler awarded Watson a special “Service to the Reich” medal in 1937.
+
+
But this was not an isolated incident—the organization’s involvement was extensive. IBM and its subsidiaries provided regular training and maintenance onsite at the concentration camps: printing off cards, configuring machines, and repairing them as they broke frequently. IBM set up categorizations on its punch card system for the way that each person was killed, which group they were assigned to, and the logistical information necessary to track them through the vast Holocaust system. IBM’s code for Jews in the concentration camps was 8: some 6,000,000 were killed. Its code for Romanis was 12 (they were labeled by the Nazis as “asocials,” with over 300,000 killed in the Zigeunerlager, or “Gypsy camp”). General executions were coded as 4, death in the gas chambers as 6.
+
+
Of course, the project managers and engineers and technicians involved were just living their ordinary lives. Caring for their families, going to the church on Sunday, doing their jobs the best they could. Following orders. The marketers were just doing what they could to meet their business development goals. As Edwin Black, author of IBM and the Holocaust (Dialog Press) observed: “To the blind technocrat, the means were more important than the ends. The destruction of the Jewish people became even less important because the invigorating nature of IBM’s technical achievement was only heightened by the fantastical profits to be made at a time when bread lines stretched across the world.”
+
Step back for a moment and consider: How would you feel if you discovered that you had been part of a system that ended up hurting society? Would you be open to finding out? How can you help make sure this doesn’t happen? We have described the most extreme situation here, but there are many negative societal consequences linked to AI and machine learning being observed today, some of which we’ll describe in this chapter.
+
It’s not just a moral burden, either. Sometimes technologists pay very directly for their actions. For instance, the first person who was jailed as a result of the Volkswagen scandal, where the car company was revealed to have cheated on its diesel emissions tests, was not the manager that oversaw the project, or an executive at the helm of the company. It was one of the engineers, James Liang, who just did what he was told.
+
Of course, it’s not all bad—if a project you are involved in turns out to make a huge positive impact on even one person, this is going to make you feel pretty great!
+
Okay, so hopefully we have convinced you that you ought to care. But what should you do? As data scientists, we’re naturally inclined to focus on making our models better by optimizing some metric or other. But optimizing that metric may not actually lead to better outcomes. And even if it does help create better outcomes, it almost certainly won’t be the only thing that matters. Consider the pipeline of steps that occurs between the development of a model or an algorithm by a researcher or practitioner, and the point at which this work is actually used to make some decision. This entire pipeline needs to be considered as a whole if we’re to have a hope of getting the kinds of outcomes we want.
+
Normally there is a very long chain from one end to the other. This is especially true if you are a researcher, where you might not even know if your research will ever get used for anything, or if you’re involved in data collection, which is even earlier in the pipeline. But no one is better placed to inform everyone involved in this chain about the capabilities, constraints, and details of your work than you are. Although there’s no “silver bullet” that can ensure your work is used the right way, by getting involved in the process, and asking the right questions, you can at the very least ensure that the right issues are being considered.
+
Sometimes, the right response to being asked to do a piece of work is to just say “no.” Often, however, the response we hear is, “If I don’t do it, someone else will.” But consider this: if you’ve been picked for the job, you’re the best person they’ve found to do it—so if you don’t do it, the best person isn’t working on that project. If the first five people they ask all say no too, even better!
+
+
+
+
3.2 Integrating Machine Learning with Product Design
+
Presumably the reason you’re doing this work is because you hope it will be used for something. Otherwise, you’re just wasting your time. So, let’s start with the assumption that your work will end up somewhere. Now, as you are collecting your data and developing your model, you are making lots of decisions. What level of aggregation will you store your data at? What loss function should you use? What validation and training sets should you use? Should you focus on simplicity of implementation, speed of inference, or accuracy of the model? How will your model handle out-of-domain data items? Can it be fine-tuned, or must it be retrained from scratch over time?
+
These are not just algorithm questions. They are data product design questions. But the product managers, executives, judges, journalists, doctors… whoever ends up developing and using the system of which your model is a part will not be well-placed to understand the decisions that you made, let alone change them.
+
For instance, two studies found that Amazon’s facial recognition software produced inaccurate and racially biased results. Amazon claimed that the researchers should have changed the default parameters, without explaining how this would have changed the biased results. Furthermore, it turned out that Amazon was not instructing police departments that used its software to do this either. There was, presumably, a big distance between the researchers that developed these algorithms and the Amazon documentation staff that wrote the guidelines provided to the police. A lack of tight integration led to serious problems for society at large, the police, and Amazon themselves. It turned out that their system erroneously matched 28 members of Congress to criminal mugshots! (And the Congresspeople wrongly matched to criminal mugshots were disproportionately people of color, as seen in <>.)
+
+
Data scientists need to be part of a cross-disciplinary team. And researchers need to work closely with the kinds of people who will end up using their research. Better still is if the domain experts themselves have learned enough to be able to train and debug some models themselves—hopefully there are a few of you reading this book right now!
+
The modern workplace is a very specialized place. Everybody tends to have well-defined jobs to perform. Especially in large companies, it can be hard to know what all the pieces of the puzzle are. Sometimes companies even intentionally obscure the overall project goals that are being worked on, if they know that their employees are not going to like the answers. This is sometimes done by compartmentalizing pieces as much as possible.
+
In other words, we’re not saying that any of this is easy. It’s hard. It’s really hard. We all have to do our best. And we have often seen that the people who do get involved in the higher-level context of these projects, and attempt to develop cross-disciplinary capabilities and teams, become some of the most important and well rewarded members of their organizations. It’s the kind of work that tends to be highly appreciated by senior executives, even if it is sometimes considered rather uncomfortable by middle management.
+
+
+
3.3 Topics in Data Ethics
+
Data ethics is a big field, and we can’t cover everything. Instead, we’re going to pick a few topics that we think are particularly relevant:
+
+
The need for recourse and accountability
+
Feedback loops
+
Bias
+
Disinformation
+
+
Let’s look at each in turn.
+
+
Recourse and Accountability
+
In a complex system, it is easy for no one person to feel responsible for outcomes. While this is understandable, it does not lead to good results. In the earlier example of the Arkansas healthcare system in which a bug led to people with cerebral palsy losing access to needed care, the creator of the algorithm blamed government officials, and government officials blamed those who implemented the software. NYU professor Danah Boyd described this phenomenon: “Bureaucracy has often been used to shift or evade responsibility… Today’s algorithmic systems are extending bureaucracy.”
+
An additional reason why recourse is so necessary is because data often contains errors. Mechanisms for audits and error correction are crucial. A database of suspected gang members maintained by California law enforcement officials was found to be full of errors, including 42 babies who had been added to the database when they were less than 1 year old (28 of whom were marked as “admitting to being gang members”). In this case, there was no process in place for correcting mistakes or removing people once they’d been added. Another example is the US credit report system: in a large-scale study of credit reports by the Federal Trade Commission (FTC) in 2012, it was found that 26% of consumers had at least one mistake in their files, and 5% had errors that could be devastating. Yet, the process of getting such errors corrected is incredibly slow and opaque. When public radio reporter Bobby Allyn discovered that he was erroneously listed as having a firearms conviction, it took him “more than a dozen phone calls, the handiwork of a county court clerk and six weeks to solve the problem. And that was only after I contacted the company’s communications department as a journalist.”
+
As machine learning practitioners, we do not always think of it as our responsibility to understand how our algorithms end up being implemented in practice. But we need to.
+
+
+
Feedback Loops
+
We explained in <> how an algorithm can interact with its environment to create a feedback loop, making predictions that reinforce actions taken in the real world, which lead to predictions even more pronounced in the same direction. As an example, let’s again consider YouTube’s recommendation system. A couple of years ago the Google team talked about how they had introduced reinforcement learning (closely related to deep learning, but where your loss function represents a result potentially a long time after an action occurs) to improve YouTube’s recommendation system. They described how they used an algorithm that made recommendations such that watch time would be optimized.
+
However, human beings tend to be drawn to controversial content. This meant that videos about things like conspiracy theories started to get recommended more and more by the recommendation system. Furthermore, it turns out that the kinds of people that are interested in conspiracy theories are also people that watch a lot of online videos! So, they started to get drawn more and more toward YouTube. The increasing number of conspiracy theorists watching videos on YouTube resulted in the algorithm recommending more and more conspiracy theory and other extremist content, which resulted in more extremists watching videos on YouTube, and more people watching YouTube developing extremist views, which led to the algorithm recommending more extremist content… The system was spiraling out of control.
+
And this phenomenon was not contained to this particular type of content. In June 2019 the New York Times published an article on YouTube’s recommendation system, titled “On YouTube’s Digital Playground, an Open Gate for Pedophiles”. The article started with this chilling story:
+
+
: Christiane C. didn’t think anything of it when her 10-year-old daughter and a friend uploaded a video of themselves playing in a backyard pool… A few days later… the video had thousands of views. Before long, it had ticked up to 400,000… “I saw the video again and I got scared by the number of views,” Christiane said. She had reason to be. YouTube’s automated recommendation system… had begun showing the video to users who watched other videos of prepubescent, partially clothed children, a team of researchers has found.
+
+
+
: On its own, each video might be perfectly innocent, a home movie, say, made by a child. Any revealing frames are fleeting and appear accidental. But, grouped together, their shared features become unmistakable.
+
+
YouTube’s recommendation algorithm had begun curating playlists for pedophiles, picking out innocent home videos that happened to contain prepubescent, partially clothed children.
+
No one at Google planned to create a system that turned family videos into porn for pedophiles. So what happened?
+
Part of the problem here is the centrality of metrics in driving a financially important system. When an algorithm has a metric to optimize, as you have seen, it will do everything it can to optimize that number. This tends to lead to all kinds of edge cases, and humans interacting with a system will search for, find, and exploit these edge cases and feedback loops for their advantage.
+
There are signs that this is exactly what has happened with YouTube’s recommendation system. The Guardian ran an article called “How an ex-YouTube Insider Investigated its Secret Algorithm” about Guillaume Chaslot, an ex-YouTube engineer who created AlgoTransparency, which tracks these issues. Chaslot published the chart in <>, following the release of Robert Mueller’s “Report on the Investigation Into Russian Interference in the 2016 Presidential Election.”
+
+
Russia Today’s coverage of the Mueller report was an extreme outlier in terms of how many channels were recommending it. This suggests the possibility that Russia Today, a state-owned Russian media outlet, has been successful in gaming YouTube’s recommendation algorithm. Unfortunately, the lack of transparency of systems like this makes it hard to uncover the kinds of problems that we’re discussing.
+
One of our reviewers for this book, Aurélien Géron, led YouTube’s video classification team from 2013 to 2016 (well before the events discussed here). He pointed out that it’s not just feedback loops involving humans that are a problem. There can also be feedback loops without humans! He told us about an example from YouTube:
+
+
: One important signal to classify the main topic of a video is the channel it comes from. For example, a video uploaded to a cooking channel is very likely to be a cooking video. But how do we know what topic a channel is about? Well… in part by looking at the topics of the videos it contains! Do you see the loop? For example, many videos have a description which indicates what camera was used to shoot the video. As a result, some of these videos might get classified as videos about “photography.” If a channel has such a misclassified video, it might be classified as a “photography” channel, making it even more likely for future videos on this channel to be wrongly classified as “photography.” This could even lead to runaway virus-like classifications! One way to break this feedback loop is to classify videos with and without the channel signal. Then when classifying the channels, you can only use the classes obtained without the channel signal. This way, the feedback loop is broken.
+
+
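To make Géron’s two-pass idea concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the helper names (video_model_no_channel, video_model_full, classify_channel) are ours, and it only illustrates the general pattern for breaking such a loop, not YouTube’s actual system.
+
+
from collections import Counter
def classify_channel(videos, video_model_no_channel):
    # Label a channel using only per-video predictions that ignore the
    # channel signal, so the channel label cannot feed back into itself.
    video_topics = [video_model_no_channel(v) for v in videos]
    return Counter(video_topics).most_common(1)[0][0]
def classify_video(video, channel_topic, video_model_full):
    # The final per-video prediction may still use the channel topic, because
    # that topic was derived only from channel-independent predictions.
    return video_model_full(video, channel_topic=channel_topic)
+
+
The key design choice is that information flows only one way: videos inform the channel label, but the channel label used as a feature never depends on the predictions it will later influence.
+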
There are positive examples of people and organizations attempting to combat these problems. Evan Estola, lead machine learning engineer at Meetup, discussed the example of men expressing more interest than women in tech meetups. Taking gender into account could therefore cause Meetup’s algorithm to recommend fewer tech meetups to women, and as a result, fewer women would find out about and attend tech meetups, which could cause the algorithm to suggest even fewer tech meetups to women, and so on in a self-reinforcing feedback loop. So, Evan and his team made the ethical decision for their recommendation algorithm to not create such a feedback loop, by explicitly not using gender for that part of their model. It is encouraging to see a company not just unthinkingly optimize a metric, but consider its impact. According to Evan, “You need to decide which feature not to use in your algorithm… the most optimal algorithm is perhaps not the best one to launch into production.”
+
While Meetup chose to avoid such an outcome, Facebook provides an example of allowing a runaway feedback loop to run wild. Like YouTube, it tends to radicalize users interested in one conspiracy theory by introducing them to more. As Renee DiResta, a researcher on the proliferation of disinformation, writes:
+
+
: Once people join a single conspiracy-minded [Facebook] group, they are algorithmically routed to a plethora of others. Join an anti-vaccine group, and your suggestions will include anti-GMO, chemtrail watch, flat Earther (yes, really), and “curing cancer naturally” groups. Rather than pulling a user out of the rabbit hole, the recommendation engine pushes them further in.
+
+
It is extremely important to keep in mind that this kind of behavior can happen, and to either anticipate a feedback loop or take positive action to break it when you see the first signs of it in your own projects. Another thing to keep in mind is bias, which, as we discussed briefly in the previous chapter, can interact with feedback loops in very troublesome ways.
+
+
+
Bias
+
Discussions of bias online tend to get pretty confusing pretty fast. The word “bias” means so many different things. Statisticians often assume that when data ethicists talk about bias, they mean the statistical definition of the term. But they don’t. And they’re certainly not talking about the biases that appear in the weights and biases which are the parameters of your model!
+
What they are generally referring to are the kinds of bias catalogued by MIT researchers Harini Suresh and John Guttag in their paper “A Framework for Understanding Unintended Consequences of Machine Learning,” which describes six types of bias in machine learning. We’ll discuss the four types we’ve found most helpful in our own work (see the paper for details on the others).
+
+
Historical bias
+
Historical bias comes from the fact that people are biased, processes are biased, and society is biased. Suresh and Guttag say: “Historical bias is a fundamental, structural issue with the first step of the data generation process and can exist even given perfect sampling and feature selection.”
+
For instance, here are a few examples of historical race bias in the US, from the New York Times article “Racial Bias, Even When We Have Good Intentions” by the University of Chicago’s Sendhil Mullainathan:
+
+
When doctors were shown identical files, they were much less likely to recommend cardiac catheterization (a helpful procedure) to Black patients.
+
When bargaining for a used car, Black people were offered initial prices $700 higher and received far smaller concessions.
+
Responding to apartment rental ads on Craigslist with a Black name elicited fewer responses than with a white name.
+
An all-white jury was 16 percentage points more likely to convict a Black defendant than a white one, but when a jury had one Black member it convicted both at the same rate.
+
+
The COMPAS algorithm, widely used for sentencing and bail decisions in the US, is an example of an important algorithm that, when tested by ProPublica, showed clear racial bias in practice (<>).
+
+
Any dataset involving humans can have this kind of bias: medical data, sales data, housing data, political data, and so on. Because underlying bias is so pervasive, bias in datasets is very pervasive. Racial bias even turns up in computer vision, as shown in the example of autocategorized photos shared on Twitter by a Google Photos user shown in <>.
+
+
Yes, that is showing what you think it is: Google Photos classified a Black user’s photo with their friend as “gorillas”! This algorithmic misstep got a lot of attention in the media. “We’re appalled and genuinely sorry that this happened,” a company spokeswoman said. “There is still clearly a lot of work to do with automatic image labeling, and we’re looking at how we can prevent these types of mistakes from happening in the future.”
+
Unfortunately, fixing problems in machine learning systems when the input data has problems is hard. Google’s first attempt didn’t inspire confidence, as coverage by The Guardian suggested (<>).
+
+
These kinds of problems are certainly not limited to just Google. MIT researchers studied the most popular online computer vision APIs to see how accurate they were. But they didn’t just calculate a single accuracy number—instead, they looked at the accuracy across four different groups, as illustrated in <>.
+
+
IBM’s system, for instance, had a 34.7% error rate for darker females, versus 0.3% for lighter males—over 100 times more errors! Some people incorrectly reacted to these experiments by claiming that the difference was simply because darker skin is harder for computers to recognize. However, what actually happened was that, after the negative publicity that this result created, all of the companies in question dramatically improved their models for darker skin, such that one year later they were nearly as good as for lighter skin. So what this actually showed is that the developers failed to utilize datasets containing enough darker faces, or test their product with darker faces.
+
One of the MIT researchers, Joy Buolamwini, warned: “We have entered the age of automation overconfident yet underprepared. If we fail to make ethical and inclusive artificial intelligence, we risk losing gains made in civil rights and gender equity under the guise of machine neutrality.”
+
Part of the issue appears to be a systematic imbalance in the makeup of popular datasets used for training models. The abstract to the paper “No Classification Without Representation: Assessing Geodiversity Issues in Open Data Sets for the Developing World” by Shreya Shankar et al. states, “We analyze two large, publicly available image data sets to assess geo-diversity and find that these data sets appear to exhibit an observable amerocentric and eurocentric representation bias. Further, we analyze classifiers trained on these data sets to assess the impact of these training distributions and find strong differences in the relative performance on images from different locales.” <> shows one of the charts from the paper, showing the geographic makeup of what were at the time (and still are, as this book is being written) the two most important image datasets for training models.
+
+
The vast majority of the images are from the United States and other Western countries, leading to models trained on ImageNet performing worse on scenes from other countries and cultures. For instance, research found that such models are worse at identifying household items (such as soap, spices, sofas, or beds) from lower-income countries. <> shows an image from the paper, “Does Object Recognition Work for Everyone?” by Terrance DeVries et al. of Facebook AI Research that illustrates this point.
+
+
In this example, we can see that the lower-income soap example is a very long way away from being accurate, with every commercial image recognition service predicting “food” as the most likely answer!
+
In addition, as we will discuss shortly, the vast majority of AI researchers and developers are young white men. Most projects that we have seen do most user testing using friends and family of the immediate product development group. Given this, the kinds of problems we just discussed should not be surprising.
+
Similar historical bias is found in the texts used as data for natural language processing models. This crops up in downstream machine learning tasks in many ways. For instance, it was widely reported that until last year Google Translate showed systematic bias in how it translated the Turkish gender-neutral pronoun “o” into English: when applied to jobs which are often associated with males it used “he,” and when applied to jobs which are often associated with females it used “she” (<>).
+
+
We also see this kind of bias in online advertisements. For instance, a study in 2019 by Muhammad Ali et al. found that even when the person placing the ad does not intentionally discriminate, Facebook will show ads to very different audiences based on race and gender. Housing ads with the same text, but picturing either a white or a Black family, were shown to racially different audiences.
+
+
+
Measurement bias
+
In the paper “Does Machine Learning Automate Moral Hazard and Error” in American Economic Review, Sendhil Mullainathan and Ziad Obermeyer look at a model that tries to answer the question: using historical electronic health record (EHR) data, what factors are most predictive of stroke? These are the top predictors from the model:
+
+
Prior stroke
+
Cardiovascular disease
+
Accidental injury
+
Benign breast lump
+
Colonoscopy
+
Sinusitis
+
+
However, only the top two have anything to do with a stroke! Based on what we’ve studied so far, you can probably guess why. We haven’t really measured stroke, which occurs when a region of the brain is denied oxygen due to an interruption in the blood supply. What we’ve measured is who had symptoms, went to a doctor, got the appropriate tests, and received a diagnosis of stroke. Actually having a stroke is not the only thing correlated with this complete list—it’s also correlated with being the kind of person who actually goes to the doctor (which is influenced by who has access to healthcare, can afford their co-pay, doesn’t experience racial or gender-based medical discrimination, and more)! If you are likely to go to the doctor for an accidental injury, then you are likely to also go to the doctor when you are having a stroke.
+
This is an example of measurement bias. It occurs when our models make mistakes because we are measuring the wrong thing, or measuring it in the wrong way, or incorporating that measurement into the model inappropriately.
+
+
+
Aggregation bias
+
Aggregation bias occurs when models do not aggregate data in a way that incorporates all of the appropriate factors, or when a model does not include the necessary interaction terms, nonlinearities, or so forth. This can particularly occur in medical settings. For instance, the way diabetes is treated is often based on simple univariate statistics and studies involving small groups of heterogeneous people. Analysis of results is often done in a way that does not take account of different ethnicities or genders. However, it turns out that diabetes patients have different complications across ethnicities, and HbA1c levels (widely used to diagnose and monitor diabetes) differ in complex ways across ethnicities and genders. This can result in people being misdiagnosed or incorrectly treated because medical decisions are based on a model that does not include these important variables and interactions.
+
+
+
Representation bias
+
The abstract of the paper “Bias in Bios: A Case Study of Semantic Representation Bias in a High-Stakes Setting” by Maria De-Arteaga et al. notes that there is gender imbalance in occupations (e.g., females are more likely to be nurses, and males are more likely to be pastors), and says that: “differences in true positive rates between genders are correlated with existing gender imbalances in occupations, which may compound these imbalances.”
+
In other words, the researchers noticed that models predicting occupation did not only reflect the actual gender imbalance in the underlying population, but actually amplified it! This type of representation bias is quite common, particularly for simple models. When there is some clear, easy-to-see underlying relationship, a simple model will often simply assume that this relationship holds all the time. As <> from the paper shows, for occupations that had a higher percentage of females, the model tended to overestimate the prevalence of that occupation.
+
+
For example, in the training dataset 14.6% of surgeons were women, yet in the model predictions only 11.6% of the true positives were women. The model is thus amplifying the bias existing in the training set.
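+
As a rough worked example of that amplification, here is a tiny sketch using only the two numbers quoted above. The amplification function is our own illustration, not a metric from the paper.
+
+
def amplification(true_pct, predicted_pct):
    # Ratio of a group's share among the model's true positives to its share
    # in the training data: 1.0 would mean no distortion at all.
    return predicted_pct / true_pct
print(amplification(14.6, 11.6))  # ~0.79: women surgeons are under-predicted by roughly 21%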
+
Now that we’ve seen that those biases exist, what can we do to mitigate them?
+
+
+
+
Addressing different types of bias
+
Different types of bias require different approaches for mitigation. While gathering a more diverse dataset can address representation bias, this would not help with historical bias or measurement bias. All datasets contain bias. There is no such thing as a completely debiased dataset. Many researchers in the field have been converging on a set of proposals to enable better documentation of the decisions, context, and specifics about how and why a particular dataset was created, what scenarios it is appropriate to use in, and what the limitations are. This way, those using a particular dataset will not be caught off guard by its biases and limitations.
+
We often hear the question—“Humans are biased, so does algorithmic bias even matter?” This comes up so often, there must be some reasoning that makes sense to the people that ask it, but it doesn’t seem very logically sound to us! Independently of whether this is logically sound, it’s important to realize that algorithms (particularly machine learning algorithms!) and people are different. Consider these points about machine learning algorithms:
+
+
Machine learning can create feedback loops:: Small amounts of bias can rapidly increase exponentially due to feedback loops (see the toy simulation after this list).
+
Machine learning can amplify bias:: Human bias can lead to larger amounts of machine learning bias.
+
Algorithms & humans are used differently:: Human decision makers and algorithmic decision makers are not used in a plug-and-play interchangeable way in practice.
+
Technology is power:: And with that comes responsibility.
+
+
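A toy simulation can show how quickly this compounding happens. This is our own illustrative construction, not data from any study: we simply assume that whatever share of recommendations one type of content receives grows by 10% per round, because extra exposure produces extra engagement data for the next round of training.
+
+
share = 0.51  # one content type starts just slightly over-represented
for step in range(10):
    share = min(1.0, share * 1.1)  # exposure -> engagement -> more exposure
    print(f"after round {step + 1}: {share:.2f}")
+
+
After only a handful of rounds the small initial skew has grown into near-total dominance, which is the dynamic behind the YouTube and Meetup examples above.
+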
As the Arkansas healthcare example showed, machine learning is often implemented in practice not because it leads to better outcomes, but because it is cheaper and more efficient. Cathy O’Neil, in her book Weapons of Math Destruction (Crown), described the pattern of how the privileged are processed by people, whereas the poor are processed by algorithms. This is just one of a number of ways that algorithms are used differently than human decision makers. Others include:
+
+
People are more likely to assume algorithms are objective or error-free (even if they’re given the option of a human override).
+
Algorithms are more likely to be implemented with no appeals process in place.
+
Algorithms are often used at scale.
+
Algorithmic systems are cheap.
+
+
Even in the absence of bias, algorithms (and deep learning especially, since it is such an effective and scalable algorithm) can lead to negative societal problems, such as when used for disinformation.
+
+
+
Disinformation
+
Disinformation has a history stretching back hundreds or even thousands of years. It is not necessarily about getting someone to believe something false, but rather often used to sow disharmony and uncertainty, and to get people to give up on seeking the truth. Receiving conflicting accounts can lead people to assume that they can never know whom or what to trust.
+
Some people think disinformation is primarily about false information or fake news, but in reality, disinformation can often contain seeds of truth, or half-truths taken out of context. Ladislav Bittman was an intelligence officer in the USSR who later defected to the US and wrote some books in the 1970s and 1980s on the role of disinformation in Soviet propaganda operations. In The KGB and Soviet Disinformation (Pergamon) he wrote, “Most campaigns are a carefully designed mixture of facts, half-truths, exaggerations, and deliberate lies.”
+
In the US this has hit close to home in recent years, with the FBI detailing a massive disinformation campaign linked to Russia in the 2016 election. Understanding the disinformation that was used in this campaign is very educational. For instance, the FBI found that the Russian disinformation campaign often organized two separate fake “grass roots” protests, one for each side of an issue, and got them to protest at the same time! The Houston Chronicle reported on one of these odd events (<>).
+
+
: A group that called itself the “Heart of Texas” had organized it on social media—a protest, they said, against the “Islamization” of Texas. On one side of Travis Street, I found about 10 protesters. On the other side, I found around 50 counterprotesters. But I couldn’t find the rally organizers. No “Heart of Texas.” I thought that was odd, and mentioned it in the article: What kind of group is a no-show at its own event? Now I know why. Apparently, the rally’s organizers were in Saint Petersburg, Russia, at the time. “Heart of Texas” is one of the internet troll groups cited in Special Prosecutor Robert Mueller’s recent indictment of Russians attempting to tamper with the U.S. presidential election.
+
+
+
Disinformation often involves coordinated campaigns of inauthentic behavior. For instance, fraudulent accounts may try to make it seem like many people hold a particular viewpoint. While most of us like to think of ourselves as independent-minded, in reality we evolved to be influenced by others in our in-group, and in opposition to those in our out-group. Online discussions can influence our viewpoints, or alter the range of what we consider acceptable viewpoints. Humans are social animals, and as social animals we are extremely influenced by the people around us. Increasingly, radicalization occurs in online environments; influence is coming from people in the virtual space of online forums and social networks.
+
Disinformation through autogenerated text is a particularly significant issue, due to the greatly increased capability provided by deep learning. We discuss this issue in depth when we delve into creating language models, in <>.
+
One proposed approach is to develop some form of digital signature, to implement it in a seamless way, and to create norms that we should only trust content that has been verified. The head of the Allen Institute for AI, Oren Etzioni, wrote such a proposal in an article titled “How Will We Prevent AI-Based Forgery?”: “AI is poised to make high-fidelity forgery inexpensive and automated, leading to potentially disastrous consequences for democracy, security, and society. The specter of AI forgery means that we need to act to make digital signatures de rigueur as a means of authentication of digital content.”
+
While we can’t hope to discuss all the ethical issues that deep learning, and algorithms more generally, bring up, hopefully this brief introduction has been a useful starting point you can build on. We’ll now move on to the questions of how to identify ethical issues, and what to do about them.
+
+
+
+
3.4 Identifying and Addressing Ethical Issues
+
Mistakes happen. Finding out about them, and dealing with them, needs to be part of the design of any system that includes machine learning (and many other systems too). The issues raised within data ethics are often complex and interdisciplinary, but it is crucial that we work to address them.
+
So what can we do? This is a big topic, but a few steps towards addressing ethical issues are:
+
+
Analyze a project you are working on.
+
Implement processes at your company to find and address ethical risks.
+
Support good policy.
+
Increase diversity.
+
+
Let’s walk through each of these steps, starting with analyzing a project you are working on.
+
+
Analyze a Project You Are Working On
+
It’s easy to miss important issues when considering ethical implications of your work. One thing that helps enormously is simply asking the right questions. Rachel Thomas recommends considering the following questions throughout the development of a data project:
+
+
Should we even be doing this?
+
What bias is in the data?
+
Can the code and data be audited?
+
What are the error rates for different sub-groups?
+
What is the accuracy of a simple rule-based alternative?
+
What processes are in place to handle appeals or mistakes?
+
How diverse is the team that built it?
+
+
These questions may be able to help you identify outstanding issues, and possible alternatives that are easier to understand and control. In addition to asking the right questions, it’s also important to consider practices and processes to implement.
+
One thing to consider at this stage is what data you are collecting and storing. Data often ends up being used for different purposes than what it was originally collected for. For instance, IBM began selling to Nazi Germany well before the Holocaust, including helping with Germany’s 1933 census, conducted by Adolf Hitler’s regime, which was effective at identifying far more Jewish people than had previously been recognized in Germany. Similarly, US census data was used to round up Japanese-Americans (who were US citizens) for internment during World War II. It is important to recognize how data and images collected can be weaponized later. Columbia professor Tim Wu wrote that “You must assume that any personal data that Facebook or Android keeps are data that governments around the world will try to get or that thieves will try to steal.”
+
+
+
Processes to Implement
+
The Markkula Center has released An Ethical Toolkit for Engineering/Design Practice that includes some concrete practices to implement at your company, including regularly scheduled sweeps to proactively search for ethical risks (in a manner similar to cybersecurity penetration testing), expanding the ethical circle to include the perspectives of a variety of stakeholders, and considering the terrible people (how could bad actors abuse, steal, misinterpret, hack, destroy, or weaponize what you are building?).
+
Even if you don’t have a diverse team, you can still try to proactively include the perspectives of a wider group, considering questions such as these (provided by the Markkula Center):
+
+
Whose interests, desires, skills, experiences, and values have we simply assumed, rather than actually consulted?
+
Who are all the stakeholders who will be directly affected by our product? How have their interests been protected? How do we know what their interests really are—have we asked?
+
Who/which groups and individuals will be indirectly affected in significant ways?
+
Who might use this product that we didn’t expect to use it, or for purposes we didn’t initially intend?
+
+
+
Ethical lenses
+
Another useful resource from the Markkula Center is its Conceptual Frameworks in Technology and Engineering Practice. This considers how different foundational ethical lenses can help identify concrete issues, and lays out the following approaches and key questions:
+
+
The rights approach:: Which option best respects the rights of all who have a stake?
+
The justice approach:: Which option treats people equally or proportionately?
+
The utilitarian approach:: Which option will produce the most good and do the least harm?
+
The common good approach:: Which option best serves the community as a whole, not just some members?
+
The virtue approach:: Which option leads me to act as the sort of person I want to be?
+
+
Markkula’s recommendations include a deeper dive into each of these perspectives, including looking at a project through the lenses of its consequences:
+
+
Who will be directly affected by this project? Who will be indirectly affected?
+
Will the effects in aggregate likely create more good than harm, and what types of good and harm?
+
Are we thinking about all relevant types of harm/benefit (psychological, political, environmental, moral, cognitive, emotional, institutional, cultural)?
+
How might future generations be affected by this project?
+
Do the risks of harm from this project fall disproportionately on the least powerful in society? Will the benefits go disproportionately to the well-off?
+
Have we adequately considered “dual-use”?
+
+
The alternative lens to this is the deontological perspective, which focuses on basic concepts of right and wrong:
+
+
What rights of others and duties to others must we respect?
+
How might the dignity and autonomy of each stakeholder be impacted by this project?
+
What considerations of trust and of justice are relevant to this design/project?
+
Does this project involve any conflicting moral duties to others, or conflicting stakeholder rights? How can we prioritize these?
+
+
One of the best ways to help come up with complete and thoughtful answers to questions like these is to ensure that the people asking the questions are diverse.
+
+
+
+
The Power of Diversity
+
Currently, less than 12% of AI researchers are women, according to a study from Element AI. The statistics are similarly dire when it comes to race and age. When everybody on a team has similar backgrounds, they are likely to have similar blind spots around ethical risks. The Harvard Business Review (HBR) has published a number of studies showing the many benefits of diverse teams.
+
Diversity can lead to problems being identified earlier, and a wider range of solutions being considered. For instance, Tracy Chou was an early engineer at Quora. She wrote of her experiences, describing how she advocated internally for adding a feature that would allow trolls and other bad actors to be blocked. Chou recounts, “I was eager to work on the feature because I personally felt antagonized and abused on the site (gender isn’t an unlikely reason as to why)… But if I hadn’t had that personal perspective, it’s possible that the Quora team wouldn’t have prioritized building a block button so early in its existence.” Harassment often drives people from marginalized groups off online platforms, so this functionality has been important for maintaining the health of Quora’s community.
+
A crucial aspect to understand is that women leave the tech industry at over twice the rate that men do, according to the Harvard Business Review (41% of women working in tech leave, compared to 17% of men). An analysis of over 200 books, white papers, and articles found that the reason they leave is that “they’re treated unfairly; underpaid, less likely to be fast-tracked than their male colleagues, and unable to advance.”
+
Studies have confirmed a number of the factors that make it harder for women to advance in the workplace. Women receive more vague feedback and personality criticism in performance evaluations, whereas men receive actionable advice tied to business outcomes (which is more useful). Women frequently experience being excluded from more creative and innovative roles, and not receiving high-visibility “stretch” assignments that are helpful in getting promoted. One study found that men’s voices are perceived as more persuasive, fact-based, and logical than women’s voices, even when reading identical scripts.
+
Receiving mentorship has been statistically shown to help men advance, but not women. The reason behind this is that when women receive mentorship, it’s advice on how they should change and gain more self-knowledge. When men receive mentorship, it’s public endorsement of their authority. Guess which is more useful in getting promoted?
+
As long as qualified women keep dropping out of tech, teaching more girls to code will not solve the diversity issues plaguing the field. Diversity initiatives often end up focusing primarily on white women, even though women of color face many additional barriers. In interviews with 60 women of color who work in STEM research, 100% had experienced discrimination.
+
The hiring process is particularly broken in tech. One study indicative of the dysfunction comes from Triplebyte, a company that helps place software engineers in companies and conducts a standardized technical interview as part of this process. They have a fascinating dataset: the results of how over 300 engineers did on their exam, coupled with the results of how those engineers did during the interview process for a variety of companies. The number one finding from Triplebyte’s research is that “the types of programmers that each company looks for often have little to do with what the company needs or does. Rather, they reflect company culture and the backgrounds of the founders.”
+
This is a challenge for those trying to break into the world of deep learning, since most companies’ deep learning groups today were founded by academics. These groups tend to look for people “like them”—that is, people that can solve complex math problems and understand dense jargon. They don’t always know how to spot people who are actually good at solving real problems using deep learning.
+
This leaves a big opportunity for companies that are ready to look beyond status and pedigree, and focus on results!
+
+
+
Fairness, Accountability, and Transparency
+
The professional society for computer scientists, the ACM, runs a data ethics conference called the Conference on Fairness, Accountability, and Transparency. This conference used to go by the acronym FAT, but now uses the less objectionable FAccT. Microsoft has a group focused on “Fairness, Accountability, Transparency, and Ethics” (FATE). In this section, we’ll use “FAccT” to refer to the concepts of Fairness, Accountability, and Transparency.
+
FAccT is another lens that you may find useful in considering ethical issues. One useful resource for this is the free online book Fairness and Machine Learning: Limitations and Opportunities by Solon Barocas, Moritz Hardt, and Arvind Narayanan, which “gives a perspective on machine learning that treats fairness as a central concern rather than an afterthought.” It also warns, however, that it “is intentionally narrow in scope… A narrow framing of machine learning ethics might be tempting to technologists and businesses as a way to focus on technical interventions while sidestepping deeper questions about power and accountability. We caution against this temptation.” Rather than provide an overview of the FAccT approach to ethics (which is better done in books such as that one), our focus here will be on the limitations of this kind of narrow framing.
+
Consider, for instance, the abstract of the satirical paper “A Mulching Proposal: Analysing and Improving an Algorithmic System for Turning the Elderly into High-Nutrient Slurry” by Os Keyes et al., which applies the framework to a proposal no one would consider acceptable:
+
+
: The ethical implications of algorithmic systems have been much discussed in both HCI and the broader community of those interested in technology design, development and policy. In this paper, we explore the application of one prominent ethical framework - Fairness, Accountability, and Transparency - to a proposed algorithm that resolves various societal issues around food security and population aging. Using various standardised forms of algorithmic audit and evaluation, we drastically increase the algorithm’s adherence to the FAT framework, resulting in a more ethical and beneficent system. We discuss how this might serve as a guide to other researchers or practitioners looking to ensure better ethical outcomes from algorithmic systems in their line of work.
+
+
In this paper, the rather controversial proposal (“Turning the Elderly into High-Nutrient Slurry”) and the results (“drastically increase the algorithm’s adherence to the FAT framework, resulting in a more ethical and beneficent system”) are at odds… to say the least!
+
In philosophy, and especially philosophy of ethics, this is one of the most effective tools: first, come up with a process, definition, set of questions, etc., which is designed to resolve some problem. Then try to come up with an example where that apparent solution results in a proposal that no one would consider acceptable. This can then lead to a further refinement of the solution.
+
So far, we’ve focused on things that you and your organization can do. But sometimes individual or organizational action is not enough. Sometimes, governments also need to consider policy implications.
+
+
+
+
3.5 Role of Policy
+
We often talk to people who are eager for technical or design fixes to be a full solution to the kinds of problems that we’ve been discussing; for instance, a technical approach to debias data, or design guidelines for making technology less addictive. While such measures can be useful, they will not be sufficient to address the underlying problems that have led to our current state. For example, as long as it is incredibly profitable to create addictive technology, companies will continue to do so, regardless of whether this has the side effect of promoting conspiracy theories and polluting our information ecosystem. While individual designers may try to tweak product designs, we will not see substantial changes until the underlying profit incentives change.
+
+
The Effectiveness of Regulation
+
To look at what can cause companies to take concrete action, consider the following two examples of how Facebook has behaved. In 2018, a UN investigation found that Facebook had played a “determining role” in the ongoing genocide of the Rohingya, an ethnic minority in Myanmar described by UN Secretary-General Antonio Guterres as “one of, if not the, most discriminated people in the world.” Local activists had been warning Facebook executives that their platform was being used to spread hate speech and incite violence since as early as 2013. In 2015, they were warned that Facebook could play the same role in Myanmar that the radio broadcasts played during the Rwandan genocide (where a million people were killed). Yet, by the end of 2015, Facebook only employed four contractors that spoke Burmese. As one person close to the matter said, “That’s not 20/20 hindsight. The scale of this problem was significant and it was already apparent.” Zuckerberg promised during the congressional hearings to hire “dozens” to address the genocide in Myanmar (in 2018, years after the genocide had begun, including the destruction by fire of at least 288 villages in northern Rakhine state after August 2017).
+
This stands in stark contrast to Facebook quickly hiring 1,200 people in Germany to try to avoid expensive penalties (of up to 50 million euros) under a new German law against hate speech. Clearly, in this case, Facebook was more reactive to the threat of a financial penalty than to the systematic destruction of an ethnic minority.
+
In an essay about privacy, the technologist Maciej Ceglowski draws a parallel with the history of environmental regulation:
+
+
: This regulatory project has been so successful in the First World that we risk forgetting what life was like before it. Choking smog of the kind that today kills thousands in Jakarta and Delhi was https://en.wikipedia.org/wiki/Pea_soup_fog[once emblematic of London]. The Cuyahoga River in Ohio used to http://www.ohiohistorycentral.org/w/Cuyahoga_River_Fire[reliably catch fire]. In a particularly horrific example of unforeseen consequences, tetraethyl lead added to gasoline https://en.wikipedia.org/wiki/Lead%E2%80%93crime_hypothesis[raised violent crime rates] worldwide for fifty years. None of these harms could have been fixed by telling people to vote with their wallet, or carefully review the environmental policies of every company they gave their business to, or to stop using the technologies in question. It took coordinated, and sometimes highly technical, regulation across jurisdictional boundaries to fix them. In some cases, like the https://en.wikipedia.org/wiki/Montreal_Protocol[ban on commercial refrigerants] that depleted the ozone layer, that regulation required a worldwide consensus. We’re at the point where we need a similar shift in perspective in our privacy law.
+
+
+
+
Rights and Policy
+
Clean air and clean drinking water are public goods which are nearly impossible to protect through individual market decisions, but rather require coordinated regulatory action. Similarly, many of the harms resulting from unintended consequences of misuses of technology involve public goods, such as a polluted information environment or deteriorated ambient privacy. Too often privacy is framed as an individual right, yet there are societal impacts to widespread surveillance (which would still be the case even if it was possible for a few individuals to opt out).
+
Many of the issues we are seeing in tech are actually human rights issues, such as when a biased algorithm recommends that Black defendants have longer prison sentences, when particular job ads are only shown to young people, or when police use facial recognition to identify protesters. The appropriate venue to address human rights issues is typically through the law.
+
We need both regulatory and legal changes, and the ethical behavior of individuals. Individual behavior change can’t address misaligned profit incentives, externalities (where corporations reap large profits while offloading their costs and harms to the broader society), or systemic failures. However, the law will never cover all edge cases, and it is important that individual software developers and data scientists are equipped to make ethical decisions in practice.
+
+
+
Cars: A Historical Precedent
+
The problems we are facing are complex, and there are no simple solutions. This can be discouraging, but we find hope in considering other large challenges that people have tackled throughout history. One example is the movement to increase car safety, covered as a case study in “Datasheets for Datasets” by Timnit Gebru et al. and in the design podcast 99% Invisible. Early cars had no seatbelts, metal knobs on the dashboard that could lodge in people’s skulls during a crash, regular plate glass windows that shattered in dangerous ways, and non-collapsible steering columns that impaled drivers. However, car companies were incredibly resistant to even discussing the idea of safety as something they could help address, and the widespread belief was that cars are just the way they are, and that it was the people using them who caused problems.
+
It took consumer safety activists and advocates decades of work to even change the national conversation to consider that perhaps car companies had some responsibility which should be addressed through regulation. When the collapsible steering column was invented, it was not implemented for several years as there was no financial incentive to do so. Major car company General Motors hired private detectives to try to dig up dirt on consumer safety advocate Ralph Nader. Requirements for seatbelts, crash test dummies, and collapsible steering columns were major victories. It was only in 2011 that car companies were required to start using crash test dummies that would represent the average woman, and not just average men’s bodies; prior to this, women were 40% more likely to be injured in a car crash of the same impact compared to a man. This is a vivid example of the ways that bias, policy, and technology have important consequences.
+
+
+
+
3.6 Conclusion
+
Coming from a background of working with binary logic, the lack of clear answers in ethics can be frustrating at first. Yet, the implications of how our work impacts the world, including unintended consequences and the work becoming weaponized by bad actors, are some of the most important questions we can (and should!) consider. Even though there aren’t any easy answers, there are definite pitfalls to avoid and practices to follow to move toward more ethical behavior.
+
Many people (including us!) are looking for more satisfying, solid answers about how to address harmful impacts of technology. However, given the complex, far-reaching, and interdisciplinary nature of the problems we are facing, there are no simple solutions. Julia Angwin, former senior reporter at ProPublica who focuses on issues of algorithmic bias and surveillance (and one of the 2016 investigators of the COMPAS recidivism algorithm that helped spark the field of FAccT) said in a 2019 interview:
+
+
: I strongly believe that in order to solve a problem, you have to diagnose it, and that we’re still in the diagnosis phase of this. If you think about the turn of the century and industrialization, we had, I don’t know, 30 years of child labor, unlimited work hours, terrible working conditions, and it took a lot of journalist muckraking and advocacy to diagnose the problem and have some understanding of what it was, and then the activism to get laws changed. I feel like we’re in a second industrialization of data information… I see my role as trying to make as clear as possible what the downsides are, and diagnosing them really accurately so that they can be solvable. That’s hard work, and lots more people need to be doing it.
+
+
It’s reassuring that Angwin thinks we are largely still in the diagnosis phase: if your understanding of these problems feels incomplete, that is normal and natural. Nobody has a “cure” yet, although it is vital that we continue working to better understand and address the problems we are facing.
+
One of our reviewers for this book, Fred Monroe, used to work in hedge fund trading. He told us, after reading this chapter, that many of the issues discussed here (distribution of data being dramatically different than what a model was trained on, the impact of feedback loops on a model once deployed and at scale, and so forth) were also key issues for building profitable trading models. The kinds of things you need to do to consider societal consequences are going to have a lot of overlap with things you need to do to consider organizational, market, and customer consequences—so thinking carefully about ethics can also help you think carefully about how to make your data product successful more generally!
+
+
+
3.7 Questionnaire
+
+
Does ethics provide a list of “right answers”?
+
How can working with people of different backgrounds help when considering ethical questions?
+
What was the role of IBM in Nazi Germany? Why did the company participate as it did? Why did the workers participate?
+
What was the role of the first person jailed in the Volkswagen diesel scandal?
+
What was the problem with a database of suspected gang members maintained by California law enforcement officials?
+
Why did YouTube’s recommendation algorithm recommend videos of partially clothed children to pedophiles, even though no employee at Google had programmed this feature?
+
What are the problems with the centrality of metrics?
+
Why did Meetup.com not include gender in its recommendation system for tech meetups?
+
What are the six types of bias in machine learning, according to Suresh and Guttag?
+
Give two examples of historical race bias in the US.
+
How are machines and people different, in terms of their use for making decisions?
+
Is disinformation the same as “fake news”?
+
Why is disinformation through auto-generated text a particularly significant issue?
+
What are the five ethical lenses described by the Markkula Center?
+
Where is policy an appropriate tool for addressing data ethics issues?
+
+
+
Further Research:
+
+
Read the article “What Happens When an Algorithm Cuts Your Healthcare”. How could problems like this be avoided in the future?
+
Research to find out more about YouTube’s recommendation system and its societal impacts. Do you think recommendation systems must always have feedback loops with negative results? What approaches could Google take to avoid them? What about the government?
+
Read the paper “Discrimination in Online Ad Delivery”. Do you think Google should be considered responsible for what happened to Dr. Sweeney? What would be an appropriate response?
+
How can a cross-disciplinary team help avoid negative consequences?
+
Read the paper “Does Machine Learning Automate Moral Hazard and Error”. What actions do you think should be taken to deal with the issues identified in this paper?
+
Read the article “How Will We Prevent AI-Based Forgery?” Do you think Etzioni’s proposed approach could work? Why?
+
Complete the section “Analyze a Project You Are Working On” in this chapter.
+
Consider whether your team could be more diverse. If so, what approaches might help?
+
+
+
+
+
3.8 Deep Learning in Practice: That’s a Wrap!
+
Congratulations! You’ve made it to the end of the first section of the book. In this section we’ve tried to show you what deep learning can do, and how you can use it to create real applications and products. At this point, you will get a lot more out of the book if you spend some time trying out what you’ve learned. Perhaps you have already been doing this as you go along—in which case, great! If not, that’s no problem either… Now is a great time to start experimenting yourself.
+
If you haven’t been to the book’s website yet, head over there now. It’s really important that you get yourself set up to run the notebooks. Becoming an effective deep learning practitioner is all about practice, so you need to be training models. So, please go get the notebooks running now if you haven’t already! And also have a look on the website for any important updates or notices; deep learning changes fast, and we can’t change the words that are printed in this book, so the website is where you need to look to ensure you have the most up-to-date information.
+
Make sure that you have completed the following steps:
+
+
Connect to one of the GPU Jupyter servers recommended on the book’s website.
+
Run the first notebook yourself.
+
Upload an image that you find in the first notebook; then try a few different images of different kinds to see what happens.
+
Run the second notebook, collecting your own dataset based on image search queries that you come up with.
+
Think about how you can use deep learning to help you with your own projects, including what kinds of data you could use, what kinds of problems may come up, and how you might be able to mitigate these issues in practice.
+
+
In the next section of the book you will learn about how and why deep learning works, instead of just seeing how you can use it in practice. Understanding the how and why is important for both practitioners and researchers, because in this fairly new field nearly every project requires some level of customization and debugging. The better you understand the foundations of deep learning, the better your models will be. These foundations are less important for executives, product managers, and so forth (although still useful, so feel free to keep reading!), but they are critical for anybody who is actually training and deploying models themselves.
Now that you understand what deep learning is, what it’s for, and how to create and deploy a model, it’s time for us to go deeper! In an ideal world deep learning practitioners wouldn’t have to know every detail of how things work under the hood… But as yet, we don’t live in an ideal world. The truth is, to make your model really work, and work reliably, there are a lot of details you have to get right, and a lot of details that you have to check. This process requires being able to look inside your neural network as it trains, and as it makes predictions, find possible problems, and know how to fix them.
+
So, from here on in the book we are going to do a deep dive into the mechanics of deep learning. What is the architecture of a computer vision model, an NLP model, a tabular model, and so on? How do you create an architecture that matches the needs of your particular domain? How do you get the best possible results from the training process? How do you make things faster? What do you have to change as your datasets change?
+
We will start by repeating the same basic applications that we looked at in the first chapter, but we are going to do two things:
+
+
Make them better.
+
Apply them to a wider variety of types of data.
+
+
In order to do these two things, we will have to learn all of the pieces of the deep learning puzzle. This includes different types of layers, regularization methods, optimizers, how to put layers together into architectures, labeling techniques, and much more. We are not just going to dump all of these things on you, though; we will introduce them progressively as needed, to solve actual problems related to the projects we are working on.
+
+
This is just a preview of this chapter. The rest of this chapter is not available here, but you can read the source notebook, which has the same content (but with less nice formatting).
In the previous chapter you learned some important practical techniques for training models. Considerations like selecting learning rates and the number of epochs are very important to getting good results.
+
In this chapter we are going to look at two other types of computer vision problems: multi-label classification and regression. The first one is when you want to predict more than one label per image (or sometimes none at all), and the second is when your labels are one or several numbers—a quantity instead of a category.
+
In the process we will study more deeply the output activations, targets, and loss functions in deep learning models.
+
+
6.1 Multi-Label Classification
+
Multi-label classification refers to the problem of identifying the categories of objects in images that may not contain exactly one type of object. There may be more than one kind of object, or there may be no objects at all in the classes that you are looking for.
+
For instance, this would have been a great approach for our bear classifier. One problem with the bear classifier that we rolled out in Chapter 2 was that if a user uploaded something that wasn’t any kind of bear, the model would still say it was either a grizzly, black, or teddy bear—it had no ability to predict “not a bear at all.” In fact, after we have completed this chapter, it would be a great exercise for you to go back to your image classifier application, and try to retrain it using the multi-label technique, then test it by passing in an image that is not of any of your recognized classes.
+
In practice, we have not seen many examples of people training multi-label classifiers for this purpose—but we very often see both users and developers complaining about this problem. It appears that this simple solution is not at all widely understood or appreciated! Because it is probably more common to have some images with zero matches or more than one match, we should expect multi-label classifiers to be more widely applicable than single-label classifiers in practice.
+
First, let’s see what a multi-label dataset looks like, then we’ll explain how to get it ready for our model. You’ll see that the architecture of the model does not change from the last chapter; only the loss function does. Let’s start with the data.
+
+
This is just a preview of this chapter. The rest of this chapter is not available here, but you can read the source notebook, which has the same content (but with less nice formatting).
This chapter introduces more advanced techniques for training an image classification model and getting state-of-the-art results. You can skip it if you want to learn more about other applications of deep learning and come back to it later—knowledge of this material will not be assumed in later chapters.
+
We will look at what normalization is, a powerful data augmentation technique called mixup, the progressive resizing approach and test time augmentation. To show all of this, we are going to train a model from scratch (not using transfer learning) using a subset of ImageNet called Imagenette. It contains a subset of 10 very different categories from the original ImageNet dataset, making for quicker training when we want to experiment.
+
This is going to be much harder to do well than with our previous datasets because we’re using full-size, full-color images, which are photos of objects of different sizes, in different orientations, in different lighting, and so forth. So, in this chapter we’re going to introduce some important techniques for getting the most out of your dataset, especially when you’re training from scratch, or using transfer learning to train a model on a very different kind of dataset than the pretrained model used.
+
+
7.1 Imagenette
+
When fast.ai first started there were three main datasets that people used for building and testing computer vision models:
+
+
ImageNet:: 1.3 million images of various sizes around 500 pixels across, in 1,000 categories, which took a few days to train
+
MNIST:: 28×28-pixel grayscale images of handwritten digits
+
CIFAR10:: 60,000 32×32-pixel color images in 10 classes
+
+
The problem was that results on the smaller datasets didn’t actually generalize effectively to the large ImageNet dataset. The approaches that worked well on ImageNet generally had to be developed and trained on ImageNet. This led to many people believing that only researchers with access to giant computing resources could effectively contribute to developing image classification algorithms.
+
We thought that seemed very unlikely to be true. We had never actually seen a study that showed that ImageNet happened to be exactly the right size, and that other datasets could not be developed which would provide useful insights. So we thought we would try to create a new dataset that researchers could test their algorithms on quickly and cheaply, but which would also provide insights likely to work on the full ImageNet dataset.
+
About three hours later we had created Imagenette. We selected 10 classes from the full ImageNet that looked very different from one another. As we had hoped, we were able to quickly and cheaply create a classifier capable of recognizing these classes. We then tried out a few algorithmic tweaks to see how they impacted Imagenette. We found some that worked pretty well, and tested them on ImageNet as well—and we were very pleased to find that our tweaks worked well on ImageNet too!
+
There is an important message here: the dataset you get given is not necessarily the dataset you want. It’s particularly unlikely to be the dataset that you want to do your development and prototyping in. You should aim to have an iteration speed of no more than a couple of minutes—that is, when you come up with a new idea you want to try out, you should be able to train a model and see how it goes within a couple of minutes. If it’s taking longer to do an experiment, think about how you could cut down your dataset, or simplify your model, to improve your experimentation speed. The more experiments you can do, the better!
+
+
This is just a preview of this chapter. The rest of this chapter is not available here, but you can read the source notebook, which has the same content (but with less nice formatting).
One very common problem to solve is when you have a number of users and a number of products, and you want to recommend which products are most likely to be useful for which users. There are many variations of this: for example, recommending movies (such as on Netflix), figuring out what to highlight for a user on a home page, deciding what stories to show in a social media feed, and so forth. There is a general solution to this problem, called collaborative filtering, which works like this: look at what products the current user has used or liked, find other users that have used or liked similar products, and then recommend other products that those users have used or liked.
+
For example, on Netflix you may have watched lots of movies that are science fiction, full of action, and were made in the 1970s. Netflix may not know these particular properties of the films you have watched, but it will be able to see that other people that have watched the same movies that you watched also tended to watch other movies that are science fiction, full of action, and were made in the 1970s. In other words, to use this approach we don’t necessarily need to know anything about the movies, except who likes to watch them.
+
There is actually a more general class of problems that this approach can solve, not necessarily involving users and products. Indeed, for collaborative filtering we more commonly refer to items, rather than products. Items could be links that people click on, diagnoses that are selected for patients, and so forth.
+
The key foundational idea is that of latent factors. In the Netflix example, we started with the assumption that you like old, action-packed sci-fi movies. But you never actually told Netflix that you like these kinds of movies. And Netflix never actually needed to add columns to its movies table saying which movies are of these types. Still, there must be some underlying concept of sci-fi, action, and movie age, and these concepts must be relevant for at least some people’s movie watching decisions.
+
For this chapter we are going to work on this movie recommendation problem. We’ll start by getting some data suitable for a collaborative filtering model.
+
+
This is just a preview of this chapter. The rest of this chapter is not available here, but you can read the source notebook, which has the same content (but with less nice formatting).
Tabular modeling takes data in the form of a table (like a spreadsheet or CSV). The objective is to predict the value in one column based on the values in the other columns. In this chapter we will not only look at deep learning but also more general machine learning techniques like random forests, as they can give better results depending on your problem.
+
We will look at how we should preprocess and clean the data as well as how to interpret the result of our models after training, but first, we will see how we can feed columns that contain categories into a model that expects numbers by using embeddings.
+
+
9.1 Categorical Embeddings
+
In tabular data some columns may contain numerical data, like “age,” while others contain string values, like “sex.” The numerical data can be directly fed to the model (with some optional preprocessing), but the other columns need to be converted to numbers. Since the values in those columns correspond to different categories, we often call them categorical variables. Columns of the first type are called continuous variables.
+
+
jargon: Continuous and Categorical Variables: Continuous variables are numerical data, such as “age,” that can be directly fed to the model, since you can add and multiply them directly. Categorical variables contain a number of discrete levels, such as “movie ID,” for which addition and multiplication don’t have meaning (even if they’re stored as numbers).
+
+
At the end of 2015, the Rossmann sales competition ran on Kaggle. Competitors were given a wide range of information about various stores in Germany, and were tasked with trying to predict sales on a number of days. The goal was to help the company to manage stock properly and be able to satisfy demand without holding unnecessary inventory. The official training set provided a lot of information about the stores. It was also permitted for competitors to use additional data, as long as that data was made public and available to all participants.
+
+
This is just a preview of this chapter. The rest of this chapter is not available here, but you can read the source notebook, which has the same content (but with less nice formatting).
In Chapter 4 we learned how to create a neural network recognizing images. We were able to achieve a bit over 98% accuracy at distinguishing 3s from 7s—but we also saw that fastai’s built-in classes were able to get close to 100%. Let’s start trying to close the gap.
+
In this chapter, we will begin by digging into what convolutions are and building a CNN from scratch. We will then study a range of techniques to improve training stability and learn all the tweaks the library usually applies for us to get great results.
+
+
13.1 The Magic of Convolutions
+
One of the most powerful tools that machine learning practitioners have at their disposal is feature engineering. A feature is a transformation of the data which is designed to make it easier to model. For instance, the add_datepart function that we used for our tabular dataset preprocessing in Chapter 9 added date features to the Bulldozers dataset. What kinds of features might we be able to create from images?
+
+
jargon: Feature engineering: Creating new transformations of the input data in order to make it easier to model.
+
+
In the context of an image, a feature is a visually distinctive attribute. For example, the number 7 is characterized by a horizontal edge near the top of the digit, and a top-right to bottom-left diagonal edge underneath that. On the other hand, the number 3 is characterized by a diagonal edge in one direction at the top left and bottom right of the digit, the opposite diagonal at the bottom left and top right, horizontal edges at the middle, top, and bottom, and so forth. So what if we could extract information about where the edges occur in each image, and then use that information as our features, instead of raw pixels?
+
It turns out that finding the edges in an image is a very common task in computer vision, and is surprisingly straightforward. To do it, we use something called a convolution. A convolution requires nothing more than multiplication and addition—two operations that are responsible for the vast majority of work that we will see in every single deep learning model in this book!
+
A convolution applies a kernel across an image. A kernel is a little matrix, such as the 3×3 matrix in the top right of Figure 13.1.
+
+
+
+
The 7×7 grid to the left is the image we’re going to apply the kernel to. The convolution operation multiplies each element of the kernel by each element of a 3×3 block of the image. The results of these multiplications are then added together. The diagram in Figure 13.1 shows an example of applying a kernel to a single location in the image, the 3×3 block around cell 18.
+
Let’s do this with code. First, we create a little 3×3 matrix like so:
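The kernel itself isn’t shown in this preview. A definition consistent with the formula discussed below (a row of -1s on top, zeros in the middle, and 1s on the bottom) would be:

top_edge = tensor([[-1,-1,-1],
                   [ 0, 0, 0],
                   [ 1, 1, 1]]).float()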
Now we’re going to take the top 3×3-pixel square of our image, and multiply each of those values by each item in our kernel. Then we’ll add them up, like so:
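A sketch of that cell, assuming the image tensor is called im3_t as in the cells that follow:

(im3_t[0:3,0:3] * top_edge).sum()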
There’s a top edge at cell 5,7. Let’s repeat our calculation there:
+
+
(im3_t[4:7,6:9] * top_edge).sum()
+
+
tensor(762.)
+
+
+
There’s a right edge at cell 8,18. What does that give us?:
+
+
(im3_t[7:10,17:20] * top_edge).sum()
+
+
tensor(-29.)
+
+
+
As you can see, this little calculation is returning a high number where the 3×3-pixel square represents a top edge (i.e., where there are low values at the top of the square, and high values immediately underneath). That’s because the -1 values in our kernel have little impact in that case, but the 1 values have a lot.
+
Let’s look a tiny bit at the math. The filter will take any window of size 3×3 in our images, and if we name the pixel values like this:
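\[\begin{matrix} a1 & a2 & a3 \\ a4 & a5 & a6 \\ a7 & a8 & a9 \end{matrix}\]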
it will return \(-a1-a2-a3+a7+a8+a9\). If we are in a part of the image where \(a1\), \(a2\), and \(a3\) add up to the same as \(a7\), \(a8\), and \(a9\), then the terms will cancel each other out and we will get 0. However, if \(a7\) is greater than \(a1\), \(a8\) is greater than \(a2\), and \(a9\) is greater than \(a3\), we will get a bigger number as a result. So this filter detects horizontal edges—more precisely, edges where we go from bright parts of the image at the top to darker parts at the bottom.
+
Changing our filter to have the row of 1s at the top and the -1s at the bottom would detect horizontal edges that go from dark to light. Putting the 1s and -1s in columns versus rows would give us filters that detect vertical edges. Each set of weights will produce a different kind of outcome.
+
Let’s create a function to do this for one location, and check it matches our result from before:
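A minimal sketch of such a function, matching the slicing used in the cells above (here row and col name the center of the 3×3 window):

def apply_kernel(row, col, kernel):
    return (im3_t[row-1:row+2, col-1:col+2] * kernel).sum()

apply_kernel(5, 7, top_edge)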
But note that we can’t apply it to the corner (e.g., location 0,0), since there isn’t a complete 3×3 square there.
+
+
Mapping a Convolution Kernel
+
We can map apply_kernel() across the coordinate grid. That is, we’ll be taking our 3×3 kernel, and applying it to each 3×3 section of our image. For instance, Figure 13.2 shows the positions a 3×3 kernel can be applied to in the first row of a 5×5 image.
+
+
+
+
To get a grid of coordinates we can use a nested list comprehension, like so:
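For example, the grid of valid centers for a 3×3 kernel over a 5×5 image can be written as:

[[(i, j) for j in range(1, 4)] for i in range(1, 4)]

which produces [[(1, 1), (1, 2), (1, 3)], [(2, 1), (2, 2), (2, 3)], [(3, 1), (3, 2), (3, 3)]].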
note: Nested List Comprehensions: Nested list comprehensions are used a lot in Python, so if you haven’t seen them before, take a few minutes to make sure you understand what’s happening here, and experiment with writing your own nested list comprehensions.
+
+
Here’s the result of applying our kernel over a coordinate grid:
+
+
rng = range(1,27)
top_edge3 = tensor([[apply_kernel(i,j,top_edge) for j in rng] for i in rng])
+
show_image(top_edge3);
+
+
+
+
+
+
+
Looking good! Our top edges are black, and bottom edges are white (since they are the opposite of top edges). Now that our image contains negative numbers too, matplotlib has automatically changed our colors so that white is the smallest number in the image, black the highest, and zeros appear as gray.
+
We can try the same thing for left edges:
+
+
left_edge = tensor([[-1,1,0],
                    [-1,1,0],
                    [-1,1,0]]).float()
+
left_edge3 = tensor([[apply_kernel(i,j,left_edge) for j in rng] for i in rng])
+
show_image(left_edge3);
+
+
+
+
+
+
+
As we mentioned before, a convolution is the operation of applying such a kernel over a grid in this way. In the paper “A Guide to Convolution Arithmetic for Deep Learning” there are many great diagrams showing how image kernels can be applied. Here’s an example from the paper showing (at the bottom) a light blue 4×4 image, with a dark blue 3×3 kernel being applied, creating a 2×2 green output activation map at the top.
+
+
+
+
Look at the shape of the result. If the original image has a height of h and a width of w, how many 3×3 windows can we find? As you can see from the example, there are h-2 by w-2 windows, so the result we get has a height of h-2 and a width of w-2.
+
We won’t implement this convolution function from scratch, but use PyTorch’s implementation instead (it is way faster than anything we could do in Python).
+
+
+
Convolutions in PyTorch
+
Convolution is such an important and widely used operation that PyTorch has it built in. It’s called F.conv2d (recall that F is a fastai import from torch.nn.functional, as recommended by PyTorch). The PyTorch docs tell us that it includes these parameters:
+
+
input:: input tensor of shape (minibatch, in_channels, iH, iW)
+
weight:: filters of shape (out_channels, in_channels, kH, kW)
+
+
Here iH,iW is the height and width of the image (i.e., 28,28), and kH,kW is the height and width of our kernel (3,3). But apparently PyTorch is expecting rank-4 tensors for both these arguments, whereas currently we only have rank-2 tensors (i.e., matrices, or arrays with two axes).
+
The reason for these extra axes is that PyTorch has a few tricks up its sleeve. The first trick is that PyTorch can apply a convolution to multiple images at the same time. That means we can call it on every item in a batch at once!
+
The second trick is that PyTorch can apply multiple kernels at the same time. So let’s create the diagonal-edge kernels too, and then stack all four of our edge kernels into a single tensor:
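The kernels themselves aren’t shown in this preview; the exact diagonal values below are illustrative, the important part being the torch.stack call that combines the four 3×3 kernels into one tensor of shape [4, 3, 3]:

diag1_edge = tensor([[ 0,-1, 1],
                     [-1, 1, 0],
                     [ 1, 0, 0]]).float()
diag2_edge = tensor([[ 1,-1, 0],
                     [ 0, 1,-1],
                     [ 0, 0, 1]]).float()

edge_kernels = torch.stack([left_edge, top_edge, diag1_edge, diag2_edge])
edge_kernels.shape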
By default, fastai puts data on the GPU when using data blocks. Let’s move it to the CPU for our examples:
+
+
xb,yb = to_cpu(xb),to_cpu(yb)
+
+
One batch contains 64 images, each of 1 channel, with 28×28 pixels. F.conv2d can handle multichannel (i.e., color) images too. A channel is a single basic color in an image—for regular full-color images there are three channels, red, green, and blue. PyTorch represents an image as a rank-3 tensor, with dimensions [channels, rows, columns].
+
We’ll see how to handle more than one channel later in this chapter. Kernels passed to F.conv2d need to be rank-4 tensors: [features_out, channels_in, rows, columns]. edge_kernels is currently missing one of these. We need to tell PyTorch that the number of input channels in the kernel is one, which we can do by inserting an axis of size one (this is known as a unit axis) in the second position, where the PyTorch docs show in_channels is expected. To insert a unit axis into a tensor, we use the unsqueeze method:
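A sketch of those two steps (the calls follow the shapes described in the text):

edge_kernels = edge_kernels.unsqueeze(1)   # [4, 3, 3] -> [4, 1, 3, 3]

batch_features = F.conv2d(xb, edge_kernels)
batch_features.shape                        # should be [64, 4, 26, 26]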
The output shape shows we gave 64 images in the mini-batch, 4 kernels, and 26×26 edge maps (we started with 28×28 images, but lost one pixel from each side as discussed earlier). We can see we get the same results as when we did this manually:
+
+
show_image(batch_features[0,0]);
+
+
+
+
+
+
+
The most important trick that PyTorch has up its sleeve is that it can use the GPU to do all this work in parallel—that is, applying multiple kernels, to multiple images, across multiple channels. Doing lots of work in parallel is critical to getting GPUs to work efficiently; if we did each of these operations one at a time, we’d often run hundreds of times slower (and if we used our manual convolution loop from the previous section, we’d be millions of times slower!). Therefore, to become a strong deep learning practitioner, one skill to practice is giving your GPU plenty of work to do at a time.
+
It would be nice to not lose those two pixels on each axis. The way we do that is to add padding, which is simply additional pixels added around the outside of our image. Most commonly, pixels of zeros are added.
+
+
+
Strides and Padding
+
With appropriate padding, we can ensure that the output activation map is the same size as the original image, which can make things a lot simpler when we construct our architectures. Figure 13.4 shows how adding padding allows us to apply the kernels in the image corners.
+
+
+
+
With a 5×5 input, 4×4 kernel, and 2 pixels of padding, we end up with a 6×6 activation map, as we can see in Figure 13.5.
+
+
+
+
If we use a kernel of size ks by ks (with ks an odd number), the necessary padding on each side to keep the same shape is ks//2. An even number for ks would require a different amount of padding on the top/bottom and left/right, but in practice we almost never use an even filter size.
+
So far, when we have applied the kernel to the grid, we have moved it one pixel over at a time. But we can jump further; for instance, we could move over two pixels after each kernel application, as in Figure 13.6. This is known as a stride-2 convolution. The most common kernel size in practice is 3×3, and the most common padding is 1. As you’ll see, stride-2 convolutions are useful for decreasing the size of our outputs, and stride-1 convolutions are useful for adding layers without changing the output size.
+
+
+
+
In an image of size h by w, using a padding of 1 and a stride of 2 will give us a result of size (h+1)//2 by (w+1)//2. The general formula for each dimension is (n + 2*pad - ks)//stride + 1, where pad is the padding, ks the size of our kernel, and stride the stride.
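As a quick check of that formula (conv_out_size is just a throwaway helper, not a library function):

def conv_out_size(n, pad=1, ks=3, stride=2): return (n + 2*pad - ks)//stride + 1

conv_out_size(28)             # 14: a stride-2 conv halves a 28-pixel axis
conv_out_size(28, stride=1)   # 28: stride 1 with padding 1 keeps the size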
+
Let’s now take a look at how the pixel values of the result of our convolutions are computed.
+
+
+
Understanding the Convolution Equations
+
To explain the math behind convolutions, fast.ai student Matt Kleinsmith came up with the very clever idea of showing CNNs from different viewpoints. In fact, it’s so clever, and so helpful, we’re going to show it here too!
+
Here’s our 3×3 pixel image, with each pixel labeled with a letter:
+
+
And here’s our kernel, with each weight labeled with a Greek letter:
+
+
Since the filter fits in the image four times, we have four results:
+
+
Figure 13.7 shows how we applied the kernel to each section of the image to yield each result.
Notice that the bias term, b, is the same for each section of the image. You can consider the bias as part of the filter, just like the weights (α, β, γ, δ) are part of the filter.
+
Here’s an interesting insight—a convolution can be represented as a special kind of matrix multiplication, as illustrated in Figure 13.9. The weight matrix is just like the ones from traditional neural networks. However, this weight matrix has two special properties:
+
+
The zeros shown in gray are untrainable. This means that they’ll stay zero throughout the optimization process.
+
Some of the weights are equal, and while they are trainable (i.e., changeable), they must remain equal. These are called shared weights.
+
+
The zeros correspond to the pixels that the filter can’t touch. Each row of the weight matrix corresponds to one application of the filter.
+
+
+
+
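To make this concrete, here is a small sketch (not from the notebook) that builds such a weight matrix for the 3×3 image and 2×2 kernel above, and checks that multiplying by it reproduces the convolution:

import torch

a, b, c, d = 1., 2., 3., 4.                 # stand-ins for the kernel weights α, β, γ, δ
kernel = torch.tensor([[a, b], [c, d]])

W = torch.zeros(4, 9)                       # 4 outputs, 9 flattened input pixels
for out_r in range(2):
    for out_c in range(2):
        row = out_r*2 + out_c
        for kr in range(2):
            for kc in range(2):
                # shared weights; every other entry stays at an untrainable zero
                W[row, (out_r+kr)*3 + (out_c+kc)] = kernel[kr, kc]

img = torch.arange(9.).reshape(3, 3)
conv_by_matmul = (W @ img.flatten()).reshape(2, 2)
conv_direct = torch.tensor([[(img[i:i+2, j:j+2]*kernel).sum().item() for j in range(2)]
                            for i in range(2)])
assert torch.allclose(conv_by_matmul, conv_direct)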
Now that we understand what a convolution is, let’s use them to build a neural net.
+
+
+
+
13.2 Our First Convolutional Neural Network
+
There is no reason to believe that some particular edge filters are the most useful kernels for image recognition. Furthermore, we’ve seen that in later layers convolutional kernels become complex transformations of features from lower levels, but we don’t have a good idea of how to manually construct these.
+
Instead, it would be best to learn the values of the kernels. We already know how to do this—SGD! In effect, the model will learn the features that are useful for classification.
+
When we use convolutions instead of (or in addition to) regular linear layers we create a convolutional neural network (CNN).
+
+
Creating the CNN
+
Let’s go back to the basic neural network we had in Chapter 4. It was defined like this:
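That definition isn’t reproduced in this preview; a sketch along these lines (the hidden size of 30 is an assumption):

simple_net = nn.Sequential(
    nn.Linear(28*28, 30),
    nn.ReLU(),
    nn.Linear(30, 1)
)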
We now want to create a similar architecture to this linear model, but using convolutional layers instead of linear. nn.Conv2d is the module equivalent of F.conv2d. It’s more convenient than F.conv2d when creating an architecture, because it creates the weight matrix for us automatically when we instantiate it.
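The model definition is omitted from this preview; a sketch of the kind of network being described, which we’ll call broken_cnn as in the cell below (the 30 hidden channels are illustrative; padding=1 keeps the 28×28 size):

broken_cnn = nn.Sequential(
    nn.Conv2d(1, 30, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(30, 1, kernel_size=3, padding=1)
)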
One thing to note here is that we didn’t need to specify 28×28 as the input size. That’s because a linear layer needs a weight in the weight matrix for every pixel, so it needs to know how many pixels there are, but a convolution is applied over each pixel automatically. The weights only depend on the number of input and output channels and the kernel size, as we saw in the previous section.
+
Think about what the output shape is going to be, then let’s try it and see:
+
+
broken_cnn(xb).shape
+
+
torch.Size([64, 1, 28, 28])
+
+
+
This is not something we can use to do classification, since we need a single output activation per image, not a 28×28 map of activations. One way to deal with this is to use enough stride-2 convolutions such that the final layer is size 1. That is, after one stride-2 convolution the size will be 14×14, after two it will be 7×7, then 4×4, 2×2, and finally size 1.
+
Let’s try that now. First, we’ll define a function with the basic parameters we’ll use in each convolution:
+
+
def conv(ni, nf, ks=3, act=True):
    res = nn.Conv2d(ni, nf, stride=2, kernel_size=ks, padding=ks//2)
    if act: res = nn.Sequential(res, nn.ReLU())
    return res
+
+
+
important: Refactoring: Refactoring parts of your neural networks like this makes it much less likely you’ll get errors due to inconsistencies in your architectures, and makes it more obvious to the reader which parts of your layers are actually changing.
+
+
When we use a stride-2 convolution, we often increase the number of features at the same time. This is because we’re decreasing the number of activations in the activation map by a factor of 4; we don’t want to decrease the capacity of a layer by too much at a time.
+
+
jargon: channels and features: These two terms are largely used interchangeably, and refer to the size of the second axis of a weight matrix, that is, the number of activations per grid cell after a convolution. Features is never used to refer to the input data, but channels can refer to either the input data (generally channels are colors) or activations inside the network.
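The network definition is omitted from this preview; a reconstruction consistent with the summary output shown below (and with the activation-map comments mentioned in the note that follows) is:

simple_cnn = nn.Sequential(
    conv(1, 4),             #14x14
    conv(4, 8),             #7x7
    conv(8, 16),            #4x4
    conv(16, 32),           #2x2
    conv(32, 2, act=False), #1x1
    Flatten(),
)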
j: I like to add comments like the ones here after each convolution to show how large the activation map will be after each layer. These comments assume that the input size is 28*28
+
+
Now the network outputs two activations, which map to the two possible levels in our labels:
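A sketch of checking that, and of wrapping the model in a Learner so we can call summary and fit_one_cycle below (dls here is the DataLoaders used in this chapter; the loss function and metric follow the summary output):

simple_cnn(xb).shape    # torch.Size([64, 2])

learn = Learner(dls, simple_cnn, loss_func=F.cross_entropy, metrics=accuracy)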
To see exactly what’s going on in the model, we can use summary:
+
+
learn.summary()
+
+
Sequential (Input shape: ['64 x 1 x 28 x 28'])
================================================================
Layer (type)         Output Shape         Param #    Trainable
================================================================
Conv2d               64 x 4 x 14 x 14     40         True
________________________________________________________________
ReLU                 64 x 4 x 14 x 14     0          False
________________________________________________________________
Conv2d               64 x 8 x 7 x 7       296        True
________________________________________________________________
ReLU                 64 x 8 x 7 x 7       0          False
________________________________________________________________
Conv2d               64 x 16 x 4 x 4      1,168      True
________________________________________________________________
ReLU                 64 x 16 x 4 x 4      0          False
________________________________________________________________
Conv2d               64 x 32 x 2 x 2      4,640      True
________________________________________________________________
ReLU                 64 x 32 x 2 x 2      0          False
________________________________________________________________
Conv2d               64 x 2 x 1 x 1       578        True
________________________________________________________________
Flatten              64 x 2               0          False
________________________________________________________________
+
Total params: 6,722
Total trainable params: 6,722
Total non-trainable params: 0
+
Optimizer used: <function Adam>
Loss function: <function cross_entropy>
+
Callbacks:
  - TrainEvalCallback
  - Recorder
  - ProgressCallback
+
+
+
Note that the output of the final Conv2d layer is 64x2x1x1. We need to remove those extra 1x1 axes; that’s what Flatten does. It’s basically the same as PyTorch’s squeeze method, but as a module.
+
Let’s see if this trains! Since this is a deeper network than we’ve built from scratch before, we’ll use a lower learning rate and more epochs:
+
+
learn.fit_one_cycle(2, 0.01)
+
+
+
+
+
epoch   train_loss   valid_loss   accuracy   time
0       0.072684     0.045110     0.990186   00:05
1       0.022580     0.030775     0.990186   00:05
+
+
+
+
+
+
Success! It’s getting closer to the resnet18 result we had, although it’s not quite there yet, and it’s taking more epochs, and we need to use a lower learning rate. We still have a few more tricks to learn, but we’re getting closer and closer to being able to create a modern CNN from scratch.
+
+
+
Understanding Convolution Arithmetic
+
We can see from the summary that we have an input of size 64x1x28x28. The axes are batch, channel, height, width. This is often represented as NCHW (where N refers to batch size). TensorFlow, on the other hand, uses NHWC axis order. The first layer is:
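The cell showing that layer is omitted here; it would be along the lines of:

m = learn.model[0]
m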
So we have 1 input channel, 4 output channels, and a 3×3 kernel. Let’s check the weights of the first convolution:
+
+
m[0].weight.shape
+
+
torch.Size([4, 1, 3, 3])
+
+
+
The summary shows we have 40 parameters, and 4*1*3*3 is 36. What are the other four parameters? Let’s see what the bias contains:
+
+
m[0].bias.shape
+
+
torch.Size([4])
+
+
+
We can now use this information to clarify our statement in the previous section: “When we use a stride-2 convolution, we often increase the number of features because we’re decreasing the number of activations in the activation map by a factor of 4; we don’t want to decrease the capacity of a layer by too much at a time.”
+
There is one bias for each channel. (Sometimes channels are called features or filters when they are not input channels.) The output shape is 64x4x14x14, and this will therefore become the input shape to the next layer. The next layer, according to summary, has 296 parameters. Let’s ignore the batch axis to keep things simple. So for each of 14*14=196 locations we are multiplying 296-8=288 weights (ignoring the bias for simplicity), so that’s 196*288=56_448 multiplications at this layer. The next layer will have 7*7*(1168-16)=56_448 multiplications.
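You can check this arithmetic directly:

second_layer_mults = 14*14 * (8*4*3*3)   # 196 locations * 288 weights = 56_448
third_layer_mults  = 7*7 * (16*8*3*3)    # 49 locations * 1,152 weights = 56_448
second_layer_mults, third_layer_mults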
+
What happened here is that our stride-2 convolution halved the grid size from 14x14 to 7x7, and we doubled the number of filters from 8 to 16, resulting in no overall change in the amount of computation. If we left the number of channels the same in each stride-2 layer, the amount of computation being done in the net would get less and less as it gets deeper. But we know that the deeper layers have to compute semantically rich features (such as eyes or fur), so we wouldn’t expect that doing less computation would make sense.
+
Another way to think of this is based on receptive fields.
+
+
+
Receptive Fields
+
The receptive field is the area of an image that is involved in the calculation of a layer. On the book’s website, you’ll find an Excel spreadsheet called conv-example.xlsx that shows the calculation of two stride-2 convolutional layers using an MNIST digit. Each layer has a single kernel. Figure 13.10 shows what we see if we click on one of the cells in the conv2 section, which shows the output of the second convolutional layer, and click trace precedents.
+
+
+
+
Here, the cell with the green border is the cell we clicked on, and the blue highlighted cells are its precedents—that is, the cells used to calculate its value. These cells are the corresponding 3×3 area of cells from the input layer (on the left), and the cells from the filter (on the right). Let’s now click trace precedents again, to see what cells are used to calculate these inputs. Figure 13.11 shows what happens.
+
+
+
+
In this example, we have just two convolutional layers, each of stride 2, so this is now tracing right back to the input image. We can see that a 7×7 area of cells in the input layer is used to calculate the single green cell in the Conv2 layer. This 7×7 area is the receptive field in the input of the green activation in Conv2. We can also see that a second filter kernel is needed now, since we have two layers.
+
As you see from this example, the deeper we are in the network (specifically, the more stride-2 convs we have before a layer), the larger the receptive field for an activation in that layer. A large receptive field means that a large amount of the input image is used to calculate each activation in that layer. We now know that in the deeper layers of the network we have semantically rich features, corresponding to larger receptive fields. Therefore, we’d expect that we’d need more weights for each of our features to handle this increasing complexity. This is another way of saying the same thing we mentioned in the previous section: when we introduce a stride-2 conv in our network, we should also increase the number of channels.
+
When writing this particular chapter, we had a lot of questions we needed answers for, to be able to explain CNNs to you as best we could. Believe it or not, we found most of the answers on Twitter. We’re going to take a quick break to talk to you about that now, before we move on to color images.
+
+
+
A Note About Twitter
+
We are not, to say the least, big users of social networks in general. But our goal in writing this book is to help you become the best deep learning practitioner you can, and we would be remiss not to mention how important Twitter has been in our own deep learning journeys.
+
You see, there’s another part of Twitter, far away from Donald Trump and the Kardashians, which is the part of Twitter where deep learning researchers and practitioners talk shop every day. As we were writing this section, Jeremy wanted to double-check that what we were saying about stride-2 convolutions was accurate, so he asked on Twitter:
+
+
A few minutes later, this answer popped up:
+
+
Christian Szegedy is the first author of Inception, the 2014 ImageNet winner and source of many key insights used in modern neural networks. Two hours later, this appeared:
+
+
Do you recognize that name? You saw it in Chapter 2, when we were talking about the Turing Award winners who established the foundations of deep learning today!
+
Jeremy also asked on Twitter for help checking that our description of label smoothing in Chapter 7 was accurate, and got a response again directly from Christian Szegedy (label smoothing was originally introduced in the Inception paper):
+
+
Many of the top people in deep learning today are Twitter regulars, and are very open about interacting with the wider community. One good way to get started is to look at a list of Jeremy’s recent Twitter likes, or Sylvain’s. That way, you can see a list of Twitter users that we think have interesting and useful things to say.
+
Twitter is the main way we both stay up to date with interesting papers, software releases, and other deep learning news. For making connections with the deep learning community, we recommend getting involved both in the fast.ai forums and on Twitter.
+
That said, let’s get back to the meat of this chapter. Up until now, we have only shown you examples of pictures in black and white, with one value per pixel. In practice, most colored images have three values per pixel to define their color. We’ll look at working with color images next.
+
+
+
+
13.3 Color Images
+
A color picture is a rank-3 tensor:
+
+
im = image2tensor(Image.open(image_bear()))
im.shape
+
+
torch.Size([3, 1000, 846])
+
+
+
+
show_image(im);
+
+
+
+
+
+
+
The first axis contains the channels, red, green, and blue:
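The plotting cell is omitted from this preview; one way to look at the three channels separately is a sketch like this (plain matplotlib rather than the book’s plotting helpers):

import matplotlib.pyplot as plt
fig, axs = plt.subplots(1, 3)
for chan, ax, cmap in zip(im, axs, ('Reds', 'Greens', 'Blues')):
    ax.imshow(chan, cmap=cmap)   # one channel per panel
    ax.axis('off')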
We saw what the convolution operation was for one filter on one channel of the image (our examples were done on a square). A convolutional layer will take an image with a certain number of channels (three for the first layer for regular RGB color images) and output an image with a different number of channels. Like our hidden size that represented the number of neurons in a linear layer, we can decide to have as many filters as we want, and each of them will be able to specialize, some to detect horizontal edges, others to detect vertical edges and so forth, to give something like what we studied in Chapter 2.
+
In one sliding window, we have a certain number of channels and we need as many filters (we don’t use the same kernel for all the channels). So our kernel doesn’t have a size of 3 by 3, but ch_in (for channels in) by 3 by 3. On each channel, we multiply the elements of our window by the elements of the corresponding filter, then sum the results (as we saw before) and sum over all the filters. In the example given in Figure 13.12, the result of our conv layer on that window is red + green + blue.
+
+
+
+
So, in order to apply a convolution to a color picture we require a kernel tensor with a size that matches the first axis. At each location, the corresponding parts of the kernel and the image patch are multiplied together.
+
These are then all added together, to produce a single number, for each grid location, for each output feature, as shown in Figure 13.13.
+
+
+
+
Then we have ch_out filters like this, so in the end, the result of our convolutional layer will be a batch of images with ch_out channels and a height and width given by the formula outlined earlier. This gives us ch_out tensors of size ch_in x ks x ks that we represent in one big tensor of four dimensions. In PyTorch, the order of the dimensions for those weights is ch_out x ch_in x ks x ks.
+
Additionally, we may want to have a bias for each filter. In the preceding example, the final result for our convolutional layer would be \(y_{R} + y_{G} + y_{B} + b\) in that case. Like in a linear layer, there are as many biases as we have kernels, so the biases form a vector of size ch_out.
+
There are no special mechanisms required when setting up a CNN for training with color images. Just make sure your first layer has three inputs.
+
There are lots of ways of processing color images. For instance, you can change them to black and white, change from RGB to HSV (hue, saturation, and value) color space, and so forth. In general, it turns out experimentally that changing the encoding of colors won’t make any difference to your model results, as long as you don’t lose information in the transformation. So, transforming to black and white is a bad idea, since it removes the color information entirely (and this can be critical; for instance, a pet breed may have a distinctive color); but converting to HSV generally won’t make any difference.
+
Now you know what those pictures in Chapter 1 of “what a neural net learns” from the Zeiler and Fergus paper mean! This is their picture of some of the layer 1 weights which we showed:
+
+
This is taking the three slices of the convolutional kernel, for each output feature, and displaying them as images. We can see that even though the creators of the neural net never explicitly created kernels to find edges, for instance, the neural net automatically discovered these features using SGD.
+
Now let’s see how we can train these CNNs, and show you all the techniques fastai uses under the hood for efficient training.
+
+
+
13.4 Improving Training Stability
+
Since we are so good at recognizing 3s from 7s, let’s move on to something harder—recognizing all 10 digits. That means we’ll need to use MNIST instead of MNIST_SAMPLE:
+
+
path = untar_data(URLs.MNIST)
+
+
+
path.ls()
+
+
(#2) [Path('testing'),Path('training')]
+
+
+
The data is in two folders named training and testing, so we have to tell GrandparentSplitter about that (it defaults to train and valid). We do that in the get_dls function, which we create to make it easy to change our batch size later:
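The function isn’t shown in this preview; a sketch of what such a get_dls could look like (the exact DataBlock arguments are assumptions):

def get_dls(bs=64):
    return DataBlock(
        blocks=(ImageBlock(cls=PILImageBW), CategoryBlock),
        get_items=get_image_files,
        splitter=GrandparentSplitter('training', 'testing'),
        get_y=parent_label,
        batch_tfms=Normalize()
    ).dataloaders(path, bs=bs)

dls = get_dls()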
Remember, it’s always a good idea to look at your data before you use it:
+
+
dls.show_batch(max_n=9, figsize=(4,4))
+
+
+
+
+
+
+
Now that we have our data ready, we can train a simple model on it.
+
+
A Simple Baseline
+
Earlier in this chapter, we built a model based on a conv function like this:
+
+
def conv(ni, nf, ks=3, act=True):
    res = nn.Conv2d(ni, nf, stride=2, kernel_size=ks, padding=ks//2)
    if act: res = nn.Sequential(res, nn.ReLU())
    return res
+
+
Let’s start with a basic CNN as a baseline. We’ll use the same one as earlier, but with one tweak: we’ll use more activations. Since we have more numbers to differentiate, it’s likely we will need to learn more filters.
+
As we discussed, we generally want to double the number of filters each time we have a stride-2 layer. One way to increase the number of filters throughout our network is to double the number of activations in the first layer–then every layer after that will end up twice as big as in the previous version as well.
+
But there is a subtle problem with this. Consider the kernel that is being applied to each pixel. By default, we use a 3×3-pixel kernel. That means that there are a total of 3×3 = 9 pixels that the kernel is being applied to at each location. Previously, our first layer had four output filters. That meant that there were four values being computed from nine pixels at each location. Think about what happens if we double this output to eight filters. Then when we apply our kernel we will be using nine pixels to calculate eight numbers. That means it isn’t really learning much at all: the output size is almost the same as the input size. Neural networks will only create useful features if they’re forced to do so—that is, if the number of outputs from an operation is significantly smaller than the number of inputs.
+
To fix this, we can use a larger kernel in the first layer. If we use a kernel of 5×5 pixels then there are 25 pixels being used at each kernel application. Creating eight filters from this will mean the neural net will have to find some useful features:
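The definition isn’t reproduced here; a sketch consistent with the description (a 5×5 first layer producing 8 filters, doubling at each stride-2 layer, and 10 outputs for the 10 digits) could be:

def simple_cnn():
    return nn.Sequential(
        conv(1, 8, ks=5),        #14x14
        conv(8, 16),             #7x7
        conv(16, 32),            #4x4
        conv(32, 64),            #2x2
        conv(64, 10, act=False), #1x1
        Flatten(),
    )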
As you’ll see in a moment, we can look inside our models while they’re training in order to try to find ways to make them train better. To do this we use the ActivationStats callback, which records the mean, standard deviation, and histogram of activations of every trainable layer (as we’ve seen, callbacks are used to add behavior to the training loop; we’ll explore how they work in Chapter 16):
+
+
from fastai.callback.hook import *
+
+
We want to train quickly, so that means training at a high learning rate. Let’s see how we go at 0.06:
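The training cell is omitted from this preview; a sketch of it, assuming a small fit helper like the one used in the cells below (the helper name and exact arguments are assumptions):

def fit(epochs=1):
    learn = Learner(dls, simple_cnn(), loss_func=F.cross_entropy,
                    metrics=accuracy, cbs=ActivationStats(with_hist=True))
    learn.fit(epochs, 0.06)
    return learn

learn = fit()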
This didn’t train at all well! Let’s find out why.
+
One handy feature of the callbacks passed to Learner is that they are made available automatically, with the same name as the callback class, except in snake_case. So, our ActivationStats callback can be accessed through activation_stats. I’m sure you remember learn.recorder… can you guess how that is implemented? That’s right, it’s a callback called Recorder!
+
ActivationStats includes some handy utilities for plotting the activations during training. plot_layer_stats(idx) plots the mean and standard deviation of the activations of layer number idx, along with the percentage of activations near zero. Here’s the first layer’s plot:
+
+
learn.activation_stats.plot_layer_stats(0)
+
+
+
+
+
+
+
Generally our model should have a consistent, or at least smooth, mean and standard deviation of layer activations during training. Activations near zero are particularly problematic, because it means we have computation in the model that’s doing nothing at all (since multiplying by zero gives zero). When you have some zeros in one layer, they will therefore generally carry over to the next layer… which will then create more zeros. Here’s the penultimate layer of our network:
+
+
learn.activation_stats.plot_layer_stats(-2)
+
+
+
+
+
+
+
As expected, the problems get worse towards the end of the network, as the instability and zero activations compound over layers. Let’s look at what we can do to make training more stable.
+
+
+
Increase Batch Size
+
One way to make training more stable is to increase the batch size. Larger batches have gradients that are more accurate, since they’re calculated from more data. On the downside, though, a larger batch size means fewer batches per epoch, which means fewer opportunities for your model to update weights. Let’s see if a batch size of 512 helps:
+
+
dls = get_dls(512)
+
+
+
learn = fit()
+
+
+
+
+
epoch   train_loss   valid_loss   accuracy   time
0       2.309385     2.302744     0.113500   00:08
+
+
+
+
+
+
Let’s see what the penultimate layer looks like:
+
+
learn.activation_stats.plot_layer_stats(-2)
+
+
+
+
+
+
+
Again, we’ve got most of our activations near zero. Let’s see what else we can do to improve training stability.
+
+
+
1cycle Training
+
Our initial weights are not well suited to the task we’re trying to solve. Therefore, it is dangerous to begin training with a high learning rate: we may very well make the training diverge instantly, as we’ve seen. We probably don’t want to end training with a high learning rate either, so that we don’t skip over a minimum. But we want to train at a high learning rate for the rest of the training period, because we’ll be able to train more quickly that way. Therefore, we should change the learning rate during training, from low, to high, and then back to low again.
+
Leslie Smith (yes, the same guy that invented the learning rate finder!) developed this idea in his article “Super-Convergence: Very Fast Training of Neural Networks Using Large Learning Rates”. He designed a schedule for learning rate separated into two phases: one where the learning rate grows from the minimum value to the maximum value (warmup), and one where it decreases back to the minimum value (annealing). Smith called this combination of approaches 1cycle training.
+
1cycle training allows us to use a much higher maximum learning rate than other types of training, which gives two benefits:
+
+
By training with higher learning rates, we train faster—a phenomenon Smith named super-convergence.
+
By training with higher learning rates, we overfit less because we skip over the sharp local minima to end up in a smoother (and therefore more generalizable) part of the loss.
+
+
The second point is an interesting and subtle one; it is based on the observation that a model that generalizes well is one whose loss would not change very much if you changed the input by a small amount. If a model trains at a large learning rate for quite a while, and can find a good loss when doing so, it must have found an area that also generalizes well, because it is jumping around a lot from batch to batch (that is basically the definition of a high learning rate). The problem is that, as we have discussed, just jumping to a high learning rate is more likely to result in diverging losses, rather than seeing your losses improve. So we don’t jump straight to a high learning rate. Instead, we start at a low learning rate, where our losses do not diverge, and we allow the optimizer to gradually find smoother and smoother areas of our parameters by gradually going to higher and higher learning rates.
+
Then, once we have found a nice smooth area for our parameters, we want to find the very best part of that area, which means we have to bring our learning rates down again. This is why 1cycle training has a gradual learning rate warmup, and a gradual learning rate cooldown. Many researchers have found that in practice this approach leads to more accurate models and trains more quickly. That is why it is the approach that is used by default for fine_tune in fastai.
+
In Chapter 16 we’ll learn all about momentum in SGD. Briefly, momentum is a technique where the optimizer takes a step not only in the direction of the gradients, but also that continues in the direction of previous steps. Leslie Smith introduced the idea of cyclical momentums in “A Disciplined Approach to Neural Network Hyper-Parameters: Part 1”. It suggests that the momentum varies in the opposite direction of the learning rate: when we are at high learning rates, we use less momentum, and we use more again in the annealing phase.
+
We can use 1cycle training in fastai by calling fit_one_cycle:
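The cell itself is omitted here; in terms of the fit helper sketched earlier, the only change is to call fit_one_cycle instead of fit:

def fit(epochs=1, lr=0.06):
    learn = Learner(dls, simple_cnn(), loss_func=F.cross_entropy,
                    metrics=accuracy, cbs=ActivationStats(with_hist=True))
    learn.fit_one_cycle(epochs, lr)
    return learn

learn = fit()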
We’re finally making some progress! It’s giving us a reasonable accuracy now.
+
We can view the learning rate and momentum throughout training by calling plot_sched on learn.recorder. learn.recorder (as the name suggests) records everything that happens during training, including losses, metrics, and hyperparameters such as learning rate and momentum:
+
+
learn.recorder.plot_sched()
+
+
+
+
+
+
+
Smith’s original 1cycle paper used a linear warmup and linear annealing. As you can see, we adapted the approach in fastai by combining it with another popular approach: cosine annealing. fit_one_cycle provides the following parameters you can adjust:
+
+
lr_max:: The highest learning rate that will be used (this can also be a list of learning rates for each layer group, or a Python slice object containing the first and last layer group learning rates)
+
div:: How much to divide lr_max by to get the starting learning rate
+
div_final:: How much to divide lr_max by to get the ending learning rate
+
pct_start:: What percentage of the batches to use for the warmup
+
moms:: A tuple (mom1,mom2,mom3) where mom1 is the initial momentum, mom2 is the minimum momentum, and mom3 is the final momentum
+
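For instance, a call that adjusts several of these at once might look like this (the values are illustrative, not the library defaults):

learn.fit_one_cycle(5, lr_max=6e-2, div=25., div_final=1e5, pct_start=0.25,
                    moms=(0.95, 0.85, 0.95))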
+
Let’s take a look at our layer stats again:
+
+
learn.activation_stats.plot_layer_stats(-2)
+
+
+
+
+
+
+
The percentage of near-zero weights is getting much better, although it’s still quite high.
+
We can see even more about what’s going on in our training using color_dim, passing it a layer index:
+
+
learn.activation_stats.color_dim(-2)
+
+
+
+
+
+
+
color_dim was developed by fast.ai in conjunction with a student, Stefano Giomo. Stefano, who refers to the idea as the colorful dimension, provides an in-depth explanation of the history and details behind the method. The basic idea is to create a histogram of the activations of a layer, which we would hope would follow a smooth pattern such as the normal distribution shown in the following figure.
+
+
+
+
To create color_dim, we take the histogram shown on the left here, and convert it into just the colored representation shown at the bottom. Then we flip it on its side, as shown on the right. We found that the distribution is clearer if we take the log of the histogram values. Then, Stefano describes:
+
+
The final plot for each layer is made by stacking the histogram of the activations from each batch along the horizontal axis. So each vertical slice in the visualisation represents the histogram of activations for a single batch. The color intensity corresponds to the height of the histogram, in other words the number of activations in each histogram bin.
This illustrates why log(f) is more colorful than f when f follows a normal distribution, because taking the log turns the Gaussian into a quadratic, which isn’t as narrow.
+
So with that in mind, let’s take another look at the result for the penultimate layer:
+
+
learn.activation_stats.color_dim(-2)
+
+
+
+
+
+
+
This shows a classic picture of “bad training.” We start with nearly all activations at zero—that’s what we see at the far left, with all the dark blue. The bright yellow at the bottom represents the near-zero activations. Then, over the first few batches we see the number of nonzero activations exponentially increasing. But it goes too far, and collapses! We see the dark blue return, and the bottom becomes bright yellow again. It almost looks like training restarts from scratch. Then we see the activations increase again, and collapse again. After repeating this a few times, eventually we see a spread of activations throughout the range.
+
It’s much better if training can be smooth from the start. The cycles of exponential increase and then collapse tend to result in a lot of near-zero activations, resulting in slow training and poor final results. One way to solve this problem is to use batch normalization.
+
+
+
Batch Normalization
+
To fix the slow training and poor final results we ended up with in the previous section, we need to fix the initial large percentage of near-zero activations, and then try to maintain a good distribution of activations throughout training.
+
This is exactly the problem that batch normalization, introduced in 2015 by Sergey Ioffe and Christian Szegedy in the paper “Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift,” was designed to solve. In the abstract, the authors describe the problem like this:
+
Training Deep Neural Networks is complicated by the fact that the distribution of each layer’s inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization… We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs.
+
+
Their solution, they say, is:
+
+
Making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization.
+
+
The paper caused great excitement as soon as it was released, because it included the chart in Figure 13.16, which clearly demonstrated that batch normalization could train a model that was even more accurate than the current state of the art (the Inception architecture) and around 5x faster.
+
+
+
+
Batch normalization (often just called batchnorm) works by taking an average of the mean and standard deviations of the activations of a layer and using those to normalize the activations. However, this can cause problems because the network might want some activations to be really high in order to make accurate predictions. So they also added two learnable parameters (meaning they will be updated in the SGD step), usually called gamma and beta. After normalizing the activations to get some new activation vector y, a batchnorm layer returns gamma*y + beta.
+
That’s why our activations can have any mean or variance, independent from the mean and standard deviation of the results of the previous layer. Those statistics are learned separately, making training easier on our model. The behavior is different during training and validation: during training, we use the mean and standard deviation of the batch to normalize the data, while during validation we instead use a running mean of the statistics calculated during training.
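In code, the core computation looks something like this (a simplified sketch of that description; the real nn.BatchNorm2d also keeps running statistics with momentum and normalizes per channel):
+
def batchnorm(x, gamma, beta, running_mean, running_var, training, eps=1e-5):
+    if training: mean,var = x.mean(0), x.var(0)        # statistics of this mini-batch
+    else:        mean,var = running_mean, running_var  # running statistics from training
+    y = (x - mean) / (var + eps).sqrt()                # normalize the activations
+    return gamma*y + beta                              # learnable scale and shift
+
In a convolutional model, using batchnorm usually just means adding an nn.BatchNorm2d layer after each convolution.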
Training with batchnorm added to our model gives a great result. Let’s take a look at color_dim:
+
+
learn.activation_stats.color_dim(-4)
+
+
+
+
+
+
+
This is just what we hope to see: a smooth development of activations, with no “crashes.” Batchnorm has really delivered on its promise here! In fact, batchnorm has been so successful that we see it (or something very similar) in nearly all modern neural networks.
+
An interesting observation about models containing batch normalization layers is that they tend to generalize better than models that don’t contain them. Although we haven’t as yet seen a rigorous analysis of what’s going on here, most researchers believe that the reason for this is that batch normalization adds some extra randomness to the training process. Each mini-batch will have a somewhat different mean and standard deviation than other mini-batches. Therefore, the activations will be normalized by different values each time. In order for the model to make accurate predictions, it will have to learn to become robust to these variations. In general, adding additional randomization to the training process often helps.
+
Since things are going so well, let’s train for a few more epochs and see how it goes. In fact, let’s increase the learning rate, since the abstract of the batchnorm paper claimed we should be able to “train at much higher learning rates”:
+
+
learn = fit(5, lr=0.1)
+
+
+
+
+
epoch   train_loss   valid_loss   accuracy   time
0       0.191731     0.121738     0.960900   00:11
1       0.083739     0.055808     0.981800   00:10
2       0.053161     0.044485     0.987100   00:10
3       0.034433     0.030233     0.990200   00:10
4       0.017646     0.025407     0.991200   00:10
+
+
+
+
+
+
At this point, I think it’s fair to say we know how to recognize digits! It’s time to move on to something harder…
+
+
+
+
13.5 Conclusions
+
We’ve seen that convolutions are just a type of matrix multiplication, with two constraints on the weight matrix: some elements are always zero, and some elements are tied (forced to always have the same value). In Chapter 1 we saw the eight requirements from the 1986 book Parallel Distributed Processing; one of them was “A pattern of connectivity among units.” That’s exactly what these constraints do: they enforce a certain pattern of connectivity.
+
These constraints allow us to use far fewer parameters in our model, without sacrificing the ability to represent complex visual features. That means we can train deeper models faster, with less overfitting. Although the universal approximation theorem shows that it should be possible to represent anything in a fully connected network in one hidden layer, we’ve seen now that in practice we can train much better models by being thoughtful about network architecture.
+
Convolutions are by far the most common pattern of connectivity we see in neural nets (along with regular linear layers, which we refer to as fully connected), but it’s likely that many more will be discovered.
+
We’ve also seen how to interpret the activations of layers in the network to see whether training is going well or not, and how batchnorm helps regularize the training and makes it smoother. In the next chapter, we will use both of those layers to build the most popular architecture in computer vision: a residual network.
+
+
+
13.6 Questionnaire
+
+
What is a “feature”?
+
Write out the convolutional kernel matrix for a top edge detector.
+
Write out the mathematical operation applied by a 3×3 kernel to a single pixel in an image.
+
What is the value of a convolutional kernel applied to a 3×3 matrix of zeros?
+
What is “padding”?
+
What is “stride”?
+
Create a nested list comprehension to complete any task that you choose.
+
What are the shapes of the input and weight parameters to PyTorch’s 2D convolution?
+
What is a “channel”?
+
What is the relationship between a convolution and a matrix multiplication?
+
What is a “convolutional neural network”?
+
What is the benefit of refactoring parts of your neural network definition?
+
What is Flatten? Where does it need to be included in the MNIST CNN? Why?
+
What does “NCHW” mean?
+
Why does the third layer of the MNIST CNN have 7*7*(1168-16) multiplications?
+
What is a “receptive field”?
+
What is the size of the receptive field of an activation after two stride 2 convolutions? Why?
+
Run conv-example.xlsx yourself and experiment with trace precedents.
+
Have a look at Jeremy or Sylvain’s list of recent Twitter “like”s, and see if you find any interesting resources or ideas there.
+
How is a color image represented as a tensor?
+
How does a convolution work with a color input?
+
What method can we use to see that data in DataLoaders?
+
Why do we double the number of filters after each stride-2 conv?
+
Why do we use a larger kernel in the first conv with MNIST (with simple_cnn)?
+
What information does ActivationStats save for each layer?
+
How can we access a learner’s callback after training?
+
What are the three statistics plotted by plot_layer_stats? What does the x-axis represent?
+
Why are activations near zero problematic?
+
What are the upsides and downsides of training with a larger batch size?
+
Why should we avoid using a high learning rate at the start of training?
+
What is 1cycle training?
+
What are the benefits of training with a high learning rate?
+
Why do we want to use a low learning rate at the end of training?
+
What is “cyclical momentum”?
+
What callback tracks hyperparameter values during training (along with other information)?
+
What does one column of pixels in the color_dim plot represent?
+
What does “bad training” look like in color_dim? Why?
+
What trainable parameters does a batch normalization layer contain?
+
What statistics are used to normalize in batch normalization during training? How about during validation?
+
Why do models with batch normalization layers generalize better?
+
+
+
Further Research
+
+
What features other than edge detectors have been used in computer vision (especially before deep learning became popular)?
+
There are other normalization layers available in PyTorch. Try them out and see what works best. Learn about why other normalization layers have been developed, and how they differ from batch normalization.
+
Try moving the activation function after the batch normalization layer in conv. Does it make a difference? See what you can find out about what order is recommended, and why.
+
+
17 A Neural Net from the Foundations
+
This chapter begins a journey where we will dig deep into the internals of the models we used in the previous chapters. We will be covering many of the same things we’ve seen before, but this time around we’ll be looking much more closely at the implementation details, and much less closely at the practical issues of how and why things are as they are.
+
We will build everything from scratch, only using basic indexing into a tensor. We’ll write a neural net from the ground up, then implement backpropagation manually, so we know exactly what’s happening in PyTorch when we call loss.backward. We’ll also see how to extend PyTorch with custom autograd functions that allow us to specify our own forward and backward computations.
+
+
17.1 Building a Neural Net Layer from Scratch
+
Let’s start by refreshing our understanding of how matrix multiplication is used in a basic neural network. Since we’re building everything up from scratch, we’ll use nothing but plain Python initially (except for indexing into PyTorch tensors), and then replace the plain Python with PyTorch functionality once we’ve seen how to create it.
+
+
Modeling a Neuron
+
A neuron receives a given number of inputs and has an internal weight for each of them. It multiplies each input by its corresponding weight, sums them, and adds a bias to produce an output. In math, this can be written as:
+
\[ out = \sum_{i=1}^{n} x_{i} w_{i} + b\]
+
if we name our inputs \((x_{1},\dots,x_{n})\), our weights \((w_{1},\dots,w_{n})\), and our bias \(b\). In code this translates into:
+
output = sum([x*w for x,w in zip(inputs,weights)]) + bias
+
This output is then fed into a nonlinear function called an activation function before being sent to another neuron. In deep learning the most common of these is the rectified linear unit, or ReLU, which, as we’ve seen, is a fancy way of saying:
+
def relu(x): return x if x >= 0 else 0
+
A deep learning model is then built by stacking a lot of those neurons in successive layers. We create a first layer with a certain number of neurons (known as hidden size) and link all the inputs to each of those neurons. Such a layer is often called a fully connected layer or a dense layer (for densely connected), or a linear layer.
+
It requires computing, for each input in our batch and each neuron with a given weight, the dot product:
+
sum([x*w for x,w in zip(input,weight)])
+
If you have done a little bit of linear algebra, you may remember that having a lot of those dot products happens when you do a matrix multiplication. More precisely, if our inputs are in a matrix x with a size of batch_size by n_inputs, and if we have grouped the weights of our neurons in a matrix w of size n_neurons by n_inputs (each neuron must have the same number of weights as it has inputs) and all the biases in a vector b of size n_neurons, then the output of this fully connected layer is:
+
y = x @ w.t() + b
+
where @ represents the matrix product and w.t() is the transpose matrix of w. The output y is then of size batch_size by n_neurons, and in position (i,j) we have (for the mathy folks out there):
y[i,j] = sum([a * b for a,b in zip(x[i,:],w[j,:])]) + b[j]
+
The transpose is necessary because in the mathematical definition of the matrix product m @ n, the coefficient (i,j) is:
+
sum([a * b for a,b in zip(m[i,:],n[:,j])])
+
So the very basic operation we need is a matrix multiplication, as it’s what is hidden in the core of a neural net.
+
+
+
Matrix Multiplication from Scratch
+
Let’s write a function that computes the matrix product of two tensors, before we allow ourselves to use the PyTorch version of it. We will only use indexing into PyTorch tensors:
+
+
import torch
+from torch import tensor
+
+
We’ll need three nested for loops: one for the row indices, one for the column indices, and one for the inner sum. ac and ar stand for number of columns of a and number of rows of a, respectively (the same convention is followed for b), and we make sure calculating the matrix product is possible by checking that a has as many columns as b has rows:
+
+
def matmul(a,b):
+    ar,ac = a.shape # n_rows * n_cols
+    br,bc = b.shape
+    assert ac==br
+    c = torch.zeros(ar, bc)
+    for i in range(ar):
+        for j in range(bc):
+            for k in range(ac): c[i,j] += a[i,k] * b[k,j]
+    return c
+
+
To test this out, we’ll pretend (using random matrices) that we’re working with a small batch of 5 MNIST images, flattened into 28×28 vectors, with a linear model to turn them into 10 activations:
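A minimal setup consistent with that description (m1 and m2 are the names used in the timing cells below; the values are just random):
+
m1 = torch.randn(5, 28*28)   # 5 flattened 28×28 "images"
+m2 = torch.randn(28*28, 10)  # weights mapping 784 inputs to 10 activations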
Let’s time our function, using the Jupyter “magic” command %time:
+
+
%time t1=matmul(m1, m2)
+
+
CPU times: user 211 ms, sys: 504 µs, total: 212 ms
+Wall time: 211 ms
+
+
+
And see how that compares to PyTorch’s built-in @:
+
+
%timeit -n 20 t2=m1@m2
+
+
The slowest run took 9.18 times longer than the fastest. This could mean that an intermediate result is being cached.
+8.15 µs ± 9.07 µs per loop (mean ± std. dev. of 7 runs, 20 loops each)
+
+
+
As we can see, in Python three nested loops is a very bad idea! Python is a slow language, and this isn’t going to be very efficient. We see here that PyTorch is roughly 25,000 times faster than plain Python on these timings—and that’s before we even start using the GPU!
+
Where does this difference come from? PyTorch didn’t write its matrix multiplication in Python, but rather in C++ to make it fast. In general, whenever we do computations on tensors we will need to vectorize them so that we can take advantage of the speed of PyTorch, usually by using two techniques: elementwise arithmetic and broadcasting.
+
+
+
Elementwise Arithmetic
+
All the basic operators (+, -, *, /, >, <, ==) can be applied elementwise. That means if we write a+b for two tensors a and b that have the same shape, we will get a tensor composed of the sums of the elements of a and b:
+
+
a = tensor([10., 6, -4])
+b = tensor([2., 8, 7])
+a + b
+
+
tensor([12., 14., 3.])
+
+
+
The Boolean operators will return an array of Booleans:
+
+
a < b
+
+
tensor([False, True, True])
+
+
+
If we want to know if every element of a is less than the corresponding element in b, or if two tensors are equal, we need to combine those elementwise operations with torch.all:
+
+
(a < b).all(), (a==b).all()
+
+
(tensor(False), tensor(False))
+
+
+
Reduction operations like all(), sum() and mean() return tensors with only one element, called rank-0 tensors. If you want to convert this to a plain Python Boolean or number, you need to call .item():
+
+
(a + b).mean().item()
+
+
9.666666984558105
+
+
+
The elementwise operations work on tensors of any rank, as long as they have the same shape:
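For instance, multiplying a 3×3 matrix by itself elementwise (the same m we will reuse below):
+
m = tensor([[1., 2, 3], [4,5,6], [7,8,9]])
+m*m
+
tensor([[ 1.,  4.,  9.],
+        [16., 25., 36.],
+        [49., 64., 81.]])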
However, you can’t perform elementwise operations on tensors that don’t have the same shape (unless they are broadcastable, as discussed in the next section):
+
n = tensor([[1., 2, 3], [4,5,6]])
+m*n
+
---------------------------------------------------------------------------
+RuntimeError Traceback (most recent call last)
+Cell In [14], line 2
+      1 n = tensor([[1., 2, 3], [4,5,6]])
+----> 2 m*n
+
+RuntimeError: The size of tensor a (3) must match the size of tensor b (2) at non-singleton dimension 0
+
With elementwise arithmetic, we can remove one of our three nested loops: we can multiply the tensors that correspond to the i-th row of a and the j-th column of b before summing all the elements, which will speed things up because the inner loop will now be executed by PyTorch at C speed.
+
To access one column or row, we can simply write a[i,:] or b[:,j]. The : means take everything in that dimension. We could restrict this and take only a slice of that particular dimension by passing a range, like 1:5, instead of just :. In that case, we would take the elements in columns or rows 1 to 4 (the second number is noninclusive).
+
One simplification is that we can always omit a trailing colon, so a[i,:] can be abbreviated to a[i]. With all of that in mind, we can write a new version of our matrix multiplication:
+
+
def matmul(a,b):
+    ar,ac = a.shape
+    br,bc = b.shape
+    assert ac==br
+    c = torch.zeros(ar, bc)
+    for i in range(ar):
+        for j in range(bc): c[i,j] = (a[i] * b[:,j]).sum()
+    return c
+
+
+
%timeit -n 20 t3 = matmul(m1,m2)
+
+
257 µs ± 25.4 µs per loop (mean ± std. dev. of 7 runs, 20 loops each)
+
+
+
We’re already ~700 times faster, just by removing that inner for loop! And that’s just the beginning—with broadcasting we can remove another loop and get an even more important speed up.
+
+
+
Broadcasting
+
As we discussed in Chapter 4, broadcasting is a term introduced by the NumPy library that describes how tensors of different ranks are treated during arithmetic operations. For instance, it’s obvious there is no way to add a 3×3 matrix with a 4×5 matrix, but what if we want to add one scalar (which can be represented as a 1×1 tensor) with a matrix? Or a vector of size 3 with a 3×4 matrix? In both cases, we can find a way to make sense of this operation.
+
Broadcasting gives specific rules to codify when shapes are compatible when trying to do an elementwise operation, and how the tensor of the smaller shape is expanded to match the tensor of the bigger shape. It’s essential to master those rules if you want to be able to write code that executes quickly. In this section, we’ll expand our previous treatment of broadcasting to understand these rules.
+
+
Broadcasting with a scalar
+
Broadcasting with a scalar is the easiest type of broadcasting. When we have a tensor a and a scalar, we just imagine a tensor of the same shape as a filled with that scalar and perform the operation:
+
+
a = tensor([10., 6, -4])
+a >0
+
+
tensor([ True, True, False])
+
+
+
How are we able to do this comparison? 0 is being broadcast to have the same dimensions as a. Note that this is done without creating a tensor full of zeros in memory (that would be very inefficient).
+
This is very useful if you want to normalize your dataset by subtracting the mean (a scalar) from the entire data set (a matrix) and dividing by the standard deviation (another scalar):
+
+
m = tensor([[1., 2, 3], [4,5,6], [7,8,9]])
+(m - 5) / 2.73
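Broadcasting a vector to a matrix
+
We can also broadcast a vector to a matrix. For example, assuming a three-element vector c and the 3×3 matrix m used in the rest of this section:
+
c = tensor([10.,20,30])
+m = tensor([[1., 2, 3], [4,5,6], [7,8,9]])
+m + c
+
tensor([[11., 22., 33.],
+        [14., 25., 36.],
+        [17., 28., 39.]])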
Here the elements of c are expanded to make three rows that match, making the operation possible. Again, PyTorch doesn’t actually create three copies of c in memory. This is done by the expand_as method behind the scenes:
If we look at the corresponding tensor, we can ask for its storage property (which shows the actual contents of the memory used for the tensor) to check there is no useless data stored:
+
+
t = c.expand_as(m)
+t.storage()
+
+
10.0
+ 20.0
+ 30.0
+[torch.storage._TypedStorage(dtype=torch.float32, device=cpu) of size 3]
+
+
+
Even though the tensor officially has nine elements, only three scalars are stored in memory. This is possible thanks to the clever trick of giving that dimension a stride of 0 (which means that when PyTorch looks for the next row by adding the stride, it doesn’t move):
+
+
t.stride(), t.shape
+
+
((0, 1), torch.Size([3, 3]))
+
+
+
Since m is of size 3×3, there are two ways to do broadcasting. The fact it was done on the last dimension is a convention that comes from the rules of broadcasting and has nothing to do with the way we ordered our tensors. If instead we do this, we get the same result:
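For instance, writing the operands in the other order gives exactly the same result:
+
c + m
+
tensor([[11., 22., 33.],
+        [14., 25., 36.],
+        [17., 28., 39.]])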
In fact, it’s only possible to broadcast a vector of size n with a matrix of size m by n:
+
+
c = tensor([10.,20,30])
+m = tensor([[1., 2, 3], [4,5,6]])
+c+m
+
+
tensor([[11., 22., 33.],
+ [14., 25., 36.]])
+
+
+
This won’t work:
+
c = tensor([10.,20])
+c+tensor([[1., 2, 3], [4,5,6]])
+
---------------------------------------------------------------------------
+RuntimeError Traceback (most recent call last)
+Cell In [26], line 3
+ 1 c = tensor([10.,20])
+ 2 m = tensor([[1., 2, 3], [4,5,6]])
+----> 3 c+m
+
+RuntimeError: The size of tensor a (2) must match the size of tensor b (3) at non-singleton dimension 1
+
If we want to broadcast in the other dimension, we have to change the shape of our vector to make it a 3×1 matrix. This is done with the unsqueeze method in PyTorch:
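For example, reusing the 3×3 matrix m and the three-element vector c from before:
+
c = tensor([10.,20,30])
+m = tensor([[1., 2, 3], [4,5,6], [7,8,9]])
+c = c.unsqueeze(1)
+m.shape,c.shape
+
(torch.Size([3, 3]), torch.Size([3, 1]))
+
This time, c is broadcast column by column:
+
c + m
+
tensor([[11., 12., 13.],
+        [24., 25., 26.],
+        [37., 38., 39.]])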
Like before, only three scalars are stored in memory:
+
+
t = c.expand_as(m)
+t.storage()
+
+
10.0
+ 20.0
+ 30.0
+[torch.storage._TypedStorage(dtype=torch.float32, device=cpu) of size 3]
+
+
+
And the expanded tensor has the right shape because the column dimension has a stride of 0:
+
+
t.stride(), t.shape
+
+
((1, 0), torch.Size([3, 3]))
+
+
+
With broadcasting, by default if we need to add dimensions, they are added at the beginning. When we were broadcasting before, PyTorch was doing c.unsqueeze(0) behind the scenes:
+
+
c = tensor([10.,20,30])
+c.shape, c.unsqueeze(0).shape,c.unsqueeze(1).shape
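This would show the following shapes:
+
(torch.Size([3]), torch.Size([1, 3]), torch.Size([3, 1]))
+
The unsqueeze command can be replaced by None indexing, which inserts a dimension of size 1 at that position and gives the same shapes:
+
c.shape, c[None,:].shape, c[:,None].shape
+
(torch.Size([3]), torch.Size([1, 3]), torch.Size([3, 1]))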
You can always omit trailing colons, and ... means all preceding dimensions:
+
+
c[None].shape,c[...,None].shape
+
+
(torch.Size([1, 3]), torch.Size([3, 1]))
+
+
+
With this, we can remove another for loop in our matrix multiplication function. Now, instead of multiplying a[i] with b[:,j], we can multiply a[i] with the whole matrix b using broadcasting, then sum the results:
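Here is a version along those lines (a sketch consistent with that description; t4 is just a name for the timing variable):
+
def matmul(a,b):
+    ar,ac = a.shape
+    br,bc = b.shape
+    assert ac==br
+    c = torch.zeros(ar, bc)
+    # broadcast row a[i] (shape ac×1) against all of b, then sum over b's rows
+    for i in range(ar): c[i] = (a[i].unsqueeze(-1) * b).sum(dim=0)
+    return c
+
+
%timeit -n 20 t4 = matmul(m1,m2)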
43.9 µs ± 8.28 µs per loop (mean ± std. dev. of 7 runs, 20 loops each)
+
+
+
We’re now 3,700 times faster than our first implementation! Before we move on, let’s discuss the rules of broadcasting in a little more detail.
+
+
+
Broadcasting rules
+
When operating on two tensors, PyTorch compares their shapes elementwise. It starts with the trailing dimensions and works its way backward, adding 1 when it meets empty dimensions. Two dimensions are compatible when one of the following is true:
+
+
They are equal.
+
One of them is 1, in which case that dimension is broadcast to make it the same as the other.
+
+
Arrays do not need to have the same number of dimensions. For example, if you have a 256×256×3 array of RGB values, and you want to scale each color in the image by a different value, you can multiply the image by a one-dimensional array with three values. Lining up the sizes of the trailing axes of these arrays according to the broadcast rules shows that they are compatible:
+
Image (3d tensor): 256 x 256 x 3
+Scale (1d tensor): (1) (1) 3
+Result (3d tensor): 256 x 256 x 3
+
However, a 2D tensor of size 256×256 isn’t compatible with our image:
+
Image (3d tensor): 256 x 256 x 3
+Scale (2d tensor): (1) 256 x 256
+Error
+
In our earlier example with a 3×3 matrix and a vector of size 3, broadcasting was done on the rows:
+
Matrix (2d tensor): 3 x 3
+Vector (1d tensor): (1) 3
+Result (2d tensor): 3 x 3
+
As an exercise, try to determine what dimensions to add (and where) when you need to normalize a batch of images of size 64 x 3 x 256 x 256 with vectors of three elements (one for the mean and one for the standard deviation).
+
Another useful way of simplifying tensor manipulations is the use of Einstein summations convention.
+
+
+
+
Einstein Summation
+
Before using the PyTorch operation @ or torch.matmul, there is one last way we can implement matrix multiplication: Einstein summation (einsum). This is a compact representation for combining products and sums in a general way. We write an equation like this:
+
ik,kj -> ij
+
The lefthand side represents the operands’ dimensions, separated by commas. Here we have two tensors that each have two dimensions (i,k and k,j). The righthand side represents the result dimensions, so here we have a tensor with two dimensions i,j.
+
The rules of Einstein summation notation are as follows:
+
+
Repeated indices on the left side are implicitly summed over if they are not on the right side.
+
Each index can appear at most twice on the left side.
+
The unrepeated indices on the left side must appear on the right side.
+
+
So in our example, since k is repeated, we sum over that index. In the end the formula represents the matrix obtained when we put in (i,j) the sum of all the coefficients (i,k) in the first tensor multiplied by the coefficients (k,j) in the second tensor… which is the matrix product! Here is how we can code this in PyTorch:
+
+
def matmul(a,b): return torch.einsum('ik,kj->ij', a, b)
+
+
Einstein summation is a very practical way of expressing operations involving indexing and sum of products. Note that you can have just one member on the lefthand side. For instance, this:
+
torch.einsum('ij->ji', a)
+
returns the transpose of the matrix a. You can also have three or more members. This:
+
torch.einsum('bi,ij,bj->b', a, b, c)
+
will return a vector of size b where the k-th coordinate is the sum of a[k,i] b[i,j] c[k,j]. This notation is particularly convenient when you have more dimensions because of batches. For example, if you have two batches of matrices and want to compute the matrix product per batch, you could use this:
+
torch.einsum('bik,bkj->bij', a, b)
+
Let’s go back to our new matmul implementation using einsum and look at its speed:
+
+
%timeit -n 20 t5 = matmul(m1,m2)
+
+
19 µs ± 13.2 µs per loop (mean ± std. dev. of 7 runs, 20 loops each)
+
+
+
As you can see, not only is it practical, but it’s very fast. einsum is often the fastest way to do custom operations in PyTorch, without diving into C++ and CUDA. (But it’s generally not as fast as carefully optimized CUDA code, as you see from the results in “Matrix Multiplication from Scratch”.)
+
Now that we know how to implement a matrix multiplication from scratch, we are ready to build our neural net—specifically its forward and backward passes—using just matrix multiplications.
+
+
+
+
17.2 The Forward and Backward Passes
+
As we saw in Chapter 4, to train a model, we will need to compute all the gradients of a given loss with respect to its parameters, which is known as the backward pass. The forward pass is where we compute the output of the model on a given input, based on the matrix products. As we define our first neural net, we will also delve into the problem of properly initializing the weights, which is crucial for making training start properly.
+
+
Defining and Initializing a Layer
+
We will take the example of a two-layer neural net first. As we’ve seen, one layer can be expressed as y = x @ w + b, with x our inputs, y our outputs, w the weights of the layer (which is of size number of inputs by number of neurons if we don’t transpose like before), and b is the bias vector:
+
+
def lin(x, w, b): return x @ w + b
+
+
We can stack the second layer on top of the first, but since mathematically the composition of two linear operations is another linear operation, this only makes sense if we put something nonlinear in the middle, called an activation function. As mentioned at the beginning of the chapter, in deep learning applications the activation function most commonly used is a ReLU, which returns the maximum of x and 0.
+
We won’t actually train our model in this chapter, so we’ll use random tensors for our inputs and targets. Let’s say our inputs are 200 vectors of size 100, which we group into one batch, and our targets are 200 random floats:
+
+
x = torch.randn(200, 100)
+y = torch.randn(200)
+
+
For our two-layer model we will need two weight matrices and two bias vectors. Let’s say we have a hidden size of 50 and the output size is 1 (for one of our inputs, the corresponding output is one float in this toy example). We initialize the weights randomly and the bias at zero:
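A minimal version of that setup (the names match the code that follows; the sizes come from the description above):
+
w1 = torch.randn(100,50)
+b1 = torch.zeros(50)
+w2 = torch.randn(50,1)
+b2 = torch.zeros(1)
+
The result of the first layer is then:
+
l1 = lin(x, w1, b1)
+l1.shape
+
torch.Size([200, 50])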
Note that this formula works with our batch of inputs, and returns a batch of hidden state: l1 is a matrix of size 200 (our batch size) by 50 (our hidden size).
+
There is a problem with the way our model was initialized, however. To understand it, we need to look at the mean and standard deviation (std) of l1:
+
+
l1.mean(), l1.std()
+
+
(tensor(0.1381), tensor(9.8549))
+
+
+
The mean is close to zero, which is understandable since both our input and weight matrices have means close to zero. But the standard deviation, which represents how far away our activations go from the mean, went from 1 to 10. This is a really big problem because that’s with just one layer. Modern neural nets can have hundreds of layers, so if each of them multiplies the scale of our activations by 10, by the end of the last layer we won’t have numbers representable by a computer.
+
Indeed, if we make just 50 multiplications between x and random matrices of size 100×100, we’ll have:
+
+
x = torch.randn(200, 100)
+for i in range(50): x = x @ torch.randn(100,100)
+x[0:5,0:5]
The result is nans everywhere. So maybe the scale of our matrix was too big, and we need to have smaller weights? But if we use too small weights, we will have the opposite problem—the scale of our activations will go from 1 to 0.1, and after 50 layers we’ll be left with zeros everywhere:
+
+
x = torch.randn(200, 100)
+for i in range(50): x = x @ (torch.randn(100,100) * 0.01)
+x[0:5,0:5]
So we have to scale our weight matrices exactly right so that the standard deviation of our activations stays at 1. We can compute the exact value to use mathematically, as illustrated by Xavier Glorot and Yoshua Bengio in “Understanding the Difficulty of Training Deep Feedforward Neural Networks”. The right scale for a given layer is \(1/\sqrt{n_{in}}\), where \(n_{in}\) represents the number of inputs.
+
In our case, if we have 100 inputs, we should scale our weight matrices by 0.1:
+
+
x = torch.randn(200, 100)
+for i in range(50): x = x @ (torch.randn(100,100) * 0.1)
+x[0:5,0:5]
Finally some numbers that are neither zeros nor nans! Notice how stable the scale of our activations is, even after those 50 fake layers:
+
+
x.std()
+
+
tensor(1.5285)
+
+
+
If you play a little bit with the value for scale you’ll notice that even a slight variation from 0.1 will get you either to very small or very large numbers, so initializing the weights properly is extremely important.
+
Let’s go back to our neural net. Since we messed a bit with our inputs, we need to redefine them:
+
+
x = torch.randn(200, 100)
+y = torch.randn(200)
+
+
And for our weights, we’ll use the right scale, which is known as Xavier initialization (or Glorot initialization):
+
+
from math import sqrt
+w1 = torch.randn(100,50) / sqrt(100)
+b1 = torch.zeros(50)
+w2 = torch.randn(50,1) / sqrt(50)
+b2 = torch.zeros(1)
+
+
Now if we compute the result of the first layer, we can check that the mean and standard deviation are under control:
+
+
l1 = lin(x, w1, b1)
+l1.mean(),l1.std()
+
+
(tensor(-0.0067), tensor(0.9904))
+
+
+
Very good. Now we need to go through a ReLU, so let’s define one. A ReLU removes the negatives and replaces them with zeros, which is another way of saying it clamps our tensor at zero:
+
+
def relu(x): return x.clamp_min(0.)
+
+
We pass our activations through this:
+
+
l2 = relu(l1)
+l2.mean(),l2.std()
+
+
(tensor(0.3906), tensor(0.5769))
+
+
+
And we’re back to square one: the mean of our activations has gone to 0.4 (which is understandable since we removed the negatives) and the std went down to 0.58. So like before, after a few layers we will probably wind up with zeros:
+
+
x = torch.randn(200, 100)
+for i in range(50): x = relu(x @ (torch.randn(100,100) * 0.1))
+x[0:5,0:5]
This means our initialization wasn’t right. Why? At the time Glorot and Bengio wrote their article, the popular activation in a neural net was the hyperbolic tangent (tanh, which is the one they used), and that initialization doesn’t account for our ReLU. Fortunately, someone else has done the math for us and computed the right scale for us to use. In “Delving Deep into Rectifiers: Surpassing Human-Level Performance” (which we’ve seen before—it’s the article that introduced the ResNet), Kaiming He et al. show that we should use the following scale instead: \(\sqrt{2 / n_{in}}\), where \(n_{in}\) is the number of inputs of our model. Let’s see what this gives us:
+
+
x = torch.randn(200, 100)
+for i in range(50): x = relu(x @ (torch.randn(100,100) * sqrt(2/100)))
+x[0:5,0:5]
That’s better: our numbers aren’t all zeroed this time. So let’s go back to the definition of our neural net and use this initialization (which is named Kaiming initialization or He initialization):
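Here is a sketch of that redefinition, using the same sizes as earlier in the chapter:
+
w1 = torch.randn(100,50) * sqrt(2 / 100)
+b1 = torch.zeros(50)
+w2 = torch.randn(50,1) * sqrt(2 / 50)
+b2 = torch.zeros(1)
+
+def model(x):
+    l1 = lin(x, w1, b1)
+    l2 = relu(l1)
+    l3 = lin(l2, w2, b2)
+    return l3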
This is the forward pass. Now all that’s left to do is to compare our output to the labels we have (random numbers, in this example) with a loss function. In this case, we will use the mean squared error. (It’s a toy problem, and this is the easiest loss function to use for what is next, computing the gradients.)
+
The only subtlety is that our outputs and targets don’t have exactly the same shape—after going through the model, we get an output like this:
+
+
out = model(x)
+out.shape
+
+
torch.Size([200, 1])
+
+
+
To get rid of this trailing 1 dimension, we use the squeeze function:
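For instance, a mean squared error that squeezes that dimension away (a minimal sketch; this is the mse used in the code that follows):
+
def mse(output, targ): return (output.squeeze(-1) - targ).pow(2).mean()
+
+
loss = mse(out, y)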
That’s all for the forward pass—let’s now look at the gradients.
+
+
+
Gradients and the Backward Pass
+
We’ve seen that PyTorch computes all the gradients we need with a magic call to loss.backward, but let’s explore what’s happening behind the scenes.
+
Now comes the part where we need to compute the gradients of the loss with respect to all the weights of our model, so all the floats in w1, b1, w2, and b2. For this, we will need a bit of math—specifically the chain rule. This is the rule of calculus that guides how we can compute the derivative of a composed function:
+
\[(g \circ f)'(x) = g'(f(x)) f'(x)\]
+
+
J: I find this notation very hard to wrap my head around, so instead I like to think of it as: if y = g(u) and u = f(x), then dy/dx = dy/du * du/dx. The two notations mean the same thing, so use whatever works for you.
+
+
Our loss is a big composition of different functions: mean squared error (which is in turn the composition of a mean and a power of two), the second linear layer, a ReLU and the first linear layer. For instance, if we want the gradients of the loss with respect to b2 and our loss is defined by:
+
loss = mse(out,y) = mse(lin(l2, w2, b2), y)
+
The chain rule tells us that we have: \[\frac{\text{d} loss}{\text{d} b_{2}} = \frac{\text{d} loss}{\text{d} out} \times \frac{\text{d} out}{\text{d} b_{2}} = \frac{\text{d}}{\text{d} out} mse(out, y) \times \frac{\text{d}}{\text{d} b_{2}} lin(l_{2}, w_{2}, b_{2})\]
+
To compute the gradients of the loss with respect to \(b_{2}\), we first need the gradients of the loss with respect to our output \(out\). It’s the same if we want the gradients of the loss with respect to \(w_{2}\). Then, to get the gradients of the loss with respect to \(b_{1}\) or \(w_{1}\), we will need the gradients of the loss with respect to \(l_{1}\), which in turn requires the gradients of the loss with respect to \(l_{2}\), which will need the gradients of the loss with respect to \(out\).
+
So to compute all the gradients we need for the update, we need to begin from the output of the model and work our way backward, one layer after the other—which is why this step is known as backpropagation. We can automate it by having each function we implemented (relu, mse, lin) provide its backward step: that is, how to derive the gradients of the loss with respect to the input(s) from the gradients of the loss with respect to the output.
+
Here we populate those gradients in an attribute of each tensor, a bit like PyTorch does with .grad.
+
The first are the gradients of the loss with respect to the output of our model (which is the input of the loss function). We undo the squeeze we did in mse, then we use the formula that gives us the derivative of \(x^{2}\): \(2x\). The derivative of the mean is just \(1/n\) where \(n\) is the number of elements in our input:
+
+
def mse_grad(inp, targ):
+    # grad of loss with respect to output of previous layer
+    inp.g = 2. * (inp.squeeze() - targ).unsqueeze(-1) / inp.shape[0]
+
+
For the gradients of the ReLU and our linear layer, we use the gradients of the loss with respect to the output (in out.g) and apply the chain rule to compute the gradients of the loss with respect to the input (in inp.g). The chain rule tells us that inp.g = relu'(inp) * out.g. The derivative of relu is either 0 (when inputs are negative) or 1 (when inputs are positive), so this gives us:
+
+
def relu_grad(inp, out):
+# grad of relu with respect to input activations
+ inp.g = (inp>0).float() * out.g
+
+
The scheme is the same to compute the gradients of the loss with respect to the inputs, weights, and bias in the linear layer:
+
+
def lin_grad(inp, out, w, b):
+# grad of matmul with respect to input
+ inp.g = out.g @ w.t()
+ w.g = inp.t() @ out.g
+ b.g = out.g.sum(0)
+
+
We won’t linger on the mathematical formulas that define them since they’re not important for our purposes, but do check out Khan Academy’s excellent calculus lessons if you’re interested in this topic.
+
+
+
Sidebar: SymPy
+
SymPy is a library for symbolic computation that is extremely useful when working with calculus. Per the documentation:
+
+
Symbolic computation deals with the computation of mathematical objects symbolically. This means that the mathematical objects are represented exactly, not approximately, and mathematical expressions with unevaluated variables are left in symbolic form.
+
+
To do symbolic computation, we first define a symbol, and then do a computation, like so:
+
+
from sympy import symbols,diff
+sx,sy = symbols('sx sy')
+diff(sx**2, sx)
+
+
\(\displaystyle 2 sx\)
+
+
+
Here, SymPy has taken the derivative of x**2 for us! It can take the derivative of complicated compound expressions, simplify and factor equations, and much more. There’s really not much reason for anyone to do calculus manually nowadays—for calculating gradients, PyTorch does it for us, and for showing the equations, SymPy does it for us!
+
+
+
End sidebar
+
Once we have defined those functions, we can use them to write the backward pass. Since each gradient is automatically populated in the right tensor, we don’t need to store the results of those _grad functions anywhere—we just need to execute them in the reverse order of the forward pass, to make sure that in each function out.g exists:
+
+
def forward_and_backward(inp, targ):
+    # forward pass:
+    l1 = inp @ w1 + b1
+    l2 = relu(l1)
+    out = l2 @ w2 + b2
+    # we don't actually need the loss in backward!
+    loss = mse(out, targ)
+
+    # backward pass:
+    mse_grad(out, targ)
+    lin_grad(l2, out, w2, b2)
+    relu_grad(l1, l2)
+    lin_grad(inp, l1, w1, b1)
+
+
And now we can access the gradients of our model parameters in w1.g, b1.g, w2.g, and b2.g.
+
We have successfully defined our model—now let’s make it a bit more like a PyTorch module.
+
+
+
Refactoring the Model
+
The three functions we used have two associated functions: a forward pass and a backward pass. Instead of writing them separately, we can create a class to wrap them together. That class can also store the inputs and outputs for the backward pass. This way, we will just have to call backward:
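Here is such a class for ReLU, written as a sketch consistent with the relu and relu_grad functions above:
+
class Relu():
+    def __call__(self, inp):
+        self.inp = inp                    # stored for the backward pass
+        self.out = inp.clamp_min(0.)
+        return self.out
+
+    def backward(self): self.inp.g = (self.inp>0).float() * self.out.g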
__call__ is a magic name in Python that will make our class callable. This is what will be executed when we type y = Relu()(x). We can do the same for our linear layer and the MSE loss:
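For example, minimal sketches that mirror lin_grad and mse_grad from earlier:
+
class Lin():
+    def __init__(self, w, b): self.w,self.b = w,b
+
+    def __call__(self, inp):
+        self.inp = inp
+        self.out = inp @ self.w + self.b
+        return self.out
+
+    def backward(self):
+        self.inp.g = self.out.g @ self.w.t()
+        self.w.g = self.inp.t() @ self.out.g
+        self.b.g = self.out.g.sum(0)
+
+class Mse():
+    def __call__(self, inp, targ):
+        self.inp,self.targ = inp,targ
+        self.out = (inp.squeeze() - targ).pow(2).mean()
+        return self.out
+
+    def backward(self):
+        self.inp.g = 2. * (self.inp.squeeze() - self.targ).unsqueeze(-1) / self.targ.shape[0]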
Then we can put everything in a model that we initialize with our tensors w1, b1, w2, b2:
+
+
class Model():
+    def __init__(self, w1, b1, w2, b2):
+        self.layers = [Lin(w1,b1), Relu(), Lin(w2,b2)]
+        self.loss = Mse()
+
+    def __call__(self, x, targ):
+        for l in self.layers: x = l(x)
+        return self.loss(x, targ)
+
+    def backward(self):
+        self.loss.backward()
+        for l in reversed(self.layers): l.backward()
+
+
What is really nice about this refactoring and registering things as layers of our model is that the forward and backward passes are now really easy to write. If we want to instantiate our model, we just need to write:
+
+
model = Model(w1, b1, w2, b2)
+
+
The forward pass can then be executed with:
+
+
loss = model(x, y)
+
+
And the backward pass with:
+
+
model.backward()
+
+
+
+
Going to PyTorch
+
The Lin, Mse and Relu classes we wrote have a lot in common, so we could make them all inherit from the same base class:
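One possible base class looks like this (a sketch; each subclass would then only have to define forward and bwd):
+
class LayerFunction():
+    def __call__(self, *args):
+        self.args = args
+        self.out = self.forward(*args)
+        return self.out
+
+    def forward(self):  raise Exception('not implemented')
+    def bwd(self):      raise Exception('not implemented')
+    def backward(self): self.bwd(self.out, *self.args)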
The rest of our model can be the same as before. This is getting closer and closer to what PyTorch does. Each basic function we need to differentiate is written as a torch.autograd.Function object that has a forward and a backward method. PyTorch will then keep track of any computation we do, to be able to properly run the backward pass, unless we set the requires_grad attribute of our tensors to False.
+
Writing one of these is (almost) as easy as writing our original classes. The difference is that we choose what to save and what to put in a context variable (so that we make sure we don’t save anything we don’t need), and we return the gradients in the backward pass. It’s very rare to have to write your own Function but if you ever need something exotic or want to mess with the gradients of a regular function, here is how to write one:
+
+
from torch.autograd import Function
+
+class MyRelu(Function):
+    @staticmethod
+    def forward(ctx, i):
+        result = i.clamp_min(0.)
+        ctx.save_for_backward(i)
+        return result
+
+    @staticmethod
+    def backward(ctx, grad_output):
+        i, = ctx.saved_tensors
+        return grad_output * (i>0).float()
+
+
The structure used to build a more complex model that takes advantage of those Functions is a torch.nn.Module. This is the base structure for all models, and all the neural nets you have seen up until now inherited from that class. It mostly helps to register all the trainable parameters, which as we’ve seen can be used in the training loop.
+
To implement an nn.Module you just need to:
+
+
Make sure the superclass __init__ is called first when you initialize it.
+
Define any parameters of the model as attributes with nn.Parameter.
+
Define a forward function that returns the output of your model.
+
+
As an example, here is the linear layer from scratch:
+
+
import torch.nn as nn
+
+class LinearLayer(nn.Module):
+    def __init__(self, n_in, n_out):
+        super().__init__()
+        self.weight = nn.Parameter(torch.randn(n_out, n_in) * sqrt(2/n_in))
+        self.bias = nn.Parameter(torch.zeros(n_out))
+
+    def forward(self, x): return x @ self.weight.t() + self.bias
+
+
As you see, this class automatically keeps track of what parameters have been defined:
+
+
lin = LinearLayer(10,2)
+p1,p2 = lin.parameters()
+p1.shape,p2.shape
+
+
(torch.Size([2, 10]), torch.Size([2]))
+
+
+
It is thanks to this feature of nn.Module that we can just say opt.step() and have an optimizer loop through the parameters and update each one.
+
Note that in PyTorch, the weights are stored as an n_out x n_in matrix, which is why we have the transpose in the forward pass.
+
By using the linear layer from PyTorch (which uses the Kaiming initialization as well), the model we have been building up during this chapter can be written like this:
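Here is a sketch of that model as an nn.Module (the sizes and the mse helper are the ones used earlier in the chapter):
+
class Model(nn.Module):
+    def __init__(self, n_in, nh, n_out):
+        super().__init__()
+        self.layers = nn.Sequential(
+            nn.Linear(n_in,nh), nn.ReLU(), nn.Linear(nh,n_out))
+        self.loss = mse
+
+    def forward(self, x, targ): return self.loss(self.layers(x).squeeze(), targ)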
In the last chapter, we will start from such a model and see how to build a training loop from scratch and refactor it to what we’ve been using in previous chapters.
+
+
+
+
17.3 Conclusion
+
In this chapter we explored the foundations of deep learning, beginning with matrix multiplication and moving on to implementing the forward and backward passes of a neural net from scratch. We then refactored our code to show how PyTorch works under the hood.
+
Here are a few things to remember:
+
+
A neural net is basically a bunch of matrix multiplications with nonlinearities in between.
+
Python is slow, so to write fast code we have to vectorize it and take advantage of techniques such as elementwise arithmetic and broadcasting.
+
Two tensors are broadcastable if the dimensions starting from the end and going backward match (if they are the same, or one of them is 1). To make tensors broadcastable, we may need to add dimensions of size 1 with unsqueeze or a None index.
+
Properly initializing a neural net is crucial to get training started. Kaiming initialization should be used when we have ReLU nonlinearities.
+
The backward pass is the chain rule applied multiple times, computing the gradients from the output of our model and going back, one layer at a time.
+
When subclassing nn.Module (if not using fastai’s Module) we have to call the superclass __init__ method in our __init__ method and we have to define a forward function that takes an input and returns the desired result.
+
+
+
+
17.4 Questionnaire
+
+
Write the Python code to implement a single neuron.
+
Write the Python code to implement ReLU.
+
Write the Python code for a dense layer in terms of matrix multiplication.
+
Write the Python code for a dense layer in plain Python (that is, with list comprehensions and functionality built into Python).
+
What is the “hidden size” of a layer?
+
What does the t method do in PyTorch?
+
Why is matrix multiplication written in plain Python very slow?
+
In matmul, why is ac==br?
+
In Jupyter Notebook, how do you measure the time taken for a single cell to execute?
+
What is “elementwise arithmetic”?
+
Write the PyTorch code to test whether every element of a is greater than the corresponding element of b.
+
What is a rank-0 tensor? How do you convert it to a plain Python data type?
+
What does this return, and why? tensor([1,2]) + tensor([1])
+
What does this return, and why? tensor([1,2]) + tensor([1,2,3])
+
How does elementwise arithmetic help us speed up matmul?
+
What are the broadcasting rules?
+
What is expand_as? Show an example of how it can be used to match the results of broadcasting.
+
How does unsqueeze help us to solve certain broadcasting problems?
+
How can we use indexing to do the same operation as unsqueeze?
+
How do we show the actual contents of the memory used for a tensor?
+
When adding a vector of size 3 to a matrix of size 3×3, are the elements of the vector added to each row or each column of the matrix? (Be sure to check your answer by running this code in a notebook.)
+
Do broadcasting and expand_as result in increased memory use? Why or why not?
+
Implement matmul using Einstein summation.
+
What does a repeated index letter represent on the left-hand side of einsum?
+
What are the three rules of Einstein summation notation? Why?
+
What are the forward pass and backward pass of a neural network?
+
Why do we need to store some of the activations calculated for intermediate layers in the forward pass?
+
What is the downside of having activations with a standard deviation too far away from 1?
+
How can weight initialization help avoid this problem?
+
What is the formula to initialize weights such that we get a standard deviation of 1 for a plain linear layer, and for a linear layer followed by ReLU?
+
Why do we sometimes have to use the squeeze method in loss functions?
+
What does the argument to the squeeze method do? Why might it be important to include this argument, even though PyTorch does not require it?
+
What is the “chain rule”? Show the equation in either of the two forms presented in this chapter.
+
Show how to calculate the gradients of mse(lin(l2, w2, b2), y) using the chain rule.
+
What is the gradient of ReLU? Show it in math or code. (You shouldn’t need to commit this to memory—try to figure it using your knowledge of the shape of the function.)
+
In what order do we need to call the *_grad functions in the backward pass? Why?
+
What is __call__?
+
What methods must we implement when writing a torch.autograd.Function?
+
Write nn.Linear from scratch, and test it works.
+
What is the difference between nn.Module and fastai’s Module?
+
+
+
Further Research
+
+
Implement ReLU as a torch.autograd.Function and train a model with it.
+
If you are mathematically inclined, find out what the gradients of a linear layer are in mathematical notation. Map that to the implementation we saw in this chapter.
+
Learn about the unfold method in PyTorch, and use it along with matrix multiplication to implement your own 2D convolution function. Then train a CNN that uses it.
+
Implement everything in this chapter using NumPy instead of PyTorch.
+
+
Hello, and thank you for letting us join you on your deep learning journey, however far along that you may be! In this chapter, we will tell you a little bit more about what to expect in this book, introduce the key concepts behind deep learning, and train our first models on different tasks. It doesn’t matter if you don’t come from a technical or a mathematical background (though it’s okay if you do too!); we wrote this book to make deep learning accessible to as many people as possible.
+
+
1.1 Deep Learning Is for Everyone
+
A lot of people assume that you need all kinds of hard-to-find stuff to get great results with deep learning, but as you’ll see in this book, those people are wrong. Table 1.1 is a list of a few things you absolutely don’t need to do world-class deep learning.
+
+
+
+Table 1.1: What you don’t need to do deep learning
+
Myth (don’t need)             Truth
Lots of math                  Just high school math is sufficient
Lots of data                  We’ve seen record-breaking results with <50 items of data
Lots of expensive computers   You can get what you need for state of the art work for free
+
+
+
+
+
+
+
Deep learning is a computer technique to extract and transform data, with use cases ranging from human speech recognition to animal imagery classification, by using multiple layers of neural networks. Each of these layers takes its inputs from previous layers and progressively refines them. The layers are trained by algorithms that minimize their errors and improve their accuracy. In this way, the network learns to perform a specified task. We will discuss training algorithms in detail in the next section.
+
Deep learning has power, flexibility, and simplicity. That’s why we believe it should be applied across many disciplines. These include the social and physical sciences, the arts, medicine, finance, scientific research, and many more. To give a personal example, despite having no background in medicine, Jeremy started Enlitic, a company that uses deep learning algorithms to diagnose illness and disease. Within months of starting the company, it was announced that its algorithm could identify malignant tumors more accurately than radiologists.
+
Here’s a list of some of the thousands of tasks in different areas at which deep learning, or methods heavily using deep learning, is now the best in the world:
+
+
Natural language processing (NLP):: Answering questions; speech recognition; summarizing documents; classifying documents; finding names, dates, etc. in documents; searching for articles mentioning a concept
+
Computer vision:: Satellite and drone imagery interpretation (e.g., for disaster resilience); face recognition; image captioning; reading traffic signs; locating pedestrians and vehicles in autonomous vehicles
+
Medicine:: Finding anomalies in radiology images, including CT, MRI, and X-ray images; counting features in pathology slides; measuring features in ultrasounds; diagnosing diabetic retinopathy
+
Biology:: Folding proteins; classifying proteins; many genomics tasks, such as tumor-normal sequencing and classifying clinically actionable genetic mutations; cell classification; analyzing protein/protein interactions
+
Image generation:: Colorizing images; increasing image resolution; removing noise from images; converting images to art in the style of famous artists
+
Recommendation systems:: Web search; product recommendations; home page layout
+
Playing games:: Chess, Go, most Atari video games, and many real-time strategy games
+
Robotics:: Handling objects that are challenging to locate (e.g., transparent, shiny, lacking texture) or hard to pick up
+
Other applications:: Financial and logistical forecasting, text to speech, and much more…
+
+
What is remarkable is that deep learning has such varied application yet nearly all of deep learning is based on a single type of model, the neural network.
+
But neural networks are not in fact completely new. In order to have a wider perspective on the field, it is worth it to start with a bit of history.
+
+
+
1.2 Neural Networks: A Brief History
+
In 1943 Warren McCulloch, a neurophysiologist, and Walter Pitts, a logician, teamed up to develop a mathematical model of an artificial neuron. In their paper “A Logical Calculus of the Ideas Immanent in Nervous Activity” they declared that:
+
+
Because of the “all-or-none” character of nervous activity, neural events and the relations among them can be treated by means of propositional logic. It is found that the behavior of every net can be described in these terms.
+
+
McCulloch and Pitts realized that a simplified model of a real neuron could be represented using simple addition and thresholding, as shown in Figure 1.1. Pitts was self-taught, and by age 12, had received an offer to study at Cambridge University with the great Bertrand Russell. He did not take up this invitation, and indeed throughout his life did not accept any offers of advanced degrees or positions of authority. Most of his famous work was done while he was homeless. Despite his lack of an officially recognized position and increasing social isolation, his work with McCulloch was influential, and was taken up by a psychologist named Frank Rosenblatt.
+
+
+
+
Rosenblatt further developed the artificial neuron to give it the ability to learn. Even more importantly, he worked on building the first device that actually used these principles, the Mark I Perceptron. In “The Design of an Intelligent Automaton” Rosenblatt wrote about this work: “We are now about to witness the birth of such a machine, a machine capable of perceiving, recognizing and identifying its surroundings without any human training or control.” The perceptron was built, and was able to successfully recognize simple shapes.
+
An MIT professor named Marvin Minsky (who was a grade behind Rosenblatt at the same high school!), along with Seymour Papert, wrote a book called Perceptrons (MIT Press), about Rosenblatt’s invention. They showed that a single layer of these devices was unable to learn some simple but critical mathematical functions (such as XOR). In the same book, they also showed that using multiple layers of the devices would allow these limitations to be addressed. Unfortunately, only the first of these insights was widely recognized. As a result, the global academic community nearly entirely gave up on neural networks for the next two decades.
+
Perhaps the most pivotal work in neural networks in the last 50 years was the multi-volume Parallel Distributed Processing (PDP) by David Rumelhart, James McClelland, and the PDP Research Group, released in 1986 by MIT Press. Chapter 1 lays out a similar hope to that shown by Rosenblatt:
+
+
People are smarter than today’s computers because the brain employs a basic computational architecture that is more suited to deal with a central aspect of the natural information processing tasks that people are so good at. …We will introduce a computational framework for modeling cognitive processes that seems… closer than other frameworks to the style of computation as it might be done by the brain.
+
+
The premise that PDP is using here is that traditional computer programs work very differently to brains, and that might be why computer programs had been (at that point) so bad at doing things that brains find easy (such as recognizing objects in pictures). The authors claimed that the PDP approach was “closer than other frameworks” to how the brain works, and therefore it might be better able to handle these kinds of tasks.
+
In fact, the approach laid out in PDP is very similar to the approach used in today’s neural networks. The book defined parallel distributed processing as requiring:
+
+
A set of processing units
+
A state of activation
+
An output function for each unit
+
A pattern of connectivity among units
+
A propagation rule for propagating patterns of activities through the network of connectivities
+
An activation rule for combining the inputs impinging on a unit with the current state of that unit to produce an output for the unit
+
A learning rule whereby patterns of connectivity are modified by experience
+
An environment within which the system must operate
+
+
We will see in this book that modern neural networks handle each of these requirements.
+
In the 1980s most models were built with a second layer of neurons, thus avoiding the problem that had been identified by Minsky and Papert (this was their “pattern of connectivity among units,” to use the framework above). And indeed, neural networks were widely used during the ’80s and ’90s for real, practical projects. However, again a misunderstanding of the theoretical issues held back the field. In theory, adding just one extra layer of neurons was enough to allow any mathematical function to be approximated with these neural networks, but in practice such networks were often too big and too slow to be useful.
+
Although researchers showed 30 years ago that to get practical good performance you need to use even more layers of neurons, it is only in the last decade that this principle has been more widely appreciated and applied. Neural networks are now finally living up to their potential, thanks to the use of more layers, coupled with the capacity to do so due to improvements in computer hardware, increases in data availability, and algorithmic tweaks that allow neural networks to be trained faster and more easily. We now have what Rosenblatt promised: “a machine capable of perceiving, recognizing, and identifying its surroundings without any human training or control.”
+
This is what you will learn how to build in this book. But first, since we are going to be spending a lot of time together, let’s get to know each other a bit…
+
+
+
1.3 Who We Are
+
We are Sylvain and Jeremy, your guides on this journey. We hope that you will find us well suited for this position.
+
Jeremy has been using and teaching machine learning for around 30 years. He started using neural networks 25 years ago. During this time, he has led many companies and projects that have machine learning at their core, including founding the first company to focus on deep learning and medicine, Enlitic, and taking on the role of President and Chief Scientist of the world’s largest machine learning community, Kaggle. He is the co-founder, along with Dr. Rachel Thomas, of fast.ai, the organization that built the course this book is based on.
+
From time to time you will hear directly from us, in sidebars like this one from Jeremy:
+
+
J: Hi everybody, I’m Jeremy! You might be interested to know that I do not have any formal technical education. I completed a BA, with a major in philosophy, and didn’t have great grades. I was much more interested in doing real projects, rather than theoretical studies, so I worked full time at a management consulting firm called McKinsey and Company throughout my university years. If you’re somebody who would rather get their hands dirty building stuff than spend years learning abstract concepts, then you will understand where I am coming from! Look out for sidebars from me to find information most suited to people with a less mathematical or formal technical background—that is, people like me…
+
+
Sylvain, on the other hand, knows a lot about formal technical education. In fact, he has written 10 math textbooks, covering the entire advanced French maths curriculum!
+
+
S: Unlike Jeremy, I have not spent many years coding and applying machine learning algorithms. Rather, I recently came to the machine learning world, by watching Jeremy’s fast.ai course videos. So, if you are somebody who has not opened a terminal and written commands at the command line, then you will understand where I am coming from! Look out for sidebars from me to find information most suited to people with a more mathematical or formal technical background, but less real-world coding experience—that is, people like me…
+
+
The fast.ai course has been studied by hundreds of thousands of students, from all walks of life, from all parts of the world. Sylvain stood out as the most impressive student of the course that Jeremy had ever seen, which led to him joining fast.ai, and then becoming the coauthor, along with Jeremy, of the fastai software library.
+
All this means that between us you have the best of both worlds: the people who know more about the software than anybody else, because they wrote it; an expert on math, and an expert on coding and machine learning; and also people who understand both what it feels like to be a relative outsider in math, and a relative outsider in coding and machine learning.
+
Anybody who has watched sports knows that if you have a two-person commentary team then you also need a third person to do “special comments.” Our special commentator is Alexis Gallagher. Alexis has a very diverse background: he has been a researcher in mathematical biology, a screenplay writer, an improv performer, a McKinsey consultant (like Jeremy!), a Swift coder, and a CTO.
+
+
A: I’ve decided it’s time for me to learn about this AI stuff! After all, I’ve tried pretty much everything else… But I don’t really have a background in building machine learning models. Still… how hard can it be? I’m going to be learning throughout this book, just like you are. Look out for my sidebars for learning tips that I found helpful on my journey, and hopefully you will find helpful too.
+
+
+
+
1.4 How to Learn Deep Learning
+
Harvard professor David Perkins, who wrote Making Learning Whole (Jossey-Bass), has much to say about teaching. The basic idea is to teach the whole game. That means that if you’re teaching baseball, you first take people to a baseball game or get them to play it. You don’t teach them how to wind twine to make a baseball from scratch, the physics of a parabola, or the coefficient of friction of a ball on a bat.
+
Paul Lockhart, a Columbia math PhD, former Brown professor, and K-12 math teacher, imagines in the influential essay “A Mathematician’s Lament” a nightmare world where music and art are taught the way math is taught. Children are not allowed to listen to or play music until they have spent over a decade mastering music notation and theory, spending classes transposing sheet music into a different key. In art class, students study colors and applicators, but aren’t allowed to actually paint until college. Sound absurd? This is how math is taught—we require students to spend years doing rote memorization and learning dry, disconnected fundamentals that we claim will pay off later, long after most of them quit the subject.
+
Unfortunately, this is where many teaching resources on deep learning begin—asking learners to follow along with the definition of the Hessian and theorems for the Taylor approximation of your loss functions, without ever giving examples of actual working code. We’re not knocking calculus. We love calculus, and Sylvain has even taught it at the college level, but we don’t think it’s the best place to start when learning deep learning!
+
In deep learning, it really helps if you have the motivation to fix your model to get it to do better. That’s when you start learning the relevant theory. But you need to have the model in the first place. We teach almost everything through real examples. As we build out those examples, we go deeper and deeper, and we’ll show you how to make your projects better and better. This means that you’ll be gradually learning all the theoretical foundations you need, in context, in such a way that you’ll see why it matters and how it works.
+
So, here’s our commitment to you. Throughout this book, we will follow these principles:
+
+
Teaching the whole game. We’ll start by showing how to use a complete, working, very usable, state-of-the-art deep learning network to solve real-world problems, using simple, expressive tools. And then we’ll gradually dig deeper and deeper into understanding how those tools are made, and how the tools that make those tools are made, and so on…
+
Always teaching through examples. We’ll ensure that there is a context and a purpose that you can understand intuitively, rather than starting with algebraic symbol manipulation.
+
Simplifying as much as possible. We’ve spent years building tools and teaching methods that make previously complex topics very simple.
+
Removing barriers. Deep learning has, until now, been a very exclusive game. We’re breaking it open, and ensuring that everyone can play.
+
+
The hardest part of deep learning is artisanal: how do you know if you’ve got enough data, whether it is in the right format, if your model is training properly, and, if it’s not, what you should do about it? That is why we believe in learning by doing. As with basic data science skills, with deep learning you only get better through practical experience. Trying to spend too much time on the theory can be counterproductive. The key is to just code and try to solve problems: the theory can come later, when you have context and motivation.
+
There will be times when the journey will feel hard. Times where you feel stuck. Don’t give up! Rewind through the book to find the last bit where you definitely weren’t stuck, and then read slowly through from there to find the first thing that isn’t clear. Then try some code experiments yourself, and Google around for more tutorials on whatever the issue you’re stuck with is—often you’ll find some different angle on the material might help it to click. Also, it’s expected and normal to not understand everything (especially the code) on first reading. Trying to understand the material serially before proceeding can sometimes be hard. Sometimes things click into place after you get more context from parts down the road, from having a bigger picture. So if you do get stuck on a section, try moving on anyway and make a note to come back to it later.
+
Remember, you don’t need any particular academic background to succeed at deep learning. Many important breakthroughs are made in research and industry by folks without a PhD, such as “Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks”—one of the most influential papers of the last decade—with over 5,000 citations, which was written by Alec Radford when he was an undergraduate. Even at Tesla, where they’re trying to solve the extremely tough challenge of making a self-driving car, CEO Elon Musk says:
+
+
A PhD is definitely not required. All that matters is a deep understanding of AI & ability to implement NNs in a way that is actually useful (latter point is what’s truly hard). Don’t care if you even graduated high school.
+
+
What you will need to do to succeed, however, is to apply what you learn in this book to a personal project, and always persevere.
+
+
Your Projects and Your Mindset
+
Whether you’re excited to identify if plants are diseased from pictures of their leaves, auto-generate knitting patterns, diagnose TB from X-rays, or determine when a raccoon is using your cat door, we will get you using deep learning on your own problems (via pre-trained models from others) as quickly as possible, and then will progressively drill into more details. You’ll learn how to use deep learning to solve your own problems at state-of-the-art accuracy within the first 30 minutes of the next chapter! (And feel free to skip straight there now if you’re dying to get coding right away.) There is a pernicious myth out there that you need to have computing resources and datasets the size of those at Google to be able to do deep learning, but it’s not true.
+
So, what sorts of tasks make for good test cases? You could train your model to distinguish between Picasso and Monet paintings or to pick out pictures of your daughter instead of pictures of your son. It helps to focus on your hobbies and passions—setting yourself four or five little projects rather than striving to solve a big, grand problem tends to work better when you’re getting started. Since it is easy to get stuck, trying to be too ambitious too early can often backfire. Then, once you’ve got the basics mastered, aim to complete something you’re really proud of!
+
+
J: Deep learning can be set to work on almost any problem. For instance, my first startup was a company called FastMail, which provided enhanced email services when it launched in 1999 (and still does to this day). In 2002 I set it up to use a primitive form of deep learning, single-layer neural networks, to help categorize emails and stop customers from receiving spam.
+
+
Common character traits in the people that do well at deep learning include playfulness and curiosity. The late physicist Richard Feynman is an example of someone who we’d expect to be great at deep learning: his development of an understanding of the movement of subatomic particles came from his amusement at how plates wobble when they spin in the air.
+
Let’s now focus on what you will learn, starting with the software.
+
+
+
+
1.5 The Software: PyTorch, fastai, and Jupyter (And Why It Doesn’t Matter)
+
We’ve completed hundreds of machine learning projects using dozens of different packages, and many different programming languages. At fast.ai, we have written courses using most of the main deep learning and machine learning packages used today. After PyTorch came out in 2017 we spent over a thousand hours testing it before deciding that we would use it for future courses, software development, and research. Since that time PyTorch has become the world’s fastest-growing deep learning library and is already used for most research papers at top conferences. This is generally a leading indicator of usage in industry, because these are the papers that end up getting used in products and services commercially. We have found that PyTorch is the most flexible and expressive library for deep learning. It does not trade off speed for simplicity, but provides both.
+
PyTorch works best as a low-level foundation library, providing the basic operations for higher-level functionality. The fastai library is the most popular library for adding this higher-level functionality on top of PyTorch. It’s also particularly well suited to the purposes of this book, because it is unique in providing a deeply layered software architecture (there’s even a peer-reviewed academic paper about this layered API). In this book, as we go deeper and deeper into the foundations of deep learning, we will also go deeper and deeper into the layers of fastai. This book covers version 2 of the fastai library, which is a from-scratch rewrite providing many unique features.
+
However, it doesn’t really matter what software you learn, because it takes only a few days to learn to switch from one library to another. What really matters is learning the deep learning foundations and techniques properly. Our focus will be on using code that clearly expresses the concepts that you need to learn. Where we are teaching high-level concepts, we will use high-level fastai code. Where we are teaching low-level concepts, we will use low-level PyTorch, or even pure Python code.
+
If it feels like new deep learning libraries are appearing at a rapid pace nowadays, then you need to be prepared for a much faster rate of change in the coming months and years. As more people enter the field, they will bring more skills and ideas, and try more things. You should assume that whatever specific libraries and software you learn today will be obsolete in a year or two. Just think about the number of changes in libraries and technology stacks that occur all the time in the world of web programming—a much more mature and slow-growing area than deep learning. We strongly believe that the focus in learning needs to be on understanding the underlying techniques and how to apply them in practice, and how to quickly build expertise in new tools and techniques as they are released.
+
By the end of the book, you’ll understand nearly all the code that’s inside fastai (and much of PyTorch too), because in each chapter we’ll be digging a level deeper to show you exactly what’s going on as we build and train our models. This means that you’ll have learned the most important best practices used in modern deep learning—not just how to use them, but how they really work and are implemented. If you want to use those approaches in another framework, you’ll have the knowledge you need to do so if needed.
+
Since the most important thing for learning deep learning is writing code and experimenting, it’s important that you have a great platform for experimenting with code. The most popular programming experimentation platform is called Jupyter. This is what we will be using throughout this book. We will show you how you can use Jupyter to train and experiment with models and introspect every stage of the data pre-processing and model development pipeline. Jupyter Notebook is the most popular tool for doing data science in Python, for good reason. It is powerful, flexible, and easy to use. We think you will love it!
+
Let’s see it in practice and train our first model.
+
+
+
1.6 Your First Model
+
As we said before, we will teach you how to do things before we explain why they work. Following this top-down approach, we will begin by actually training an image classifier to recognize dogs and cats with almost 100% accuracy. To train this model and run our experiments, you will need to do some initial setup. Don’t worry, it’s not as hard as it looks.
+
+
S: Do not skip the setup part even if it looks intimidating at first, especially if you have little or no experience using things like a terminal or the command line. Most of that is actually not necessary and you will find that the easiest servers can be set up with just your usual web browser. It is crucial that you run your own experiments in parallel with this book in order to learn.
+
+
+
Getting a GPU Deep Learning Server
+
To do nearly everything in this book, you’ll need access to a computer with an NVIDIA GPU (unfortunately other brands of GPU are not fully supported by the main deep learning libraries). However, we don’t recommend you buy one; in fact, even if you already have one, we don’t suggest you use it just yet! Setting up a computer takes time and energy, and you want all your energy to focus on deep learning right now. Therefore, we instead suggest you rent access to a computer that already has everything you need preinstalled and ready to go. Costs can be as little as US$0.25 per hour while you’re using it, and some options are even free.
+
+
jargon: Graphics Processing Unit (GPU): Also known as a graphics card. A special kind of processor in your computer that can handle thousands of single tasks at the same time, especially designed for displaying 3D environments on a computer for playing games. These same basic tasks are very similar to what neural networks do, such that GPUs can run neural networks hundreds of times faster than regular CPUs. All modern computers contain a GPU, but few contain the right kind of GPU necessary for deep learning.
+
+
The best choice of GPU servers to use with this book will change over time, as companies come and go and prices change. We maintain a list of our recommended options on the book’s website, so go there now and follow the instructions to get connected to a GPU deep learning server. Don’t worry, it only takes about two minutes to get set up on most platforms, and many don’t even require any payment, or even a credit card, to get started.
+
+
A: My two cents: heed this advice! If you like computers you will be tempted to set up your own box. Beware! It is feasible but surprisingly involved and distracting. There is a good reason this book is not titled, Everything You Ever Wanted to Know About Ubuntu System Administration, NVIDIA Driver Installation, apt-get, conda, pip, and Jupyter Notebook Configuration. That would be a book of its own. Having designed and deployed our production machine learning infrastructure at work, I can testify it has its satisfactions, but it is as unrelated to modeling as maintaining an airplane is to flying one.
+
+
Each option shown on the website includes a tutorial; after completing the tutorial, you will end up with a screen looking like Figure 1.2.
+
+
+
+
You are now ready to run your first Jupyter notebook!
+
+
jargon: Jupyter Notebook: A piece of software that allows you to include formatted text, code, images, videos, and much more, all within a single interactive document. Jupyter received the highest honor for software, the ACM Software System Award, thanks to its wide use and enormous impact in many academic fields and in industry. Jupyter Notebook is the software most widely used by data scientists for developing and interacting with deep learning models.
+
+
+
+
Running Your First Notebook
+
The notebooks are labeled by chapter and then by notebook number, so that they are in the same order as they are presented in this book. So, the very first notebook you will see listed is the notebook that you need to use now. You will be using this notebook to train a model that can recognize dog and cat photos. To do this, you’ll be downloading a dataset of dog and cat photos, and using that to train a model. A dataset is simply a bunch of data—it could be images, emails, financial indicators, sounds, or anything else. There are many datasets made freely available that are suitable for training models. Many of these datasets are created by academics to help advance research, many are made available for competitions (there are competitions where data scientists can compete to see who has the most accurate model!), and some are by-products of other processes (such as financial filings).
+
+
note: Full and Stripped Notebooks: There are two folders containing different versions of the notebooks. The full folder contains the exact notebooks used to create the book you’re reading now, with all the prose and outputs. The stripped version has the same headings and code cells, but all outputs and prose have been removed. After reading a section of the book, we recommend working through the stripped notebooks, with the book closed, and seeing if you can figure out what each cell will show before you execute it. Also try to recall what the code is demonstrating.
+
+
To open a notebook, just click on it. The notebook will open, and it will look something like Figure 1.3 (note that there may be slight differences in details across different platforms; you can ignore those differences).
+
+
+
+
A notebook consists of cells. There are two main types of cell:
+
+
Cells containing formatted text, images, and so forth. These use a format called markdown, which you will learn about soon.
+
Cells containing code that can be executed, and outputs will appear immediately underneath (which could be plain text, tables, images, animations, sounds, or even interactive applications).
+
+
Jupyter notebooks can be in one of two modes: edit mode or command mode. In edit mode typing on your keyboard enters the letters into the cell in the usual way. However, in command mode, you will not see any flashing cursor, and the keys on your keyboard will each have a special function.
+
Before continuing, press the Escape key on your keyboard to switch to command mode (if you are already in command mode, this does nothing, so press it now just in case). To see a complete list of all of the functions available, press H; press Escape to remove this help screen. Notice that in command mode, unlike most programs, commands do not require you to hold down Control, Alt, or similar—you simply press the required letter key.
+
You can make a copy of a cell by pressing C (the cell needs to be selected first, indicated with an outline around it; if it is not already selected, click on it once). Then press V to paste a copy of it.
+
Click on the cell that begins with the line “# CLICK ME” to select it. The first character in that line indicates that what follows is a comment in Python, so it is ignored when executing the cell. The rest of the cell is, believe it or not, a complete system for creating and training a state-of-the-art model for recognizing cats versus dogs. So, let’s train it now! To do so, just press Shift-Enter on your keyboard, or press the Play button on the toolbar. Then wait a few minutes while the following things happen:
+
+
A dataset called the Oxford-IIIT Pet Dataset that contains 7,349 images of cats and dogs from 37 different breeds will be downloaded from the fast.ai datasets collection to the GPU server you are using, and will then be extracted.
+
A pretrained model that has already been trained on 1.3 million images, using a competition-winning model, will be downloaded from the internet.
+
The pretrained model will be fine-tuned using the latest advances in transfer learning, to create a model that is specially customized for recognizing dogs and cats.
+
+
The first two steps only need to be run once on your GPU server. If you run the cell again, it will use the dataset and model that have already been downloaded, rather than downloading them again. Let’s take a look at the contents of the cell, and the results:
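+
Here is the cell (it begins with the # CLICK ME comment mentioned above), assembled from the line-by-line walkthrough later in this chapter. When it runs, it prints a small table showing the training and validation loss, the error rate, and the time taken for each epoch:
+
# CLICK ME
from fastai.vision.all import *
path = untar_data(URLs.PETS)/'images'
def is_cat(x): return x[0].isupper()
dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path), valid_pct=0.2, seed=42,
    label_func=is_cat, item_tfms=Resize(224))
learn = vision_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(1)
+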
You will probably not see exactly the same results that are in the book. There are a lot of sources of small random variation involved in training models. We generally see an error rate of well less than 0.02 in this example, however.
+
+
important: Training Time: Depending on your network speed, it might take a few minutes to download the pretrained model and dataset. Running fine_tune might take a minute or so. Often models in this book take a few minutes to train, as will your own models, so it’s a good idea to come up with good techniques to make the most of this time. For instance, keep reading the next section while your model trains, or open up another notebook and use it for some coding experiments.
+
+
+
+
Sidebar: This Book Was Written in Jupyter Notebooks
+
We wrote this book using Jupyter notebooks, so for nearly every chart, table, and calculation in this book, we’ll be showing you the exact code required to replicate it yourself. That’s why very often in this book, you will see some code immediately followed by a table, a picture or just some text. If you go on the book’s website you will find all the code, and you can try running and modifying every example yourself.
+
You just saw how a cell that outputs a table looks inside the book. Here is an example of a cell that outputs text:
+
+
1+1
+
+
2
+
+
+
Jupyter will always print or show the result of the last line (if there is one). For instance, here is an example of a cell that outputs an image:
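+
The cell below is a small sketch along those lines (not necessarily the notebook’s original cell). It reuses the path from the training cell above, and because its last line evaluates to an image, Jupyter displays that image beneath the cell:
+
img = PILImage.create(get_image_files(path)[0])  # open one of the downloaded pet photos
img.to_thumb(192)                                # the last line is an image, so Jupyter shows it
+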
So, how do we know if this model is any good? In the last column of the table you can see the error rate, which is the proportion of images that were incorrectly identified. The error rate serves as our metric—our measure of model quality, chosen to be intuitive and comprehensible. As you can see, the model is nearly perfect, even though the training time was only a few seconds (not including the one-time downloading of the dataset and the pretrained model). In fact, the accuracy you’ve achieved already is far better than anybody had ever achieved just 10 years ago!
+
Finally, let’s check that this model actually works. Go and get a photo of a dog, or a cat; if you don’t have one handy, just search Google Images and download an image that you find there. Now execute the cell with uploader defined. It will output a button you can click, so you can select the image you want to classify:
+
+
uploader = widgets.FileUpload()
uploader
+
+
+
+
+
Now you can pass the uploaded file to the model. Make sure that it is a clear photo of a single dog or a cat, and not a line drawing, cartoon, or similar. The notebook will tell you whether it thinks it is a dog or a cat, and how confident it is. Hopefully, you’ll find that your model did a great job:
+
+
img = PILImage.create(uploader.data[0])
is_cat,_,probs = learn.predict(img)
print(f"Is this a cat?: {is_cat}.")
print(f"Probability it's a cat: {probs[1].item():.6f}")
+
+
+
+
+
Is this a cat?: True.
Probability it's a cat: 1.000000
+
+
+
Congratulations on your first classifier!
+
But what does this mean? What did you actually do? In order to explain this, let’s zoom out again to take in the big picture.
+
+
+
What Is Machine Learning?
+
Your classifier is a deep learning model. As was already mentioned, deep learning models use neural networks, which originally date from the 1950s and have become powerful only recently, thanks to the advances in hardware, data availability, and algorithms discussed earlier.
+
Another key piece of context is that deep learning is just a modern area in the more general discipline of machine learning. To understand the essence of what you did when you trained your own classification model, you don’t need to understand deep learning. It is enough to see how your model and your training process are examples of the concepts that apply to machine learning in general.
+
So in this section, we will describe what machine learning is. We will look at the key concepts, and show how they can be traced back to the original essay that introduced them.
+
Machine learning is, like regular programming, a way to get computers to complete a specific task. But how would we use regular programming to do what we just did in the last section: recognize dogs versus cats in photos? We would have to write down for the computer the exact steps necessary to complete the task.
+
Normally, it’s easy enough for us to write down the steps to complete a task when we’re writing a program. We just think about the steps we’d take if we had to do the task by hand, and then we translate them into code. For instance, we can write a function that sorts a list. In general, we’d write a function that looks something like Figure 1.5 (where inputs might be an unsorted list, and results a sorted list).
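+
As a tiny sketch of that picture, here is a “program” in the traditional sense, with the steps spelled out by the programmer (the function name is just for illustration):
+
# A traditional program: the path from inputs to results is spelled out explicitly.
def sort_program(inputs):
    return sorted(inputs)          # the programmer decides exactly what happens
results = sort_program([3, 1, 2])  # results == [1, 2, 3]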
+
+
+
+
+
+
+
+
But for recognizing objects in a photo that’s a bit tricky; what are the steps we take when we recognize an object in a picture? We really don’t know, since it all happens in our brain without us being consciously aware of it!
+
Right back at the dawn of computing, in 1949, an IBM researcher named Arthur Samuel started working on a different way to get computers to complete tasks, which he called machine learning. In his classic 1962 essay “Artificial Intelligence: A Frontier of Automation”, he wrote:
+
+
Programming a computer for such computations is, at best, a difficult task, not primarily because of any inherent complexity in the computer itself but, rather, because of the need to spell out every minute step of the process in the most exasperating detail. Computers, as any programmer will tell you, are giant morons, not giant brains.
+
+
His basic idea was this: instead of telling the computer the exact steps required to solve a problem, show it examples of the problem to solve, and let it figure out how to solve it itself. This turned out to be very effective: by 1961 his checkers-playing program had learned so much that it beat the Connecticut state champion! Here’s how he described his idea (from the same essay as above):
+
+
Suppose we arrange for some automatic means of testing the effectiveness of any current weight assignment in terms of actual performance and provide a mechanism for altering the weight assignment so as to maximize the performance. We need not go into the details of such a procedure to see that it could be made entirely automatic and to see that a machine so programmed would “learn” from its experience.
+
+
There are a number of powerful concepts embedded in this short statement:
+
+
The idea of a “weight assignment”
+
The fact that every weight assignment has some “actual performance”
+
The requirement that there be an “automatic means” of testing that performance
+
+
The need for a “mechanism” (i.e., another automatic process) for improving the performance by changing the weight assignments
+
+
Let us take these concepts one by one, in order to understand how they fit together in practice. First, we need to understand what Samuel means by a weight assignment.
+
Weights are just variables, and a weight assignment is a particular choice of values for those variables. The program’s inputs are values that it processes in order to produce its results—for instance, taking image pixels as inputs, and returning the classification “dog” as a result. The program’s weight assignments are other values that define how the program will operate.
+
Since they will affect the program they are in a sense another kind of input, so we will update our basic picture in Figure 1.5 and replace it with Figure 1.6 in order to take this into account.
+
+
+
+
+
+
+
+
We’ve changed the name of our box from program to model. This is to follow modern terminology and to reflect that the model is a special kind of program: it’s one that can do many different things, depending on the weights. It can be implemented in many different ways. For instance, in Samuel’s checkers program, different values of the weights would result in different checkers-playing strategies.
+
(By the way, what Samuel called “weights” are most generally referred to as model parameters these days, in case you have encountered that term. The term weights is reserved for a particular type of model parameter.)
+
Next, Samuel said we need an automatic means of testing the effectiveness of any current weight assignment in terms of actual performance. In the case of his checkers program, the “actual performance” of a model would be how well it plays. And you could automatically test the performance of two models by setting them to play against each other, and seeing which one usually wins.
+
Finally, he says we need a mechanism for altering the weight assignment so as to maximize the performance. For instance, we could look at the difference in weights between the winning model and the losing model, and adjust the weights a little further in the winning direction.
+
We can now see why he said that such a procedure could be made entirely automatic and… a machine so programmed would “learn” from its experience. Learning would become entirely automatic when the adjustment of the weights was also automatic—when instead of us improving a model by adjusting its weights manually, we relied on an automated mechanism that produced adjustments based on performance.
+
Figure 1.7 shows the full picture of Samuel’s idea of training a machine learning model.
+
+
+
+
+
+
+
+
Notice the distinction between the model’s results (e.g., the moves in a checkers game) and its performance (e.g., whether it wins the game, or how quickly it wins).
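+
To make Samuel’s loop concrete, here is a deliberately crude sketch: the “model” is a weighted sum, the “performance” is a negated squared error, and the “mechanism” for altering the weights is random search rather than the gradient-based updates used in practice (all of the names and numbers here are made up purely for illustration):
+
import random
def model(inputs, weights):
    # The results depend on both the inputs and the current weight assignment.
    return [sum(w * x for w, x in zip(weights, row)) for row in inputs]
def performance(results, labels):
    # An "automatic means" of testing a weight assignment: higher is better.
    return -sum((r - l) ** 2 for r, l in zip(results, labels))
def alter(weights):
    # A crude "mechanism for altering the weight assignment": random tweaks.
    return [w + random.uniform(-0.1, 0.1) for w in weights]
inputs, labels = [[1.0, 2.0], [3.0, 4.0]], [5.0, 11.0]
weights = [0.0, 0.0]
best = performance(model(inputs, weights), labels)
for _ in range(1000):
    candidate = alter(weights)
    perf = performance(model(inputs, candidate), labels)
    if perf > best:          # keep the change only if performance improved
        weights, best = candidate, perf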
+
Also note that once the model is trained—that is, once we’ve chosen our final, best, favorite weight assignment—then we can think of the weights as being part of the model, since we’re not varying them any more.
+
Therefore, actually using a model after it’s trained looks like Figure 1.8.
+
+
+
+
+
+
+
+
This looks identical to our original diagram in Figure 1.5, just with the word program replaced with model. This is an important insight: a trained model can be treated just like a regular computer program.
+
+
jargon: Machine Learning: The training of programs developed by allowing a computer to learn from its experience, rather than through manually coding the individual steps.
+
+
+
+
What Is a Neural Network?
+
It’s not too hard to imagine what the model might look like for a checkers program. There might be a range of checkers strategies encoded, and some kind of search mechanism, and then the weights could vary how strategies are selected, what parts of the board are focused on during a search, and so forth. But it’s not at all obvious what the model might look like for an image recognition program, or for understanding text, or for many other interesting problems we might imagine.
+
What we would like is some kind of function that is so flexible that it could be used to solve any given problem, just by varying its weights. Amazingly enough, this function actually exists! It’s the neural network, which we already discussed. That is, if you regard a neural network as a mathematical function, it turns out to be a function which is extremely flexible depending on its weights. A mathematical proof called the universal approximation theorem shows that this function can solve any problem to any level of accuracy, in theory. The fact that neural networks are so flexible means that, in practice, they are often a suitable kind of model, and you can focus your effort on the process of training them—that is, of finding good weight assignments.
+
But what about that process? One could imagine that you might need to find a new “mechanism” for automatically updating weights for every problem. This would be laborious. What we’d like here as well is a completely general way to update the weights of a neural network, to make it improve at any given task. Conveniently, this also exists!
+
This is called stochastic gradient descent (SGD). We’ll see how neural networks and SGD work in detail in Chapter 4, as well as explaining the universal approximation theorem. For now, however, we will instead use Samuel’s own words: We need not go into the details of such a procedure to see that it could be made entirely automatic and to see that a machine so programmed would “learn” from its experience.
+
+
J: Don’t worry, neither SGD nor neural nets are mathematically complex. Both nearly entirely rely on addition and multiplication to do their work (but they do a lot of addition and multiplication!). The main reaction we hear from students when they see the details is: “Is that all it is?”
+
+
In other words, to recap, a neural network is a particular kind of machine learning model, which fits right in to Samuel’s original conception. Neural networks are special because they are highly flexible, which means they can solve an unusually wide range of problems just by finding the right weights. This is powerful, because stochastic gradient descent provides us a way to find those weight values automatically.
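+
Here is a minimal PyTorch sketch of those two ingredients: a tiny neural network, and a single SGD step on some toy data (the layer sizes, learning rate, and data are arbitrary choices for illustration):
+
import torch
from torch import nn
# A tiny neural network: nothing but matrix multiplications and a simple nonlinearity.
net = nn.Sequential(nn.Linear(1, 100), nn.ReLU(), nn.Linear(100, 1))
# One SGD step on toy data: measure how wrong the current weights are, then nudge them.
x = torch.linspace(-1, 1, 50).unsqueeze(1)
y = x ** 2                                     # the "correct answers" for this toy task
opt = torch.optim.SGD(net.parameters(), lr=0.1)
loss = nn.functional.mse_loss(net(x), y)
loss.backward()                                # compute how each weight should change
opt.step()                                     # change the weights a little
opt.zero_grad()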
+
Having zoomed out, let’s now zoom back in and revisit our image classification problem using Samuel’s framework.
+
Our inputs are the images. Our weights are the weights in the neural net. Our model is a neural net. Our results are the values that are calculated by the neural net, like “dog” or “cat.”
+
What about the next piece, an automatic means of testing the effectiveness of any current weight assignment in terms of actual performance? Determining “actual performance” is easy enough: we can simply define our model’s performance as its accuracy at predicting the correct answers.
+
Putting this all together, and assuming that SGD is our mechanism for updating the weight assignments, we can see how our image classifier is a machine learning model, much like Samuel envisioned.
+
+
+
A Bit of Deep Learning Jargon
+
Samuel was working in the 1960s, and since then terminology has changed. Here is the modern deep learning terminology for all the pieces we have discussed:
+
+
The functional form of the model is called its architecture (but be careful—sometimes people use model as a synonym of architecture, so this can get confusing).
+
The weights are called parameters.
+
The predictions are calculated from the independent variable, which is the data not including the labels.
+
The results of the model are called predictions.
+
The measure of performance is called the loss.
+
The loss depends not only on the predictions, but also the correct labels (also known as targets or the dependent variable); e.g., “dog” or “cat.”
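+
As a recap, here is the learner call from the classifier we trained earlier, annotated (in comments only) with this terminology:
+
learn = vision_learner(
    dls,                  # the data: independent variables (images) plus their labels ("cat"/"dog")
    resnet34,             # the architecture: the functional form of the model
    metrics=error_rate)   # the metric: a human-readable measure of quality on the validation set
# The parameters (what Samuel called weights) live inside learn.model; the loss is
# picked automatically for this kind of data, and it is what drives the weight
# updates when we call learn.fine_tune(1).
+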
Putting this terminology together with Samuel’s training loop, we can now see some fundamental things about training a deep learning model:
+
+
A model cannot be created without data.
+
A model can only learn to operate on the patterns seen in the input data used to train it.
+
This learning approach only creates predictions, not recommended actions.
+
It’s not enough to just have examples of input data; we need labels for that data too (e.g., pictures of dogs and cats aren’t enough to train a model; we need a label for each one, saying which ones are dogs, and which are cats).
+
+
Generally speaking, we’ve seen that most organizations that say they don’t have enough data, actually mean they don’t have enough labeled data. If any organization is interested in doing something in practice with a model, then presumably they have some inputs they plan to run their model against. And presumably they’ve been doing that some other way for a while (e.g., manually, or with some heuristic program), so they have data from those processes! For instance, a radiology practice will almost certainly have an archive of medical scans (since they need to be able to check how their patients are progressing over time), but those scans may not have structured labels containing a list of diagnoses or interventions (since radiologists generally create free-text natural language reports, not structured data). We’ll be discussing labeling approaches a lot in this book, because it’s such an important issue in practice.
+
Since these kinds of machine learning models can only make predictions (i.e., attempt to replicate labels), this can result in a significant gap between organizational goals and model capabilities. For instance, in this book you’ll learn how to create a recommendation system that can predict what products a user might purchase. This is often used in e-commerce, such as to customize products shown on a home page by showing the highest-ranked items. But such a model is generally created by looking at a user and their buying history (inputs) and what they went on to buy or look at (labels), which means that the model is likely to tell you about products the user already has or already knows about, rather than new products that they are most likely to be interested in hearing about. That’s very different to what, say, an expert at your local bookseller might do, where they ask questions to figure out your taste, and then tell you about authors or series that you’ve never heard of before.
+
Another critical insight comes from considering how a model interacts with its environment. This can create feedback loops, as described here:
+
+
A predictive policing model is created based on where arrests have been made in the past. In practice, this is not actually predicting crime, but rather predicting arrests, and is therefore partially simply reflecting biases in existing policing processes.
+
Law enforcement officers then might use that model to decide where to focus their police activity, resulting in increased arrests in those areas.
+
Data on these additional arrests would then be fed back in to retrain future versions of the model.
+
+
This is a positive feedback loop, where the more the model is used, the more biased the data becomes, making the model even more biased, and so forth.
+
Feedback loops can also create problems in commercial settings. For instance, a video recommendation system might be biased toward recommending content consumed by the biggest watchers of video (e.g., conspiracy theorists and extremists tend to watch more online video content than the average), resulting in those users increasing their video consumption, resulting in more of those kinds of videos being recommended. We’ll consider this topic more in detail in Chapter 3.
+
Now that you have seen the base of the theory, let’s go back to our code example and see in detail how the code corresponds to the process we just described.
+
+
+
How Our Image Recognizer Works
+
Let’s see just how our image recognizer code maps to these ideas. We’ll put each line into a separate cell, and look at what each one is doing (we won’t explain every detail of every parameter yet, but will give a description of the important bits; full details will come later in the book).
+
The first line imports all of the fastai.vision library.
+
from fastai.vision.all import *
+
This gives us all of the functions and classes we will need to create a wide variety of computer vision models.
+
+
J: A lot of Python coders recommend avoiding importing a whole library like this (using the import * syntax), because in large software projects it can cause problems. However, for interactive work such as in a Jupyter notebook, it works great. The fastai library is specially designed to support this kind of interactive use, and it will only import the necessary pieces into your environment.
+
+
The second line downloads a standard dataset from the fast.ai datasets collection (if not previously downloaded) to your server, extracts it (if not previously extracted), and returns a Path object with the extracted location:
+
path = untar_data(URLs.PETS)/'images'
+
+
S: Throughout my time studying at fast.ai, and even still today, I’ve learned a lot about productive coding practices. The fastai library and fast.ai notebooks are full of great little tips that have helped make me a better programmer. For instance, notice that the fastai library doesn’t just return a string containing the path to the dataset, but a Path object. This is a really useful class from the Python 3 standard library that makes accessing files and directories much easier. If you haven’t come across it before, be sure to check out its documentation or a tutorial and try it out. Note that the book’s website (https://book.fast.ai) contains links to recommended tutorials for each chapter. I’ll keep letting you know about little coding tips I’ve found useful as we come across them.
+
+
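If Path is new to you, here is a quick illustration of a few things you can do with it (ls is one of the convenience methods fastai adds to Path objects):
+
from fastai.vision.all import *
path = untar_data(URLs.PETS)/'images'   # Path objects compose naturally with /
print(path.name, path.exists())         # standard pathlib attributes and methods
print(path.ls()[:3])                    # ls() lists the files inside the directory
+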
In the third line we define a function, is_cat, which labels cats based on a filename rule provided by the dataset creators:
+
def is_cat(x): return x[0].isupper()
+
We use that function in the fourth line, which tells fastai what kind of dataset we have and how it is structured:
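+
Here it is again, repeated from the full cell shown earlier:
+
dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path), valid_pct=0.2, seed=42,
    label_func=is_cat, item_tfms=Resize(224))
+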
There are various different classes for different kinds of deep learning datasets and problems—here we’re using ImageDataLoaders. The first part of the class name will generally be the type of data you have, such as image, or text.
+
The other important piece of information that we have to tell fastai is how to get the labels from the dataset. Computer vision datasets are normally structured in such a way that the label for an image is part of the filename, or path—most commonly the parent folder name. fastai comes with a number of standardized labeling methods, and ways to write your own. Here we’re telling fastai to use the is_cat function we just defined.
+
Finally, we define the Transforms that we need. A Transform contains code that is applied automatically during training; fastai includes many predefined Transforms, and adding new ones is as simple as creating a Python function. There are two kinds: item_tfms are applied to each item (in this case, each item is resized to a 224-pixel square), while batch_tfms are applied to a batch of items at a time using the GPU, so they’re particularly fast (we’ll see many examples of these throughout this book).
+
Why 224 pixels? This is the standard size for historical reasons (old pretrained models require this size exactly), but you can pass pretty much anything. If you increase the size, you’ll often get a model with better results (since it will be able to focus on more details), but at the price of speed and memory consumption; the opposite is true if you decrease the size.
+
+
Note: Classification and Regression: classification and regression have very specific meanings in machine learning. These are the two main types of model that we will be investigating in this book. A classification model is one which attempts to predict a class, or category. That is, it’s predicting from a number of discrete possibilities, such as “dog” or “cat.” A regression model is one which attempts to predict one or more numeric quantities, such as a temperature or a location. Sometimes people use the word regression to refer to a particular kind of model called a linear regression model; this is a bad practice, and we won’t be using that terminology in this book!
+
+
The Pet dataset contains 7,390 pictures of dogs and cats, consisting of 37 different breeds. Each image is labeled using its filename: for instance the file great_pyrenees_173.jpg is the 173rd example of an image of a Great Pyrenees breed dog in the dataset. The filenames start with an uppercase letter if the image is a cat, and a lowercase letter otherwise. We have to tell fastai how to get labels from the filenames, which we do by calling from_name_func (which means that labels can be extracted using a function applied to the filename), and passing is_cat, which returns x[0].isupper(), which evaluates to True if the first letter is uppercase (i.e., it’s a cat).
+
The most important parameter to mention here is valid_pct=0.2. This tells fastai to hold out 20% of the data and not use it for training the model at all. This 20% of the data is called the validation set; the remaining 80% is called the training set. The validation set is used to measure the accuracy of the model. By default, the 20% that is held out is selected randomly. The parameter seed=42 sets the random seed to the same value every time we run this code, which means we get the same validation set every time we run it—this way, if we change our model and retrain it, we know that any differences are due to the changes to the model, not due to having a different random validation set.
+
fastai will always show you your model’s accuracy using only the validation set, never the training set. This is absolutely critical, because if you train a large enough model for a long enough time, it will eventually memorize the label of every item in your dataset! The result will not actually be a useful model, because what we care about is how well our model works on previously unseen images. That is always our goal when creating a model: for it to be useful on data that the model only sees in the future, after it has been trained.
+
Even when your model has not fully memorized all your data, earlier on in training it may have memorized certain parts of it. As a result, the longer you train for, the better your accuracy will get on the training set; the validation set accuracy will also improve for a while, but eventually it will start getting worse as the model starts to memorize the training set, rather than finding generalizable underlying patterns in the data. When this happens, we say that the model is overfitting.
+
Figure 1.10 shows what happens when you overfit, using a simplified example where we have just one parameter, and some randomly generated data based on the function x**2. As you can see, although the predictions in the overfit model are accurate for data near the observed data points, they are way off when outside of that range.
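+
If you would like to reproduce a picture along these lines yourself, the sketch below fits polynomials of increasing degree to noisy samples of x**2 (the particular degrees, noise level, and plotting details are our own choices for illustration, not the exact ones behind the figure):
+
import numpy as np
import matplotlib.pyplot as plt
rng = np.random.default_rng(42)
x = rng.uniform(-2, 2, 20)
y = x**2 + rng.normal(0, 0.3, 20)       # noisy samples of the function x**2
x_plot = np.linspace(-2.5, 2.5, 200)
for degree in (1, 2, 15):               # underfit, reasonable fit, overfit
    coeffs = np.polyfit(x, y, degree)
    plt.plot(x_plot, np.polyval(coeffs, x_plot), label=f"degree {degree}")
plt.scatter(x, y, color="black")
plt.ylim(-1, 7)
plt.legend()
plt.show()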
+
+
+
+
Overfitting is the single most important and challenging issue when training models, for all machine learning practitioners and all algorithms. As you will see, it is very easy to create a model that does a great job at making predictions on the exact data it has been trained on, but it is much harder to make accurate predictions on data the model has never seen before. And of course, this is the data that will actually matter in practice. For instance, if you create a handwritten digit classifier (as we will very soon!) and use it to recognize numbers written on checks, then you are never going to see any of the numbers that the model was trained on—checks will have slightly different variations of writing to deal with. You will learn many methods to avoid overfitting in this book. However, you should only use those methods after you have confirmed that overfitting is actually occurring (i.e., you have actually observed the validation accuracy getting worse during training). We often see practitioners using overfitting avoidance techniques even when they have enough data that they didn’t need to do so, ending up with a model that may be less accurate than what they could have achieved.
+
+
important: Validation Set: When you train a model, you must always have both a training set and a validation set, and must measure the accuracy of your model only on the validation set. If you train for too long, with not enough data, you will see the accuracy of your model start to get worse; this is called overfitting. fastai defaults valid_pct to 0.2, so even if you forget, fastai will create a validation set for you!
+
+
The fifth line of the code training our image recognizer tells fastai to create a convolutional neural network (CNN) and specifies what architecture to use (i.e. what kind of model to create), what data we want to train it on, and what metric to use:
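+
This is the line as it appears in the full cell shown earlier:
+
learn = vision_learner(dls, resnet34, metrics=error_rate)
+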
Why a CNN? It’s the current state-of-the-art approach to creating computer vision models. We’ll be learning all about how CNNs work in this book. Their structure is inspired by how the human vision system works.
+
There are many different architectures in fastai, which we will introduce in this book (as well as discussing how to create your own). Most of the time, however, picking an architecture isn’t a very important part of the deep learning process. It’s something that academics love to talk about, but in practice it is unlikely to be something you need to spend much time on. There are some standard architectures that work most of the time, and in this case we’re using one called ResNet that we’ll be talking a lot about during the book; it is both fast and accurate for many datasets and problems. The 34 in resnet34 refers to the number of layers in this variant of the architecture (other options are 18, 50, 101, and 152). Models using architectures with more layers take longer to train, and are more prone to overfitting (i.e. you can’t train them for as many epochs before the accuracy on the validation set starts getting worse). On the other hand, when using more data, they can be quite a bit more accurate.
+
What is a metric? A metric is a function that measures the quality of the model’s predictions using the validation set, and will be printed at the end of each epoch. In this case, we’re using error_rate, which is a function provided by fastai that does just what it says: tells you what percentage of images in the validation set are being classified incorrectly. Another common metric for classification is accuracy (which is just 1.0 - error_rate). fastai provides many more, which will be discussed throughout this book.
+
The concept of a metric may remind you of loss, but there is an important distinction. The entire purpose of loss is to define a “measure of performance” that the training system can use to update weights automatically. In other words, a good choice for loss is a choice that is easy for stochastic gradient descent to use. But a metric is defined for human consumption, so a good metric is one that is easy for you to understand, and that hews as closely as possible to what you want the model to do. At times, you might decide that the loss function is a suitable metric, but that is not necessarily the case.
+
vision_learner also has a parameter pretrained, which defaults to True (so it’s used in this case, even though we haven’t specified it), which sets the weights in your model to values that have already been trained by experts to recognize a thousand different categories across 1.3 million photos (using the famous ImageNet dataset). A model that has weights that have already been trained on some other dataset is called a pretrained model. You should nearly always use a pretrained model, because it means that your model, before you’ve even shown it any of your data, is already very capable. And, as you’ll see, in a deep learning model many of these capabilities are things you’ll need, almost regardless of the details of your project. For instance, parts of pretrained models will handle edge, gradient, and color detection, which are needed for many tasks.
+
When using a pretrained model, vision_learner will remove the last layer, since that is always specifically customized to the original training task (i.e. ImageNet dataset classification), and replace it with one or more new layers with randomized weights, of an appropriate size for the dataset you are working with. This last part of the model is known as the head.
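+
You can take a peek at the head yourself. In current versions of fastai, the model built by vision_learner is a two-part Sequential of body and head, so (assuming the learn object from before) the head is simply:
+
learn.model[1]   # the head: the freshly added layers with randomized weights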
+
Using pretrained models is the most important method we have to allow us to train more accurate models, more quickly, with less data, and less time and money. You might think that would mean that using pretrained models would be the most studied area in academic deep learning… but you’d be very, very wrong! The importance of pretrained models is generally not recognized or discussed in most courses, books, or software library features, and is rarely considered in academic papers. As we write this at the start of 2020, things are just starting to change, but it’s likely to take a while. So be careful: most people you speak to will probably greatly underestimate what you can do in deep learning with few resources, because they probably won’t deeply understand how to use pretrained models.
+
Using a pretrained model for a task different to what it was originally trained for is known as transfer learning. Unfortunately, because transfer learning is so under-studied, few domains have pretrained models available. For instance, there are currently few pretrained models available in medicine, making transfer learning challenging to use in that domain. In addition, it is not yet well understood how to use transfer learning for tasks such as time series analysis.
+
+
jargon: Transfer learning: Using a pretrained model for a task different to what it was originally trained for.
+
+
The sixth line of our code tells fastai how to fit the model:
+
learn.fine_tune(1)
+
As we’ve discussed, the architecture only describes a template for a mathematical function; it doesn’t actually do anything until we provide values for the millions of parameters it contains.
+
This is the key to deep learning—determining how to fit the parameters of a model to get it to solve your problem. In order to fit a model, we have to provide at least one piece of information: how many times to look at each image (known as number of epochs). The number of epochs you select will largely depend on how much time you have available, and how long you find it takes in practice to fit your model. If you select a number that is too small, you can always train for more epochs later.
+
But why is the method called fine_tune, and not fit? fastai actually does have a method called fit, which does indeed fit a model (i.e. look at images in the training set multiple times, each time updating the parameters to make the predictions closer and closer to the target labels). But in this case, we’ve started with a pretrained model, and we don’t want to throw away all those capabilities that it already has. As you’ll learn in this book, there are some important tricks to adapt a pretrained model for a new dataset—a process called fine-tuning.
+
+
jargon: Fine-tuning: A transfer learning technique where the parameters of a pretrained model are updated by training for additional epochs using a different task to that used for pretraining.
+
+
When you use the fine_tune method, fastai will use these tricks for you. There are a few parameters you can set (which we’ll discuss later), but in the default form shown here, it does two steps (sketched in code after the list below):
+
+
Use one epoch to fit just those parts of the model necessary to get the new random head to work correctly with your dataset.
+
Use the number of epochs requested when calling the method to fit the entire model, updating the weights of the later layers (especially the head) faster than the earlier layers (which, as we’ll see, generally don’t require many changes from the pretrained weights).
+
+
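Roughly speaking, and leaving aside the details of how learning rates are chosen (which vary between fastai versions), the default behaviour is equivalent to something like this sketch; the learning-rate values shown are purely illustrative:
+
+
epochs = 1                     # the number you passed to fine_tune
learn.freeze()                 # step 1: only the new head's parameters are trainable
learn.fit_one_cycle(1)         # one epoch to get the random head working with your data
learn.unfreeze()               # step 2: make the whole model trainable
learn.fit_one_cycle(epochs, lr_max=slice(1e-5, 1e-3))  # earlier layers get smaller learning rates
+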
The head of a model is the part that is newly added to be specific to the new dataset. An epoch is one complete pass through the dataset. After calling fit or fine_tune, the results are printed after each epoch, showing the epoch number, the training and validation set losses (the “measure of performance” used for training the model), and any metrics you’ve requested (error rate, in this case).
+
So, with all this code our model learned to recognize cats and dogs just from labeled examples. But how did it do it?
+
+
+
What Our Image Recognizer Learned
+
At this stage we have an image recognizer that is working very well, but we have no idea what it is actually doing! Although many people complain that deep learning results in impenetrable “black box” models (that is, something that gives predictions but that no one can understand), this really couldn’t be further from the truth. There is a vast body of research showing how to deeply inspect deep learning models, and get rich insights from them. Having said that, all kinds of machine learning models (including deep learning, and traditional statistical models) can be challenging to fully understand, especially when considering how they will behave when coming across data that is very different to the data used to train them. We’ll be discussing this issue throughout this book.
+
In 2013 a PhD student, Matt Zeiler, and his supervisor, Rob Fergus, published the paper “Visualizing and Understanding Convolutional Networks”, which showed how to visualize the neural network weights learned in each layer of a model. They carefully analyzed the model that won the 2012 ImageNet competition, and used this analysis to greatly improve the model, such that they were able to go on to win the 2013 competition! Figure 1.11 is the picture that they published of the first layer’s weights.
+
+
+
+
This picture requires some explanation. For each layer, the image part with the light gray background shows reconstructed pictures of the weights, and the larger section at the bottom shows the parts of the training images that most strongly matched each set of weights. For layer 1, what we can see is that the model has discovered weights that represent diagonal, horizontal, and vertical edges, as well as various different gradients. (Note that for each layer only a subset of the features are shown; in practice there are thousands across all of the layers.) These are the basic building blocks that the model has learned for computer vision. They have been widely analyzed by neuroscientists and computer vision researchers, and it turns out that these learned building blocks are very similar to the basic visual machinery in the human eye, as well as the handcrafted computer vision features that were developed prior to the days of deep learning. The next layer is represented in Figure 1.12.
+
+
+
+
For layer 2, there are nine examples of weight reconstructions for each of the features found by the model. We can see that the model has learned to create feature detectors that look for corners, repeating lines, circles, and other simple patterns. These are built from the basic building blocks developed in the first layer. For each of these, the right-hand side of the picture shows small patches from actual images which these features most closely match. For instance, the particular pattern in row 2, column 1 matches the gradients and textures associated with sunsets.
+
Figure 1.13 shows the image from the paper showing the results of reconstructing the features of layer 3.
+
+
+
+
As you can see by looking at the righthand side of this picture, the features are now able to identify and match with higher-level semantic components, such as car wheels, text, and flower petals. Using these components, layers four and five can identify even higher-level concepts, as shown in Figure 1.14.
+
+
+
+
This article was studying an older model called AlexNet that only contained five layers. Networks developed since then can have hundreds of layers—so you can imagine how rich the features developed by these models can be!
+
When we fine-tuned our pretrained model earlier, we adapted what those last layers focus on (flowers, humans, animals) to specialize on the cats versus dogs problem. More generally, we could specialize such a pretrained model on many different tasks. Let’s have a look at some examples.
+
+
+
Image Recognizers Can Tackle Non-Image Tasks
+
An image recognizer can, as its name suggests, only recognize images. But a lot of things can be represented as images, which means that an image recognizer can learn to complete many tasks.
+
For instance, a sound can be converted to a spectrogram, which is a chart that shows the amount of each frequency at each time in an audio file. Fast.ai student Ethan Sutin used this approach to easily beat the published accuracy of a state-of-the-art environmental sound detection model using a dataset of 8,732 urban sounds. fastai’s show_batch clearly shows how each different sound has a quite distinctive spectrogram, as you can see in Figure 1.15.
+
+
+
+
A time series can easily be converted into an image by simply plotting the time series on a graph. However, it is often a good idea to try to represent your data in a way that makes it as easy as possible to pull out the most important components. In a time series, things like seasonality and anomalies are most likely to be of interest. There are various transformations available for time series data. For instance, fast.ai student Ignacio Oguiza created images from a time series dataset for olive oil classification, using a technique called Gramian Angular Difference Field (GADF); you can see the result in Figure 1.16. He then fed those images to an image classification model just like the one you see in this chapter. His results, despite having only 30 training set images, were well over 90% accurate, and close to the state of the art.
+
+
+
+
Another interesting fast.ai student project example comes from Gleb Esman. He was working on fraud detection at Splunk, using a dataset of users’ mouse movements and mouse clicks. He turned these into pictures by drawing an image where the position, speed, and acceleration of the mouse pointer was displayed using colored lines, and the clicks were displayed using small colored circles, as shown in Figure 1.17. He then fed this into an image recognition model just like the one we’ve used in this chapter, and it worked so well that it led to a patent for this approach to fraud analytics!
+
+
+
+
Another example comes from the paper “Malware Classification with Deep Convolutional Neural Networks” by Mahmoud Kalash et al., which explains that “the malware binary file is divided into 8-bit sequences which are then converted to equivalent decimal values. This decimal vector is reshaped and a gray-scale image is generated that represents the malware sample,” like in Figure 1.18.
+
+
+
+
The authors then show “pictures” generated through this process of malware in different categories, as shown in Figure 1.19.
+
+
+
+
As you can see, the different types of malware look very distinctive to the human eye. The model the researchers trained based on this image representation was more accurate at malware classification than any previous approach shown in the academic literature. This suggests a good rule of thumb for converting a dataset into an image representation: if the human eye can recognize categories from the images, then a deep learning model should be able to do so too.
+
In general, you’ll find that a small number of general approaches in deep learning can go a long way, if you’re a bit creative in how you represent your data! You shouldn’t think of approaches like the ones described here as “hacky workarounds,” because actually they often (as here) beat previously state-of-the-art results. These really are the right ways to think about these problem domains.
+
+
+
Jargon Recap
+
We just covered a lot of information, so let’s recap briefly. Table 1.2 provides a handy vocabulary.
+
+
+
Table 1.2: Deep learning vocabulary
+
Term | Meaning
---- | -------
Label | The data that we’re trying to predict, such as “dog” or “cat”
Architecture | The template of the model that we’re trying to fit; the actual mathematical function that we’re passing the input data and parameters to
Model | The combination of the architecture with a particular set of parameters
Parameters | The values in the model that change what task it can do, and are updated through model training
Fit | Update the parameters of the model such that the predictions of the model using the input data match the target labels
Train | A synonym for fit
Pretrained model | A model that has already been trained, generally using a large dataset, and will be fine-tuned
Fine-tune | Update a pretrained model for a different task
Epoch | One complete pass through the input data
Loss | A measure of how good the model is, chosen to drive training via SGD
Metric | A measurement of how good the model is, using the validation set, chosen for human consumption
Validation set | A set of data held out from training, used only for measuring how good the model is
Training set | The data used for fitting the model; does not include any data from the validation set
Overfitting | Training a model in such a way that it remembers specific features of the input data, rather than generalizing well to data not seen during training
CNN | Convolutional neural network; a type of neural network that works particularly well for computer vision tasks
+
+
+
+
+
+
+
With this vocabulary in hand, we are now in a position to bring together all the key concepts introduced so far. Take a moment to review those definitions and read the following summary. If you can follow the explanation, then you’re well equipped to understand the discussions to come.
+
Machine learning is a discipline where we define a program not by writing it entirely ourselves, but by learning from data. Deep learning is a specialty within machine learning that uses neural networks with multiple layers. Image classification is a representative example (also known as image recognition). We start with labeled data; that is, a set of images where we have assigned a label to each image indicating what it represents. Our goal is to produce a program, called a model, which, given a new image, will make an accurate prediction regarding what that new image represents.
+
Every model starts with a choice of architecture, a general template for how that kind of model works internally. The process of training (or fitting) the model is the process of finding a set of parameter values (or weights) that specialize that general architecture into a model that works well for our particular kind of data. In order to define how well a model does on a single prediction, we need to define a loss function, which determines how we score a prediction as good or bad.
+
To make the training process go faster, we might start with a pretrained model—a model that has already been trained on someone else’s data. We can then adapt it to our data by training it a bit more on our data, a process called fine-tuning.
+
When we train a model, a key concern is to ensure that our model generalizes—that is, that it learns general lessons from our data which also apply to new items it will encounter, so that it can make good predictions on those items. The risk is that if we train our model badly, instead of learning general lessons it effectively memorizes what it has already seen, and then it will make poor predictions about new images. Such a failure is called overfitting. In order to avoid this, we always divide our data into two parts, the training set and the validation set. We train the model by showing it only the training set and then we evaluate how well the model is doing by seeing how well it performs on items from the validation set. In this way, we check if the lessons the model learns from the training set are lessons that generalize to the validation set. In order for a person to assess how well the model is doing on the validation set overall, we define a metric. During the training process, when the model has seen every item in the training set, we call that an epoch.
+
All these concepts apply to machine learning in general. That is, they apply to all sorts of schemes for defining a model by training it with data. What makes deep learning distinctive is a particular class of architectures: the architectures based on neural networks. In particular, tasks like image classification rely heavily on convolutional neural networks, which we will discuss shortly.
+
+
+
+
1.7 Deep Learning Is Not Just for Image Classification
+
Deep learning’s effectiveness for classifying images has been widely discussed in recent years, even showing superhuman results on complex tasks like recognizing malignant tumors in CT scans. But it can do a lot more than this, as we will show here.
+
For instance, let’s talk about something that is critically important for autonomous vehicles: localizing objects in a picture. If a self-driving car doesn’t know where a pedestrian is, then it doesn’t know how to avoid one! Creating a model that can recognize the content of every individual pixel in an image is called segmentation. Here is how we can train a segmentation model with fastai, using a subset of the Camvid dataset from the paper “Semantic Object Classes in Video: A High-Definition Ground Truth Database” by Gabriel J. Brostow, Julien Fauqueur, and Roberto Cipolla:
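+
Roughly, that training cell looks like this (a sketch based on fastai’s CAMVID_TINY sample; the label function and batch size follow that sample’s conventions, and the notebook’s exact values may differ):
+
+
from fastai.vision.all import *

path = untar_data(URLs.CAMVID_TINY)
dls = SegmentationDataLoaders.from_label_func(
    path, bs=8, fnames=get_image_files(path/"images"),
    label_func=lambda o: path/'labels'/f'{o.stem}_P{o.suffix}',   # each image has a matching *_P label mask
    codes=np.loadtxt(path/'codes.txt', dtype=str))                # class names for each pixel value
learn = unet_learner(dls, resnet34)
learn.fine_tune(8)
+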
We are not even going to walk through this code line by line, because it is nearly identical to our previous example! (Although we will be doing a deep dive into segmentation models in Chapter 15, along with all of the other models that we are briefly introducing in this chapter, and many, many more.)
+
We can visualize how well it achieved its task, by asking the model to color-code each pixel of an image. As you can see, it nearly perfectly classifies every pixel in every object. For instance, notice that all of the cars are overlaid with the same color and all of the trees are overlaid with the same color (in each pair of images, the lefthand image is the ground truth label and the right is the prediction from the model):
+
+
learn.show_results(max_n=6, figsize=(7,8))
+
+
+
+
+
+
+
+
+
+
One other area where deep learning has dramatically improved in the last couple of years is natural language processing (NLP). Computers can now generate text, translate automatically from one language to another, analyze comments, label words in sentences, and much more. Here is all of the code necessary to train a model that can classify the sentiment of a movie review better than anything that existed in the world just five years ago:
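+
The cell itself is along these lines (a sketch; the exact number of epochs and learning rate in the notebook may differ):
+
+
from fastai.text.all import *

dls = TextDataLoaders.from_folder(untar_data(URLs.IMDB), valid='test')
learn = text_classifier_learner(dls, AWD_LSTM, drop_mult=0.5, metrics=accuracy)
learn.fine_tune(4, 1e-2)
+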
If you hit a “CUDA out of memory error” after running this cell, click on the Kernel menu, then Restart. Instead of executing the cell above, copy and paste the following code in it:
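+
(This is the same sketch as above, with a smaller batch size passed through to the DataLoaders.)
+
+
from fastai.text.all import *

dls = TextDataLoaders.from_folder(untar_data(URLs.IMDB), valid='test', bs=32)
learn = text_classifier_learner(dls, AWD_LSTM, drop_mult=0.5, metrics=accuracy)
learn.fine_tune(4, 1e-2)
+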
This reduces the batch size to 32 (we will explain this later). If you keep hitting the same error, change 32 to 16.
+
This model is using the “IMDb Large Movie Review dataset” from the paper “Learning Word Vectors for Sentiment Analysis” by Andrew Maas et al. It works well with movie reviews of many thousands of words, but let’s test it out on a very short one to see how it does its thing:
+
+
learn.predict("I really liked that movie!")
+
+
+
+
+
('neg', tensor(0), tensor([0.8786, 0.1214]))
+
+
+
Here the first part of the result is the predicted label, the second part is the index of that label in our data vocabulary, and the last part is the probabilities the model attributed to each class, in the order they appear in the vocabulary.
+
Now it’s your turn! Write your own mini movie review, or copy one from the internet, and you can see what this model thinks about it.
+
+
Sidebar: The Order Matters
+
In a Jupyter notebook, the order in which you execute each cell is very important. It’s not like Excel, where everything gets updated as soon as you type something anywhere—it has an inner state that gets updated each time you execute a cell. For instance, when you run the first cell of the notebook (with the “CLICK ME” comment), you create an object called learn that contains a model and data for an image classification problem. If we were to run the cell just shown in the text (the one that predicts if a review is good or not) straight after, we would get an error as this learn object does not contain a text classification model. This cell needs to be run after the cell containing the text classification training code shown earlier in this chapter.
The outputs themselves can be deceiving, because they include the results of the last time the cell was executed; if you change the code inside a cell without executing it, the old (misleading) results will remain.
+
Except when we mention it explicitly, the notebooks provided on the book website are meant to be run in order, from top to bottom. In general, when experimenting, you will find yourself executing cells in any order to go fast (which is a super neat feature of Jupyter Notebook), but once you have explored and arrived at the final version of your code, make sure you can run the cells of your notebooks in order (your future self won’t necessarily remember the convoluted path you took otherwise!).
+
In command mode, pressing 0 twice will restart the kernel (which is the engine powering your notebook). This will wipe your state clean and make it as if you had just started in the notebook. Choose Run All Above from the Cell menu to run all cells above the point where you are. We have found this to be very useful when developing the fastai library.
+
+
+
End sidebar
+
If you ever have any questions about a fastai method, you should use the function doc, passing it the method name:
+
doc(learn.predict)
+
This will make a small window pop up with content like this:
+
+
A brief one-line explanation is provided by doc. The “Show in docs” link takes you to the full documentation, where you’ll find all the details and lots of examples. Also, most of fastai’s methods are just a handful of lines, so you can click the “source” link to see exactly what’s going on behind the scenes.
+
Let’s move on to something much less sexy, but perhaps significantly more widely commercially useful: building models from plain tabular data.
+
+
jargon: Tabular: Data that is in the form of a table, such as from a spreadsheet, database, or CSV file. A tabular model is a model that tries to predict one column of a table based on information in other columns of the table.
+
+
It turns out that this looks very similar too. Here is the code necessary to train a model that will predict whether a person is a high-income earner, based on their socioeconomic background:
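+
A sketch of that cell, based on fastai’s ADULT_SAMPLE dataset (the column names come from that CSV; the exact preprocessing in the notebook may differ):
+
+
from fastai.tabular.all import *

path = untar_data(URLs.ADULT_SAMPLE)
dls = TabularDataLoaders.from_csv(path/'adult.csv', path=path, y_names="salary",
    cat_names=['workclass', 'education', 'marital-status', 'occupation',
               'relationship', 'race'],                  # categorical columns
    cont_names=['age', 'fnlwgt', 'education-num'],       # continuous columns
    procs=[Categorify, FillMissing, Normalize])          # standard preprocessing steps
learn = tabular_learner(dls, metrics=accuracy)
+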
As you see, we had to tell fastai which columns are categorical (that is, contain values that are one of a discrete set of choices, such as occupation) and which are continuous (that is, contain a number that represents a quantity, such as age).
+
There is no pretrained model available for this task (in general, pretrained models are not widely available for any tabular modeling tasks, although some organizations have created them for internal use), so we don’t use fine_tune in this case. Instead we use fit_one_cycle, the most commonly used method for training fastai models from scratch (i.e. without transfer learning):
+
+
learn.fit_one_cycle(3)
+
+
+
+
+
epoch | train_loss | valid_loss | accuracy | time
----- | ---------- | ---------- | -------- | -----
0 | 0.372397 | 0.357177 | 0.832463 | 00:08
1 | 0.351544 | 0.341505 | 0.841523 | 00:08
2 | 0.338763 | 0.339184 | 0.845670 | 00:08
+
+
+
+
+
+
This model is using the Adult dataset, from the paper “Scaling Up the Accuracy of Naive-Bayes Classifiers: a Decision-Tree Hybrid” by Rob Kohavi, which contains some demographic data about individuals (like their education, marital status, race, sex, and whether or not they have an annual income greater than $50k). The model is over 80% accurate, and took around 30 seconds to train.
+
Let’s look at one more. Recommendation systems are very important, particularly in e-commerce. Companies like Amazon and Netflix try hard to recommend products or movies that users might like. Here’s how to train a model that will predict movies people might like, based on their previous viewing habits, using the MovieLens dataset:
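+
The cell is along these lines (a sketch using fastai’s MovieLens sample; the exact arguments may differ from the notebook):
+
+
from fastai.collab import *

path = untar_data(URLs.ML_SAMPLE)
dls = CollabDataLoaders.from_csv(path/'ratings.csv')
learn = collab_learner(dls, y_range=(0.5, 5.5))   # upper bound is often set slightly above the top rating
learn.fine_tune(10)
+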
This model is predicting movie ratings on a scale of 0.5 to 5.0 to within around 0.6 average error. Since we’re predicting a continuous number, rather than a category, we have to tell fastai what range our target has, using the y_range parameter.
+
Although we’re not actually using a pretrained model (for the same reason that we didn’t for the tabular model), this example shows that fastai lets us use fine_tune anyway in this case (you’ll learn how and why this works in Chapter 5). Sometimes it’s best to experiment with fine_tune versus fit_one_cycle to see which works best for your dataset.
+
We can use the same show_results call we saw earlier to view a few examples of user and movie IDs, actual ratings, and predictions:
+
+
learn.show_results()
+
+
+
+
+
+
+
+
+
  | userId | movieId | rating | rating_pred
- | ------ | ------- | ------ | -----------
0 | 66.0 | 79.0 | 4.0 | 3.978900
1 | 97.0 | 15.0 | 4.0 | 3.851795
2 | 55.0 | 79.0 | 3.5 | 3.945623
3 | 98.0 | 91.0 | 4.0 | 4.458704
4 | 53.0 | 7.0 | 5.0 | 4.670005
5 | 26.0 | 69.0 | 5.0 | 4.319870
6 | 81.0 | 16.0 | 4.5 | 4.426761
7 | 80.0 | 7.0 | 4.0 | 4.046183
8 | 51.0 | 94.0 | 5.0 | 3.499996
+
+
+
+
+
+
+
+
Sidebar: Datasets: Food for Models
+
You’ve already seen quite a few models in this section, each one trained using a different dataset to do a different task. In machine learning and deep learning, we can’t do anything without data. So, the people that create datasets for us to train our models on are the (often underappreciated) heroes. Some of the most useful and important datasets are those that become important academic baselines; that is, datasets that are widely studied by researchers and used to compare algorithmic changes. Some of these become household names (at least, among households that train models!), such as MNIST, CIFAR-10, and ImageNet.
+
The datasets used in this book have been selected because they provide great examples of the kinds of data that you are likely to encounter, and the academic literature has many examples of model results using these datasets to which you can compare your work.
+
Most datasets used in this book took the creators a lot of work to build. For instance, later in the book we’ll be showing you how to create a model that can translate between French and English. The key input to this is a French/English parallel text corpus prepared back in 2009 by Professor Chris Callison-Burch of the University of Pennsylvania. This dataset contains over 20 million sentence pairs in French and English. He built the dataset in a really clever way: by crawling millions of Canadian web pages (which are often multilingual) and then using a set of simple heuristics to transform URLs of French content into URLs pointing to the same content in English.
+
As you look at datasets throughout this book, think about where they might have come from, and how they might have been curated. Then think about what kinds of interesting datasets you could create for your own projects. (We’ll even take you step by step through the process of creating your own image dataset soon.)
+
fast.ai has spent a lot of time creating cut-down versions of popular datasets that are specially designed to support rapid prototyping and experimentation, and to be easier to learn with. In this book we will often start by using one of the cut-down versions and later scale up to the full-size version (just as we’re doing in this chapter!). In fact, this is how the world’s top practitioners do their modeling in practice; they do most of their experimentation and prototyping with subsets of their data, and only use the full dataset when they have a good understanding of what they have to do.
+
+
+
End sidebar
+
Each of the models we trained showed a training and validation loss. A good validation set is one of the most important pieces of the training process. Let’s see why and learn how to create one.
+
+
+
+
1.8 Validation Sets and Test Sets
+
As we’ve discussed, the goal of a model is to make predictions about data. But the model training process is fundamentally dumb. If we trained a model with all our data, and then evaluated the model using that same data, we would not be able to tell how well our model can perform on data it hasn’t seen. Without this very valuable piece of information to guide us in training our model, there is a very good chance it would become good at making predictions about that data but would perform poorly on new data.
+
To avoid this, our first step was to split our dataset into two sets: the training set (which our model sees in training) and the validation set, also known as the development set (which is used only for evaluation). This lets us test that the model learns lessons from the training data that generalize to new data, the validation data.
+
One way to understand this situation is that, in a sense, we don’t want our model to get good results by “cheating.” If it makes an accurate prediction for a data item, that should be because it has learned characteristics of that kind of item, and not because the model has been shaped by actually having seen that particular item.
+
Splitting off our validation data means our model never sees it in training and so is completely untainted by it, and is not cheating in any way. Right?
+
In fact, not necessarily. The situation is more subtle. This is because in realistic scenarios we rarely build a model just by training its weight parameters once. Instead, we are likely to explore many versions of a model through various modeling choices regarding network architecture, learning rates, data augmentation strategies, and other factors we will discuss in upcoming chapters. Many of these choices can be described as choices of hyperparameters. The word reflects that they are parameters about parameters, since they are the higher-level choices that govern the meaning of the weight parameters.
+
The problem is that even though the ordinary training process is only looking at predictions on the training data when it learns values for the weight parameters, the same is not true of us. We, as modelers, are evaluating the model by looking at predictions on the validation data when we decide to explore new hyperparameter values! So subsequent versions of the model are, indirectly, shaped by us having seen the validation data. Just as the automatic training process is in danger of overfitting the training data, we are in danger of overfitting the validation data through human trial and error and exploration.
+
The solution to this conundrum is to introduce another level of even more highly reserved data, the test set. Just as we hold back the validation data from the training process, we must hold back the test set data even from ourselves. It cannot be used to improve the model; it can only be used to evaluate the model at the very end of our efforts. In effect, we define a hierarchy of cuts of our data, based on how fully we want to hide it from training and modeling processes: training data is fully exposed, the validation data is less exposed, and test data is totally hidden. This hierarchy parallels the different kinds of modeling and evaluation processes themselves—the automatic training process with back propagation, the more manual process of trying different hyper-parameters between training sessions, and the assessment of our final result.
+
The test and validation sets should have enough data to ensure that you get a good estimate of your accuracy. If you’re creating a cat detector, for instance, you generally want at least 30 cats in your validation set. That means that if you have a dataset with thousands of items, using the default 20% validation set size may be more than you need. On the other hand, if you have lots of data, using some of it for validation probably doesn’t have any downsides.
+
Having two levels of “reserved data”—a validation set and a test set, with one level representing data that you are virtually hiding from yourself—may seem a bit extreme. But the reason it is often necessary is because models tend to gravitate toward the simplest way to do good predictions (memorization), and we as fallible humans tend to gravitate toward fooling ourselves about how well our models are performing. The discipline of the test set helps us keep ourselves intellectually honest. That doesn’t mean we always need a separate test set—if you have very little data, you may need to just have a validation set—but generally it’s best to use one if at all possible.
+
This same discipline can be critical if you intend to hire a third party to perform modeling work on your behalf. A third party might not understand your requirements accurately, or their incentives might even encourage them to misunderstand them. A good test set can greatly mitigate these risks and let you evaluate whether their work solves your actual problem.
+
To put it bluntly, if you’re a senior decision maker in your organization (or you’re advising senior decision makers), the most important takeaway is this: if you ensure that you really understand what test and validation sets are and why they’re important, then you’ll avoid the single biggest source of failures we’ve seen when organizations decide to use AI. For instance, if you’re considering bringing in an external vendor or service, make sure that you hold out some test data that the vendor never gets to see. Then you check their model on your test data, using a metric that you choose based on what actually matters to you in practice, and you decide what level of performance is adequate. (It’s also a good idea for you to try out some simple baseline yourself, so you know what a really simple model can achieve. Often it’ll turn out that your simple model performs just as well as one produced by an external “expert”!)
+
+
Use Judgment in Defining Test Sets
+
To do a good job of defining a validation set (and possibly a test set), you will sometimes want to do more than just randomly grab a fraction of your original dataset. Remember: a key property of the validation and test sets is that they must be representative of the new data you will see in the future. This may sound like an impossible order! By definition, you haven’t seen this data yet. But you usually still do know some things.
+
It’s instructive to look at a few example cases. Many of these examples come from predictive modeling competitions on the Kaggle platform, which is a good representation of problems and methods you might see in practice.
+
One case might be if you are looking at time series data. For a time series, choosing a random subset of the data will be both too easy (you can look at the data both before and after the dates you are trying to predict) and not representative of most business use cases (where you are using historical data to build a model for use in the future). If your data includes the date and you are building a model to use in the future, you will want to choose a continuous section with the latest dates as your validation set (for instance, the last two weeks or last month of available data).
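+
For instance, with a pandas DataFrame you might hold out the most recent slice of dates rather than a random sample. Here is a minimal sketch, with a hypothetical file name and column name:
+
+
import pandas as pd

df = pd.read_csv('sales.csv', parse_dates=['date'])   # hypothetical dataset
df = df.sort_values('date')
cutoff = df['date'].max() - pd.Timedelta(weeks=2)     # hold back the last two weeks
train_df = df[df['date'] <= cutoff]
valid_df = df[df['date'] > cutoff]
+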
+
Suppose you want to split the time series data in Figure 1.20 into training and validation sets.
+
+
+
+
A random subset is a poor choice (too easy to fill in the gaps, and not indicative of what you’ll need in production), as we can see in Figure 1.21.
+
+
+
+
Instead, use the earlier data as your training set (and the later data for the validation set), as shown in Figure 1.22.
+
+
+
+
For example, Kaggle had a competition to predict the sales in a chain of Ecuadorian grocery stores. Kaggle’s training data ran from Jan 1 2013 to Aug 15 2017, and the test data spanned Aug 16 2017 to Aug 31 2017. That way, the competition organizer ensured that entrants were making predictions for a time period that was in the future, from the perspective of their model. This is similar to the way quant hedge fund traders do back-testing to check whether their models are predictive of future periods, based on past data.
+
A second common case is when you can easily anticipate ways the data you will be making predictions for in production may be qualitatively different from the data you have to train your model with.
+
In the Kaggle distracted driver competition, the independent variables are pictures of drivers at the wheel of a car, and the dependent variables are categories such as texting, eating, or safely looking ahead. Lots of pictures are of the same drivers in different positions, as we can see in Figure 1.23. If you were an insurance company building a model from this data, note that you would be most interested in how the model performs on drivers it hasn’t seen before (since you would likely have training data only for a small group of people). In recognition of this, the test data for the competition consists of images of people that don’t appear in the training set.
+
+
+
+
If you put one of the images in Figure 1.23 in your training set and one in the validation set, your model will have an easy time making a prediction for the one in the validation set, so it will seem to be performing better than it would on new people. Another perspective is that if you used all the people in training your model, your model might be overfitting to particularities of those specific people, and not just learning the states (texting, eating, etc.).
+
A similar dynamic was at work in the Kaggle fisheries competition to identify the species of fish caught by fishing boats in order to reduce illegal fishing of endangered populations. The test set consisted of boats that didn’t appear in the training data. This means that you’d want your validation set to include boats that are not in the training set.
+
Sometimes it may not be clear how your validation data will differ. For instance, for a problem using satellite imagery, you’d need to gather more information on whether the training set just contained certain geographic locations, or if it came from geographically scattered data.
+
Now that you have gotten a taste of how to build a model, you can decide what you want to dig into next.
+
+
+
+
1.9 A Choose Your Own Adventure moment
+
If you would like to learn more about how to use deep learning models in practice, including how to identify and fix errors, create a real working web application, and avoid your model causing unexpected harm to your organization or society more generally, then keep reading the next two chapters. If you would like to start learning the foundations of how deep learning works under the hood, skip to Chapter 4. (Did you ever read Choose Your Own Adventure books as a kid? Well, this is kind of like that… except with more deep learning than that book series contained.)
+
You will need to read all these chapters to progress further in the book, but it is totally up to you which order you read them in. They don’t depend on each other. If you skip ahead to Chapter 4, we will remind you at the end to come back and read the chapters you skipped over before you go any further.
+
+
+
1.10 Questionnaire
+
It can be hard to know in pages and pages of prose what the key things are that you really need to focus on and remember. So, we’ve prepared a list of questions and suggested steps to complete at the end of each chapter. All the answers are in the text of the chapter, so if you’re not sure about anything here, reread that part of the text and make sure you understand it. Answers to all these questions are also available on the book’s website. You can also visit the forums if you get stuck to get help from other folks studying this material.
+
For more questions, including detailed answers and links to the video timeline, have a look at Radek Osmulski’s aiquizzes.
+
+
Do you need these for deep learning?
+
+
Lots of math T / F
+
Lots of data T / F
+
Lots of expensive computers T / F
+
A PhD T / F
+
+
Name five areas where deep learning is now the best in the world.
+
What was the name of the first device that was based on the principle of the artificial neuron?
+
Based on the book of the same name, what are the requirements for parallel distributed processing (PDP)?
+
What were the two theoretical misunderstandings that held back the field of neural networks?
+
What is a GPU?
+
Open a notebook and execute a cell containing: 1+1. What happens?
+
Follow through each cell of the stripped version of the notebook for this chapter. Before executing each cell, guess what will happen.
+
Complete the Jupyter Notebook online appendix.
+
Why is it hard to use a traditional computer program to recognize images in a photo?
+
What did Samuel mean by “weight assignment”?
+
What term do we normally use in deep learning for what Samuel called “weights”?
+
Draw a picture that summarizes Samuel’s view of a machine learning model.
+
Why is it hard to understand why a deep learning model makes a particular prediction?
+
What is the name of the theorem that shows that a neural network can solve any mathematical problem to any level of accuracy?
+
What do you need in order to train a model?
+
How could a feedback loop impact the rollout of a predictive policing model?
+
Do we always have to use 224×224-pixel images with the cat recognition model?
+
What is the difference between classification and regression?
+
What is a validation set? What is a test set? Why do we need them?
+
What will fastai do if you don’t provide a validation set?
+
Can we always use a random sample for a validation set? Why or why not?
+
What is overfitting? Provide an example.
+
What is a metric? How does it differ from “loss”?
+
How can pretrained models help?
+
What is the “head” of a model?
+
What kinds of features do the early layers of a CNN find? How about the later layers?
+
Are image models only useful for photos?
+
What is an “architecture”?
+
What is segmentation?
+
What is y_range used for? When do we need it?
+
What are “hyperparameters”?
+
What’s the best way to avoid failures when using AI in an organization?
+
+
+
Further Research
+
Each chapter also has a “Further Research” section that poses questions that aren’t fully answered in the text, or gives more advanced assignments. Answers to these questions aren’t on the book’s website; you’ll need to do your own research!
+
+
Why is a GPU useful for deep learning? How is a CPU different, and why is it less effective for deep learning?
+
Try to think of three areas where feedback loops might impact the use of machine learning. See if you can find documented examples of that happening in practice.
Having seen what it looks like to actually train a variety of models in Chapter 2, let’s now look under the hood and see exactly what is going on. We’ll start by using computer vision to introduce fundamental tools and concepts for deep learning.
+
To be exact, we’ll discuss the roles of arrays and tensors and of broadcasting, a powerful technique for using them expressively. We’ll explain stochastic gradient descent (SGD), the mechanism for learning by updating weights automatically. We’ll discuss the choice of a loss function for our basic classification task, and the role of mini-batches. We’ll also describe the math that a basic neural network is actually doing. Finally, we’ll put all these pieces together.
+
In future chapters we’ll do deep dives into other applications as well, and see how these concepts and tools generalize. But this chapter is about laying foundation stones. To be frank, that also makes this one of the hardest chapters, because of how these concepts all depend on each other. Like an arch, all the stones need to be in place for the structure to stay up. Also like an arch, once that happens, it’s a powerful structure that can support other things. But it requires some patience to assemble.
+
Let’s begin. The first step is to consider how images are represented in a computer.
+
+
4.1 Pixels: The Foundations of Computer Vision
+
In order to understand what happens in a computer vision model, we first have to understand how computers handle images. We’ll use one of the most famous datasets in computer vision, MNIST, for our experiments. MNIST contains images of handwritten digits, collected by the National Institute of Standards and Technology and collated into a machine learning dataset by Yann Lecun and his colleagues. Lecun used MNIST in 1998 in Lenet-5, the first computer system to demonstrate practically useful recognition of handwritten digit sequences. This was one of the most important breakthroughs in the history of AI.
+
+
+
4.2 Sidebar: Tenacity and Deep Learning
+
The story of deep learning is one of tenacity and grit by a handful of dedicated researchers. After early hopes (and hype!) neural networks went out of favor in the 1990’s and 2000’s, and just a handful of researchers kept trying to make them work well. Three of them, Yann Lecun, Yoshua Bengio, and Geoffrey Hinton, were awarded the highest honor in computer science, the Turing Award (generally considered the “Nobel Prize of computer science”), in 2018 after triumphing despite the deep skepticism and disinterest of the wider machine learning and statistics community.
+
Geoff Hinton has told of how even academic papers showing dramatically better results than anything previously published would be rejected by top journals and conferences, just because they used a neural network. Yann Lecun’s work on convolutional neural networks, which we will study in the next section, showed that these models could read handwritten text—something that had never been achieved before. However, his breakthrough was ignored by most researchers, even as it was used commercially to read 10% of the checks in the US!
+
In addition to these three Turing Award winners, there are many other researchers who have battled to get us to where we are today. For instance, Jurgen Schmidhuber (who many believe should have shared in the Turing Award) pioneered many important ideas, including working with his student Sepp Hochreiter on the long short-term memory (LSTM) architecture (widely used for speech recognition and other text modeling tasks, and used in the IMDb example in Chapter 1). Perhaps most important of all, Paul Werbos in 1974 invented back-propagation for neural networks, the technique shown in this chapter and used universally for training neural networks (Werbos 1994). His development was almost entirely ignored for decades, but today it is considered the most important foundation of modern AI.
+
There is a lesson here for all of us! On your deep learning journey you will face many obstacles, both technical and (even more difficult) those posed by people around you who don’t believe you’ll be successful. There’s one guaranteed way to fail, and that’s to stop trying. We’ve seen that the only consistent trait amongst every fast.ai student that’s gone on to be a world-class practitioner is that they are all very tenacious.
+
+
+
4.3 End sidebar
+
For this initial tutorial we are just going to try to create a model that can classify any image as a 3 or a 7. So let’s download a sample of MNIST that contains images of just these digits:
+
+
path = untar_data(URLs.MNIST_SAMPLE)
+
+
We can see what’s in this directory by using ls, a method added by fastai. This method returns an object of a special fastai class called L, which has all the functionality of Python’s built-in list, plus a lot more. One of its handy features is that, when printed, it displays the count of items before listing the items themselves (if there are more than 10 items, it just shows the first few):
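+
The call itself is simply the following (we omit the printed output here, which lists the folders in the sample, including train and valid):
+
+
path.ls()
+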
The MNIST dataset follows a common layout for machine learning datasets: separate folders for the training set and the validation set (and/or test set). Let’s see what’s inside the training set:
+
+
(path/'train').ls()
+
+
(#2) [Path('train/7'),Path('train/3')]
+
+
+
There’s a folder of 3s, and a folder of 7s. In machine learning parlance, we say that “3” and “7” are the labels (or targets) in this dataset. Let’s take a look in one of these folders (using sorted to ensure we all get the same order of files):
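+
That cell looks something like this (the names threes and sevens are the ones reused later in this chapter; the printed listing of file paths is omitted here):
+
+
threes = (path/'train'/'3').ls().sorted()   # an L of 6,131 image file paths
sevens = (path/'train'/'7').ls().sorted()
threes
+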
As we might expect, it’s full of image files. Let’s take a look at one now. Here’s an image of a handwritten number 3, taken from the famous MNIST dataset of handwritten numbers:
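+
Opening one of the files looks something like this (im3_path and im3 are just illustrative names; leaving im3 as the last line of the cell lets Jupyter display the image):
+
+
im3_path = threes[1]
im3 = Image.open(im3_path)
im3
+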
Here we are using the Image class from the Python Imaging Library (PIL), which is the most widely used Python package for opening, manipulating, and viewing images. Jupyter knows about PIL images, so it displays the image for us automatically.
+
In a computer, everything is represented as a number. To view the numbers that make up this image, we have to convert it to a NumPy array or a PyTorch tensor. For instance, here’s what a section of the image looks like, converted to a NumPy array:
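+
For example, a sketch using the im3 image opened above (the slice bounds match the description that follows):
+
+
import numpy as np

np.array(im3)[4:10, 4:10]   # a small 6x6 patch near the top-left corner of the 28x28 image
+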
The 4:10 indicates we requested the rows from index 4 (included) to 10 (not included) and the same for the columns. NumPy indexes from top to bottom and left to right, so this section is located in the top-left corner of the image. Here’s the same thing as a PyTorch tensor:
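+
The tensor version is almost identical (tensor is the helper fastai provides for converting to a PyTorch tensor):
+
+
tensor(im3)[4:10, 4:10]
+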
We can slice the array to pick just the part with the top of the digit in it, and then use a Pandas DataFrame to color-code the values using a gradient, which shows us clearly how the image is created from the pixel values:
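+
A sketch of that cell (the exact slice bounds and styling are illustrative):
+
+
import pandas as pd

im3_t = tensor(im3)
df = pd.DataFrame(im3_t[4:15, 4:22])   # just the top part of the digit
df.style.set_properties(**{'font-size': '6pt'}).background_gradient('Greys')
+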
You can see that the background white pixels are stored as the number 0, black is the number 255, and shades of gray are between the two. The entire image contains 28 pixels across and 28 pixels down, for a total of 784 pixels. (This is much smaller than an image that you would get from a phone camera, which has millions of pixels, but is a convenient size for our initial learning and experiments. We will build up to bigger, full-color images soon.)
+
So, now you’ve seen what an image looks like to a computer, let’s recall our goal: create a model that can recognize 3s and 7s. How might you go about getting a computer to do that?
+
+
Warning: Stop and Think!: Before you read on, take a moment to think about how a computer might be able to recognize these two different digits. What kinds of features might it be able to look at? How might it be able to identify these features? How could it combine them together? Learning works best when you try to solve problems yourself, rather than just reading somebody else’s answers; so step away from this book for a few minutes, grab a piece of paper and pen, and jot some ideas down…
+
+
+
+
4.4 First Try: Pixel Similarity
+
So, here is a first idea: how about we find the average pixel value for every pixel of the 3s, then do the same for the 7s. This will give us two group averages, defining what we might call the “ideal” 3 and 7. Then, to classify an image as one digit or the other, we see which of these two ideal digits the image is most similar to. This certainly seems like it should be better than nothing, so it will make a good baseline.
+
+
jargon: Baseline: A simple model which you are confident should perform reasonably well. It should be very simple to implement, and very easy to test, so that you can then test each of your improved ideas, and make sure they are always better than your baseline. Without starting with a sensible baseline, it is very difficult to know whether your super-fancy models are actually any good. One good approach to creating a baseline is doing what we have done here: think of a simple, easy-to-implement model. Another good approach is to search around to find other people that have solved similar problems to yours, and download and run their code on your dataset. Ideally, try both of these!
+
+
Step one for our simple model is to get the average of pixel values for each of our two groups. In the process of doing this, we will learn a lot of neat Python numeric programming tricks!
+
Let’s create a tensor containing all of our 3s stacked together. We already know how to create a tensor containing a single image. To create a tensor containing all the images in a directory, we will first use a Python list comprehension to create a plain list of the single image tensors.
+
We will use Jupyter to do some little checks of our work along the way—in this case, making sure that the number of returned items seems reasonable:
+
+
seven_tensors = [tensor(Image.open(o)) for o in sevens]
three_tensors = [tensor(Image.open(o)) for o in threes]
len(three_tensors),len(seven_tensors)
+
+
(6131, 6265)
+
+
+
+
note: List Comprehensions: List and dictionary comprehensions are a wonderful feature of Python. Many Python programmers use them every day, including the authors of this book—they are part of “idiomatic Python.” But programmers coming from other languages may have never seen them before. There are a lot of great tutorials just a web search away, so we won’t spend a long time discussing them now. Here is a quick explanation and example to get you started. A list comprehension looks like this: new_list = [f(o) for o in a_list if o>0]. This will return every element of a_list that is greater than 0, after passing it to the function f. There are three parts here: the collection you are iterating over (a_list), an optional filter (if o>0), and something to do to each element (f(o)). It’s not only shorter to write but way faster than the alternative ways of creating the same list with a loop.
+
+
We’ll also check that one of the images looks okay. Since we now have tensors (which Jupyter by default will print as values), rather than PIL images (which Jupyter by default will display as images), we need to use fastai’s show_image function to display it:
+
+
show_image(three_tensors[1]);
+
+
+
+
+
+
+
For every pixel position, we want to compute the average over all the images of the intensity of that pixel. To do this we first combine all the images in this list into a single three-dimensional tensor. The most common way to describe such a tensor is to call it a rank-3 tensor. We often need to stack up individual tensors in a collection into a single tensor. Unsurprisingly, PyTorch comes with a function called stack that we can use for this purpose.
+
Some operations in PyTorch, such as taking a mean, require us to cast our integer types to float types. Since we’ll be needing this later, we’ll also cast our stacked tensor to float now. Casting in PyTorch is as simple as typing the name of the type you wish to cast to, and treating it as a method.
+
Generally when images are floats, the pixel values are expected to be between 0 and 1, so we will also divide by 255 here:
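+
Putting the stacking, casting, and scaling together looks like this (the names stacked_threes and stacked_sevens are reused below; the final line prints the shape discussed next):
+
+
import torch

stacked_sevens = torch.stack(seven_tensors).float() / 255
stacked_threes = torch.stack(three_tensors).float() / 255
stacked_threes.shape   # torch.Size([6131, 28, 28])
+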
Perhaps the most important attribute of a tensor is its shape. This tells you the length of each axis. In this case, we can see that we have 6,131 images, each of size 28×28 pixels. There is nothing specifically about this tensor that says that the first axis is the number of images, the second is the height, and the third is the width—the semantics of a tensor are entirely up to us, and how we construct it. As far as PyTorch is concerned, it is just a bunch of numbers in memory.
+
The length of a tensor’s shape is its rank:
+
+
len(stacked_threes.shape)
+
+
3
+
+
+
It is really important for you to commit to memory and practice these bits of tensor jargon: rank is the number of axes or dimensions in a tensor; shape is the size of each axis of a tensor.
+
+
A: Watch out because the term “dimension” is sometimes used in two ways. Consider that we live in “three-dimensional space” where a physical position can be described by a 3-vector v. But according to PyTorch, the attribute v.ndim (which sure looks like the “number of dimensions” of v) equals one, not three! Why? Because v is a vector, which is a tensor of rank one, meaning that it has only one axis (even if that axis has a length of three). In other words, sometimes dimension is used for the size of an axis (“space is three-dimensional”); other times, it is used for the rank, or the number of axes (“a matrix has two dimensions”). When confused, I find it helpful to translate all statements into terms of rank, axis, and length, which are unambiguous terms.
+
+
We can also get a tensor’s rank directly with ndim:
+
+
stacked_threes.ndim
+
+
3
+
+
+
Finally, we can compute what the ideal 3 looks like. We calculate the mean of all the image tensors by taking the mean along dimension 0 of our stacked, rank-3 tensor. This is the dimension that indexes over all the images.
+
In other words, for every pixel position, this will compute the average of that pixel over all images. The result will be one value for every pixel position, or a single image. Here it is:
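+
A sketch of that computation (we call the result mean3, a name we reuse below):
+
+
mean3 = stacked_threes.mean(0)   # average over the image axis, leaving a single 28x28 tensor
show_image(mean3);
+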
According to this dataset, this is the ideal number 3! (You may not like it, but this is what peak number 3 performance looks like.) You can see how it’s very dark where all the images agree it should be dark, but it becomes wispy and blurry where the images disagree.
+
Let’s do the same thing for the 7s, but put all the steps together at once to save some time:
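+
Again just a sketch, mirroring the steps above:
+
+
mean7 = stacked_sevens.mean(0)
show_image(mean7);
+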
Let’s now pick an arbitrary 3 and measure its distance from our “ideal digits.”
+
+
stop: Stop and Think!: How would you calculate how similar a particular image is to each of our ideal digits? Remember to step away from this book and jot down some ideas before you move on! Research shows that recall and understanding improve dramatically when you are engaged with the learning process by solving problems, experimenting, and trying new ideas yourself.
+
+
Here’s a sample 3:
+
+
a_3 = stacked_threes[1]
show_image(a_3);
+
+
+
+
+
+
+
How can we determine its distance from our ideal 3? We can’t just add up the differences between the pixels of this image and the ideal digit. Some differences will be positive while others will be negative, and these differences will cancel out, resulting in a situation where an image that is too dark in some places and too light in others might be shown as having zero total differences from the ideal. That would be misleading!
+
To avoid this, there are two main ways data scientists measure distance in this context:
+
+
Take the mean of the absolute value of differences (absolute value is the function that replaces negative values with positive values). This is called the mean absolute difference or L1 norm.
+
Take the mean of the square of differences (which makes everything positive) and then take the square root (which undoes the squaring). This is called the root mean squared error (RMSE) or L2 norm.
+
+
+
important: It’s Okay to Have Forgotten Your Math: In this book we generally assume that you have completed high school math, and remember at least some of it… But everybody forgets some things! It all depends on what you happen to have had reason to practice in the meantime. Perhaps you have forgotten what a square root is, or exactly how they work. No problem! Any time you come across a maths concept that is not explained fully in this book, don’t just keep moving on; instead, stop and look it up. Make sure you understand the basic idea, how it works, and why we might be using it. One of the best places to refresh your understanding is Khan Academy. For instance, Khan Academy has a great introduction to square roots.
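+
Here is a sketch of both measurements, comparing our sample a_3 to the ideal 3 and the ideal 7 (mean3 and mean7 from above):
+
dist_3_abs = (a_3 - mean3).abs().mean()
dist_3_sqr = ((a_3 - mean3)**2).mean().sqrt()
dist_3_abs, dist_3_sqr
+
dist_7_abs = (a_3 - mean7).abs().mean()
dist_7_sqr = ((a_3 - mean7)**2).mean().sqrt()
dist_7_abs, dist_7_sqr
+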
In both cases, the distance between our 3 and the “ideal” 3 is less than the distance to the ideal 7. So our simple model will give the right prediction in this case.
+
PyTorch already provides both of these as loss functions. You’ll find these inside torch.nn.functional, which the PyTorch team recommends importing as F (and is available by default under that name in fastai):
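+
For instance (taking the square root of the MSE so it is comparable to the L2 distance above):
+
F.l1_loss(a_3, mean7), F.mse_loss(a_3, mean7).sqrt()
+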
Here mse stands for mean squared error, and l1 refers to the standard mathematical jargon for mean absolute value (in math it’s called the L1 norm).
+
+
S: Intuitively, the difference between L1 norm and mean squared error (MSE) is that the latter will penalize bigger mistakes more heavily than the former (and be more lenient with small mistakes).
+
+
+
J: When I first came across this “L1” thingie, I looked it up to see what on earth it meant. I found on Google that it is a vector norm using absolute value, so looked up vector norm and started reading: Given a vector space V over a field F of the real or complex numbers, a norm on V is a nonnegative-valued function p: V → [0,+∞) with the following properties: For all a ∈ F and all u, v ∈ V, p(u + v) ≤ p(u) + p(v)… Then I stopped reading. “Ugh, I’ll never understand math!” I thought, for the thousandth time. Since then I’ve learned that every time these complex mathy bits of jargon come up in practice, it turns out I can replace them with a tiny bit of code! Like, the L1 loss is just equal to (a-b).abs().mean(), where a and b are tensors. I guess mathy folks just think differently than me… I’ll make sure in this book that every time some mathy jargon comes up, I’ll give you the little bit of code it’s equal to as well, and explain in common-sense terms what’s going on.
+
+
We just completed various mathematical operations on PyTorch tensors. If you’ve done some numeric programming in NumPy before, you may recognize these as being similar to NumPy arrays. Let’s have a look at those two very important data structures.
+
+
NumPy Arrays and PyTorch Tensors
+
NumPy is the most widely used library for scientific and numeric programming in Python. It provides very similar functionality and a very similar API to that provided by PyTorch; however, it does not support using the GPU or calculating gradients, which are both critical for deep learning. Therefore, in this book we will generally use PyTorch tensors instead of NumPy arrays, where possible.
+
(Note that fastai adds some features to NumPy and PyTorch to make them a bit more similar to each other. If any code in this book doesn’t work on your computer, it’s possible that you forgot to include a line like this at the start of your notebook: from fastai.vision.all import *.)
+
But what are arrays and tensors, and why should you care?
+
Python is slow compared to many languages. Anything fast in Python, NumPy, or PyTorch is likely to be a wrapper for a compiled object written (and optimized) in another language—specifically C. In fact, NumPy arrays and PyTorch tensors can finish computations many thousands of times faster than using pure Python.
+
A NumPy array is a multidimensional table of data, with all items of the same type. Since that can be any type at all, they can even be arrays of arrays, with the innermost arrays potentially being different sizes—this is called a “jagged array.” By “multidimensional table” we mean, for instance, a list (dimension of one), a table or matrix (dimension of two), a “table of tables” or “cube” (dimension of three), and so forth. If the items are all of some simple type such as integer or float, then NumPy will store them as a compact C data structure in memory. This is where NumPy shines. NumPy has a wide variety of operators and methods that can run computations on these compact structures at the same speed as optimized C, because they are written in optimized C.
+
A PyTorch tensor is nearly the same thing as a NumPy array, but with an additional restriction that unlocks some additional capabilities. It’s the same in that it, too, is a multidimensional table of data, with all items of the same type. However, the restriction is that a tensor cannot use just any old type—it has to use a single basic numeric type for all components. For example, a PyTorch tensor cannot be jagged. It is always a regularly shaped multidimensional rectangular structure.
+
The vast majority of methods and operators supported by NumPy on these structures are also supported by PyTorch, but PyTorch tensors have additional capabilities. One major capability is that these structures can live on the GPU, in which case their computation will be optimized for the GPU and can run much faster (given lots of values to work on). In addition, PyTorch can automatically calculate derivatives of these operations, including combinations of operations. As you’ll see, it would be impossible to do deep learning in practice without this capability.
+
+
S: If you don’t know what C is, don’t worry as you won’t need it at all. In a nutshell, it’s a low-level (low-level means more similar to the language that computers use internally) language that is very fast compared to Python. To take advantage of its speed while programming in Python, try to avoid as much as possible writing loops, and replace them by commands that work directly on arrays or tensors.
+
+
Perhaps the most important new coding skill for a Python programmer to learn is how to effectively use the array/tensor APIs. We will be showing lots more tricks later in this book, but here’s a summary of the key things you need to know for now.
+
To create an array or tensor, pass a list (or list of lists, or list of lists of lists, etc.) to array() or tensor():
+
+
data = [[1,2,3],[4,5,6]]
arr = array(data)
tns = tensor(data)
+
+
+
arr # numpy
+
+
array([[1, 2, 3],
       [4, 5, 6]])
+
+
+
+
tns # pytorch
+
+
tensor([[1, 2, 3],
        [4, 5, 6]])
+
+
+
All the operations that follow are shown on tensors, but the syntax and results for NumPy arrays are identical.
+
You can select a row (note that, like lists in Python, tensors are 0-indexed so 1 refers to the second row/column):
+
+
tns[1]
+
+
tensor([4, 5, 6])
+
+
+
or a column, by using : to indicate all of the first axis (we sometimes refer to the dimensions of tensors/arrays as axes):
+
+
tns[:,1]
+
+
tensor([2, 5])
+
+
+
You can combine these with Python slice syntax ([start:end] with end being excluded) to select part of a row or column:
+
+
tns[1,1:3]
+
+
tensor([5, 6])
+
+
+
And you can use the standard operators such as +, -, *, /:
+
+
tns+1
+
+
tensor([[2, 3, 4],
        [5, 6, 7]])
+
+
+
Tensors have a type:
+
+
tns.type()
+
+
'torch.LongTensor'
+
+
+
And will automatically change type as needed, for example from int to float:
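+
For example, multiplying our integer tensor by a float produces a float tensor:
+
tns*1.5
+
tensor([[1.5000, 3.0000, 4.5000],
        [6.0000, 7.5000, 9.0000]])
+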
So, is our baseline model any good? To quantify this, we must define a metric.
+
+
+
+
4.5 Computing Metrics Using Broadcasting
+
Recall that a metric is a number that is calculated based on the predictions of our model, and the correct labels in our dataset, in order to tell us how good our model is. For instance, we could use either of the functions we saw in the previous section, mean squared error, or mean absolute error, and take the average of them over the whole dataset. However, neither of these are numbers that are very understandable to most people; in practice, we normally use accuracy as the metric for classification models.
+
As we’ve discussed, we want to calculate our metric over a validation set. This is so that we don’t inadvertently overfit—that is, train a model to work well only on our training data. This is not really a risk with the pixel similarity model we’re using here as a first try, since it has no trained components, but we’ll use a validation set anyway to follow normal practices and to be ready for our second try later.
+
To get a validation set we need to remove some of the data from training entirely, so it is not seen by the model at all. As it turns out, the creators of the MNIST dataset have already done this for us. Do you remember how there was a whole separate directory called valid? That’s what this directory is for!
+
So to start with, let’s create tensors for our 3s and 7s from that directory. These are the tensors we will use to calculate a metric measuring the quality of our first-try model, which measures distance from an ideal image:
+
+
valid_3_tens = torch.stack([tensor(Image.open(o))
                            for o in (path/'valid'/'3').ls()])
valid_3_tens = valid_3_tens.float()/255
valid_7_tens = torch.stack([tensor(Image.open(o))
                            for o in (path/'valid'/'7').ls()])
valid_7_tens = valid_7_tens.float()/255
valid_3_tens.shape,valid_7_tens.shape
It’s good to get in the habit of checking shapes as you go. Here we see two tensors, one representing the 3s validation set of 1,010 images of size 28×28, and one representing the 7s validation set of 1,028 images of size 28×28.
+
We ultimately want to write a function, is_3, that will decide if an arbitrary image is a 3 or a 7. It will do this by deciding which of our two “ideal digits” this arbitrary image is closer to. For that we need to define a notion of distance—that is, a function that calculates the distance between two images.
+
We can write a simple function that calculates the mean absolute error using an expression very similar to the one we wrote in the last section:
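+
One plausible version, using the same absolute-difference-then-mean idea; we'll call it mnist_distance, since that is the name used in the discussion below:
+
def mnist_distance(a, b): return (a-b).abs().mean((-1,-2))
mnist_distance(a_3, mean3)
+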
This is the same value we previously calculated for the distance between these two images, the ideal 3 mean3 and the arbitrary sample 3 a_3, which are both single-image tensors with a shape of [28,28].
+
But in order to calculate a metric for overall accuracy, we will need to calculate the distance to the ideal 3 for every image in the validation set. How do we do that calculation? We could write a loop over all of the single-image tensors that are stacked within our validation set tensor, valid_3_tens, which has a shape of [1010,28,28] representing 1,010 images. But there is a better way.
+
Something very interesting happens when we take this exact same distance function, designed for comparing two single images, but pass in as an argument valid_3_tens, the tensor that represents the 3s validation set:
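+
For instance:
+
valid_3_dist = mnist_distance(valid_3_tens, mean3)
valid_3_dist, valid_3_dist.shape
+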
Instead of complaining about shapes not matching, it returned the distance for every single image as a vector (i.e., a rank-1 tensor) of length 1,010 (the number of 3s in our validation set). How did that happen?
+
Take another look at our function mnist_distance, and you’ll see we have there the subtraction (a-b). The magic trick is that PyTorch, when it tries to perform a simple subtraction operation between two tensors of different ranks, will use broadcasting. That is, it will automatically expand the tensor with the smaller rank to have the same size as the one with the larger rank. Broadcasting is an important capability that makes tensor code much easier to write.
+
After broadcasting so the two argument tensors have the same rank, PyTorch applies its usual logic for two tensors of the same rank: it performs the operation on each corresponding element of the two tensors, and returns the tensor result. For instance:
+
+
tensor([1,2,3]) + tensor(1)
+
+
tensor([2, 3, 4])
+
+
+
So in this case, PyTorch treats mean3, a rank-2 tensor representing a single image, as if it were 1,010 copies of the same image, and then subtracts each of those copies from each 3 in our validation set. What shape would you expect this tensor to have? Try to figure it out yourself before you look at the answer below:
+
+
(valid_3_tens-mean3).shape
+
+
torch.Size([1010, 28, 28])
+
+
+
We are calculating the difference between our “ideal 3” and each of the 1,010 3s in the validation set, across each of the 28×28 pixel positions, resulting in the shape [1010,28,28].
+
There are a couple of important points about how broadcasting is implemented, which make it valuable not just for expressivity but also for performance:
+
+
PyTorch doesn’t actually copy mean3 1,010 times. It pretends it is a tensor of that shape, but doesn’t allocate any additional memory.
+
It does the whole calculation in C (or, if you’re using a GPU, in CUDA, the equivalent of C on the GPU), tens of thousands of times faster than pure Python (up to millions of times faster on a GPU!).
+
+
This is true of all broadcasting and elementwise operations and functions done in PyTorch. It’s the most important technique for you to know to create efficient PyTorch code.
+
Next in mnist_distance we see abs. You might be able to guess now what this does when applied to a tensor. It applies the method to each individual element in the tensor, and returns a tensor of the results (that is, it applies the method “elementwise”). So in this case, we’ll get back 1,010 matrices of absolute values.
+
Finally, our function calls mean((-1,-2)). The tuple (-1,-2) represents a range of axes. In Python, -1 refers to the last element, and -2 refers to the second-to-last. So in this case, this tells PyTorch that we want to take the mean ranging over the values indexed by the last two axes of the tensor. The last two axes are the horizontal and vertical dimensions of an image. After taking the mean over the last two axes, we are left with just the first tensor axis, which indexes over our images, which is why our final size was (1010). In other words, for every image, we averaged the intensity of all the pixels in that image.
+
We’ll be learning lots more about broadcasting throughout this book, especially in Chapter 17, and will be practicing it regularly too.
+
We can use mnist_distance to figure out whether an image is a 3 or not by using the following logic: if the distance between the digit in question and the ideal 3 is less than the distance to the ideal 7, then it’s a 3. This function will automatically do broadcasting and be applied elementwise, just like all PyTorch functions and operators:
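+
A sketch of that logic (using the mean7 tensor from earlier):
+
def is_3(x): return mnist_distance(x, mean3) < mnist_distance(x, mean7)
is_3(a_3), is_3(a_3).float()
+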
Note that when we convert the Boolean response to a float, we get 1.0 for True and 0.0 for False. Thanks to broadcasting, we can also test it on the full validation set of 3s:
+
+
is_3(valid_3_tens)
+
+
tensor([True, True, True, ..., True, True, True])
+
+
+
Now we can calculate the accuracy for each of the 3s and 7s by taking the average of that function for all 3s and its inverse for all 7s:
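+
For example:
+
accuracy_3s = is_3(valid_3_tens).float().mean()
accuracy_7s = (1 - is_3(valid_7_tens).float()).mean()
accuracy_3s, accuracy_7s, (accuracy_3s+accuracy_7s)/2
+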
This looks like a pretty good start! We’re getting over 90% accuracy on both 3s and 7s, and we’ve seen how to define a metric conveniently using broadcasting.
+
But let’s be honest: 3s and 7s are very different-looking digits. And we’re only classifying 2 out of the 10 possible digits so far. So we’re going to need to do better!
+
To do better, perhaps it is time to try a system that does some real learning—that is, that can automatically modify itself to improve its performance. In other words, it’s time to talk about the training process, and SGD.
+
+
+
4.6 Stochastic Gradient Descent (SGD)
+
Do you remember the way that Arthur Samuel described machine learning, which we quoted in Chapter 1?
+
+
Suppose we arrange for some automatic means of testing the effectiveness of any current weight assignment in terms of actual performance and provide a mechanism for altering the weight assignment so as to maximize the performance. We need not go into the details of such a procedure to see that it could be made entirely automatic and to see that a machine so programmed would “learn” from its experience.
+
+
As we discussed, this is the key to allowing us to have a model that can get better and better—that can learn. But our pixel similarity approach does not really do this. We do not have any kind of weight assignment, or any way of improving based on testing the effectiveness of a weight assignment. In other words, we can’t really improve our pixel similarity approach by modifying a set of parameters. In order to take advantage of the power of deep learning, we will first have to represent our task in the way that Arthur Samuel described it.
+
Instead of trying to find the similarity between an image and an “ideal image,” we could instead look at each individual pixel and come up with a set of weights for each one, such that the highest weights are associated with those pixels most likely to be black for a particular category. For instance, pixels toward the bottom right are not very likely to be activated for a 7, so they should have a low weight for a 7, but they are likely to be activated for an 8, so they should have a high weight for an 8. This can be represented as a function and set of weight values for each possible category—for instance the probability of being the number 8:
+
def pr_eight(x,w): return (x*w).sum()
+
Here we are assuming that x is the image, represented as a vector—in other words, with all of the rows stacked up end to end into a single long line. And we are assuming that the weights are a vector w. If we have this function, then we just need some way to update the weights to make them a little bit better. With such an approach, we can repeat that step a number of times, making the weights better and better, until they are as good as we can make them.
+
We want to find the specific values for the vector w that causes the result of our function to be high for those images that are actually 8s, and low for those images that are not. Searching for the best vector w is a way to search for the best function for recognising 8s. (Because we are not yet using a deep neural network, we are limited by what our function can actually do—we are going to fix that constraint later in this chapter.)
+
To be more specific, here are the steps that we are going to require, to turn this function into a machine learning classifier:
+
+
Initialize the weights.
+
For each image, use these weights to predict whether it appears to be a 3 or a 7.
+
Based on these predictions, calculate how good the model is (its loss).
+
Calculate the gradient, which measures for each weight how changing that weight would change the loss.
+
Step (that is, change) all the weights based on that calculation.
+
Go back to step 2, and repeat the process.
+
Iterate until you decide to stop the training process (for instance, because the model is good enough or you don’t want to wait any longer).
+
+
These seven steps, illustrated in Figure 4.2, are the key to the training of all deep learning models. That deep learning turns out to rely entirely on these steps is extremely surprising and counterintuitive. It’s amazing that this process can solve such complex problems. But, as you’ll see, it really does!
+
+
+
+
+
+
+
+
There are many different ways to do each of these seven steps, and we will be learning about them throughout the rest of this book. These are the details that make a big difference for deep learning practitioners, but it turns out that the general approach to each one generally follows some basic principles. Here are a few guidelines:
+
+
Initialize:: We initialize the parameters to random values. This may sound surprising. There are certainly other choices we could make, such as initializing them to the percentage of times that pixel is activated for that category—but since we already know that we have a routine to improve these weights, it turns out that just starting with random weights works perfectly well.
+
Loss:: This is what Samuel referred to when he spoke of testing the effectiveness of any current weight assignment in terms of actual performance. We need some function that will return a number that is small if the performance of the model is good (the standard approach is to treat a small loss as good, and a large loss as bad, although this is just a convention).
+
Step:: A simple way to figure out whether a weight should be increased a bit, or decreased a bit, would be just to try it: increase the weight by a small amount, and see if the loss goes up or down. Once you find the correct direction, you could then change that amount by a bit more, and a bit less, until you find an amount that works well. However, this is slow! As we will see, the magic of calculus allows us to directly figure out in which direction, and by roughly how much, to change each weight, without having to try all these small changes. The way to do this is by calculating gradients. This is just a performance optimization; we would get exactly the same results by using the slower manual process.
+
Stop:: Once we’ve decided how many epochs to train the model for (a few suggestions for this were given in the earlier list), we apply that decision. For our digit classifier, we would keep training until the accuracy of the model started getting worse, or we ran out of time.
+
+
Before applying these steps to our image classification problem, let’s illustrate what they look like in a simpler case. First we will define a very simple function, the quadratic—let’s pretend that this is our loss function, and x is a weight parameter of the function:
+
+
def f(x): return x**2
+
+
Here is a graph of that function:
+
+
plot_function(f, 'x', 'x**2')
+
+
+
+
+
+
+
The sequence of steps we described earlier starts by picking some random value for a parameter, and calculating the value of the loss:
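+
For example, picking -1.5 as our arbitrary starting value (the specific number is just for illustration):
+
x = tensor(-1.5)
f(x)
+
tensor(2.2500)
+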
Now we look to see what would happen if we increased or decreased our parameter by a little bit—the adjustment. This is simply the slope at a particular point:
+
+
We can change our weight by a little in the direction of the slope, calculate our loss and adjustment again, and repeat this a few times. Eventually, we will get to the lowest point on our curve:
+
+
This basic idea goes all the way back to Isaac Newton, who pointed out that we can optimize arbitrary functions in this way. Regardless of how complicated our functions become, this basic approach of gradient descent will not significantly change. The only minor changes we will see later in this book are some handy ways we can make it faster, by finding better steps.
+
+
Calculating Gradients
+
The one magic step is the bit where we calculate the gradients. As we mentioned, we use calculus as a performance optimization; it allows us to more quickly calculate whether our loss will go up or down when we adjust our parameters up or down. In other words, the gradients will tell us how much we have to change each weight to make our model better.
+
You may remember from your high school calculus class that the derivative of a function tells you how much a change in its parameters will change its result. If not, don’t worry: lots of us forget calculus once high school is behind us! But you will have to have some intuitive understanding of what a derivative is before you continue, so if this is all very fuzzy in your head, head over to Khan Academy and complete the lessons on basic derivatives. You won’t have to know how to calculate them yourself; you just have to know what a derivative is.
+
The key point about a derivative is this: for any function, such as the quadratic function we saw in the previous section, we can calculate its derivative. The derivative is another function. It calculates the change, rather than the value. For instance, the derivative of the quadratic function at the value 3 tells us how rapidly the function changes at the value 3. More specifically, you may recall that gradient is defined as rise/run, that is, the change in the value of the function, divided by the change in the value of the parameter. When we know how our function will change, then we know what we need to do to make it smaller. This is the key to machine learning: having a way to change the parameters of a function to make it smaller. Calculus provides us with a computational shortcut, the derivative, which lets us directly calculate the gradients of our functions.
+
One important thing to be aware of is that our function has lots of weights that we need to adjust, so when we calculate the derivative we won’t get back one number, but lots of them—a gradient for every weight. But there is nothing mathematically tricky here; you can calculate the derivative with respect to one weight, and treat all the other ones as constant, then repeat that for each other weight. This is how all of the gradients are calculated, for every weight.
+
We mentioned just now that you won’t have to calculate any gradients yourself. How can that be? Amazingly enough, PyTorch is able to automatically compute the derivative of nearly any function! What’s more, it does it very fast. Most of the time, it will be at least as fast as any derivative function that you can create by hand. Let’s see an example.
+
First, let’s pick a tensor value which we want gradients at:
+
+
xt = tensor(3.).requires_grad_()
+
+
Notice the special method requires_grad_? That’s the magical incantation we use to tell PyTorch that we want to calculate gradients with respect to that variable at that value. It is essentially tagging the variable, so PyTorch will remember to keep track of how to compute gradients of the other, direct calculations on it that you will ask for.
+
+
a: This API might throw you off if you’re coming from math or physics. In those contexts the “gradient” of a function is just another function (i.e., its derivative), so you might expect gradient-related APIs to give you a new function. But in deep learning, “gradients” usually means the value of a function’s derivative at a particular argument value. The PyTorch API also puts the focus on the argument, not the function you’re actually computing the gradients of. It may feel backwards at first, but it’s just a different perspective.
+
+
Now we calculate our function with that value. Notice how PyTorch prints not just the value calculated, but also a note that it has a gradient function it’ll be using to calculate our gradients when needed:
+
+
yt = f(xt)
yt
+
+
tensor(9., grad_fn=<PowBackward0>)
+
+
+
Finally, we tell PyTorch to calculate the gradients for us:
+
+
yt.backward()
+
+
The “backward” here refers to backpropagation, which is the name given to the process of calculating the derivative of each layer. We’ll see how this is done exactly in Chapter 17, when we calculate the gradients of a deep neural net from scratch. This is called the “backward pass” of the network, as opposed to the “forward pass,” which is where the activations are calculated. Life would probably be easier if backward was just called calculate_grad, but deep learning folks really do like to add jargon everywhere they can!
+
We can now view the gradients by checking the grad attribute of our tensor:
+
+
xt.grad
+
+
tensor(6.)
+
+
+
If you remember your high school calculus rules, the derivative of x**2 is 2*x, and we have x=3, so the gradients should be 2*3=6, which is what PyTorch calculated for us!
+
Now we’ll repeat the preceding steps, but with a vector argument for our function:
+
+
xt = tensor([3.,4.,10.]).requires_grad_()
xt
+
+
tensor([ 3., 4., 10.], requires_grad=True)
+
+
+
And we’ll add sum to our function so it can take a vector (i.e., a rank-1 tensor), and return a scalar (i.e., a rank-0 tensor):
+
+
def f(x): return (x**2).sum()
+
yt = f(xt)
yt
+
+
tensor(125., grad_fn=<SumBackward0>)
+
+
+
Our gradients are 2*xt, as we’d expect!
+
+
yt.backward()
xt.grad
+
+
tensor([ 6., 8., 20.])
+
+
+
The gradients tell us only the slope of our function; they don’t actually tell us exactly how far to adjust the parameters. But they do give us some idea of how far: if the slope is very large, that may suggest that we have more adjustments to do, whereas if the slope is very small, that may suggest that we are close to the optimal value.
+
+
+
Stepping With a Learning Rate
+
Deciding how to change our parameters based on the values of the gradients is an important part of the deep learning process. Nearly all approaches start with the basic idea of multiplying the gradient by some small number, called the learning rate (LR). The learning rate is often a number between 0.001 and 0.1, although it could be anything. Often, people select a learning rate just by trying a few, and finding which results in the best model after training (we’ll show you a better approach later in this book, called the learning rate finder). Once you’ve picked a learning rate, you can adjust your parameters using this simple function:
+
w -= gradient(w) * lr
+
This is known as stepping your parameters, using an optimizer step. Notice how we subtract the gradient * lr from the parameter to update it. This allows us to adjust the parameter in the direction of the slope by increasing the parameter when the slope is negative and decreasing the parameter when the slope is positive. We want to adjust our parameters in the direction of the slope because our goal in deep learning is to minimize the loss.
+
If you pick a learning rate that’s too low, it can mean having to do a lot of steps. Figure 4.3 illustrates that.
+
+
+
+
But picking a learning rate that’s too high is even worse—it can actually result in the loss getting worse, as we see in Figure 4.4!
+
+
+
+
If the learning rate is too high, it may also “bounce” around, rather than actually diverging; Figure 4.5 shows how this has the result of taking many steps to train successfully.
+
+
+
+
Now let’s apply all of this in an end-to-end example.
+
+
+
An End-to-End SGD Example
+
We’ve seen how to use gradients to find a minimum. Now it’s time to look at an SGD example and see how finding a minimum can be used to train a model to fit data better.
+
Let’s start with a simple, synthetic example model. Imagine you were measuring the speed of a roller coaster as it went over the top of a hump. It would start fast, and then get slower as it went up the hill; it would be slowest at the top, and it would then speed up again as it went downhill. You want to build a model of how the speed changes over time. If you were measuring the speed manually every second for 20 seconds, it might look something like this:
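+
Here is one way to synthesize such measurements (the exact quadratic and the amount of noise are arbitrary choices for illustration):
+
time = torch.arange(0, 20).float()
speed = torch.randn(20)*3 + 0.75*(time-9.5)**2 + 1
plt.scatter(time, speed);
+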
We’ve added a bit of random noise, since measuring things manually isn’t precise. This means it’s not that easy to answer the question: what was the roller coaster’s speed? Using SGD we can try to find a function that matches our observations. We can’t consider every possible function, so let’s use a guess that it will be quadratic; i.e., a function of the form a*(time**2)+(b*time)+c.
+
We want to distinguish clearly between the function’s input (the time when we are measuring the coaster’s speed) and its parameters (the values that define which quadratic we’re trying). So, let’s collect the parameters in one argument and thus separate the input, t, and the parameters, params, in the function’s signature:
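+
That function might look like this:
+
def f(t, params):
    a, b, c = params
    return a*(t**2) + (b*t) + c
+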
In other words, we’ve restricted the problem of finding the best imaginable function that fits the data, to finding the best quadratic function. This greatly simplifies the problem, since every quadratic function is fully defined by the three parameters a, b, and c. Thus, to find the best quadratic function, we only need to find the best values for a, b, and c.
+
If we can solve this problem for the three parameters of a quadratic function, we’ll be able to apply the same approach for other, more complex functions with more parameters—such as a neural net. Let’s find the parameters for f first, and then we’ll come back and do the same thing for the MNIST dataset with a neural net.
+
We need to define first what we mean by “best.” We define this precisely by choosing a loss function, which will return a value based on a prediction and a target, where lower values of the function correspond to “better” predictions. It is important for loss functions to return lower values when predictions are more accurate, as the SGD procedure we defined earlier will try to minimize this loss. For continuous data, it’s common to use mean squared error:
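+
A minimal definition, plus a sketch of the first two steps of our recipe (initializing random parameters and calculating predictions); show_preds is a small plotting helper we'll reuse below, not a library function:
+
def mse(preds, targets): return ((preds-targets)**2).mean()
+
# Step 1: initialize the three parameters to random values, tracking gradients
params = torch.randn(3).requires_grad_()
+
# Step 2: calculate predictions from those random parameters
preds = f(time, params)
+
# plot the predictions (in red) against the measurements
def show_preds(preds, ax=None):
    if ax is None: ax = plt.subplots()[1]
    ax.scatter(time, speed)
    ax.scatter(time, preds.detach().numpy(), color='red')
+
show_preds(preds)
+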
This doesn’t look very close—our random parameters suggest that the roller coaster will end up going backwards, since we have negative speeds!
+
+
+
Step 3: Calculate the loss
+
We calculate the loss as follows:
+
+
loss = mse(preds, speed)
loss
+
+
tensor(25823.8086, grad_fn=<MeanBackward0>)
+
+
+
Our goal is now to improve this. To do that, we’ll need to know the gradients.
+
+
+
Step 4: Calculate the gradients
+
The next step is to calculate the gradients. In other words, calculate an approximation of how the parameters need to change:
+
+
loss.backward()
params.grad
+
+
tensor([-53195.8633, -3419.7148, -253.8908])
+
+
+
+
params.grad * 1e-5
+
+
tensor([-0.5320, -0.0342, -0.0025])
+
+
+
We can use these gradients to improve our parameters. We’ll need to pick a learning rate (we’ll discuss how to do that in practice in the next chapter; for now we’ll just use 1e-5, or 0.00001):
Now we need to update the parameters based on the gradients we just calculated:
+
+
lr = 1e-5
params.data -= lr * params.grad.data
params.grad = None
+
+
+
a: Understanding this bit depends on remembering recent history. To calculate the gradients we call backward on the loss. But this loss was itself calculated by mse, which in turn took preds as an input, which was calculated using f taking as an input params, which was the object on which we originally called requires_grad_—which is the original call that now allows us to call backward on loss. This chain of function calls represents the mathematical composition of functions, which enables PyTorch to use calculus’s chain rule under the hood to calculate these gradients.
+
+
Let’s see if the loss has improved:
+
+
preds = f(time,params)
mse(preds, speed)
+
+
tensor(5435.5356, grad_fn=<MeanBackward0>)
+
+
+
And take a look at the plot:
+
+
show_preds(preds)
+
+
+
+
+
+
+
We need to repeat this a few times, so we’ll create a function to apply one step:
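+
Such a function just repackages the steps we ran above (the printing flag is a convenience so the same function can be reused for plotting):
+
def apply_step(params, prn=True):
    preds = f(time, params)
    loss = mse(preds, speed)
    loss.backward()
    params.data -= lr * params.grad.data
    params.grad = None
    if prn: print(loss.item())
    return preds
+
for i in range(10): apply_step(params)
+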
The loss is going down, just as we hoped! But looking only at these loss numbers disguises the fact that each iteration represents an entirely different quadratic function being tried, on the way to finding the best possible quadratic function. We can see this process visually if, instead of printing out the loss function, we plot the function at every step. Then we can see how the shape is approaching the best possible quadratic function for our data:
We just decided to stop after 10 epochs arbitrarily. In practice, we would watch the training and validation losses and our metrics to decide when to stop, as we’ve discussed.
+
+
+
+
Summarizing Gradient Descent
+
+
+
+
+
+
+
+
To summarize, at the beginning, the weights of our model can be random (training from scratch) or come from a pretrained model (transfer learning). In the first case, the output we will get from our inputs won’t have anything to do with what we want, and even in the second case, it’s very likely the pretrained model won’t be very good at the specific task we are targeting. So the model will need to learn better weights.
+
We begin by comparing the outputs the model gives us with our targets (we have labeled data, so we know what result the model should give) using a loss function, which returns a number that we want to make as low as possible by improving our weights. To do this, we take a few data items (such as images) from the training set and feed them to our model. We compare the corresponding targets using our loss function, and the score we get tells us how wrong our predictions were. We then change the weights a little bit to make it slightly better.
+
To find how to change the weights to make the loss a bit better, we use calculus to calculate the gradients. (Actually, we let PyTorch do it for us!) Let’s consider an analogy. Imagine you are lost in the mountains with your car parked at the lowest point. To find your way back to it, you might wander in a random direction, but that probably wouldn’t help much. Since you know your vehicle is at the lowest point, you would be better off going downhill. By always taking a step in the direction of the steepest downward slope, you should eventually arrive at your destination. We use the magnitude of the gradient (i.e., the steepness of the slope) to tell us how big a step to take; specifically, we multiply the gradient by a number we choose called the learning rate to decide on the step size. We then iterate until we have reached the lowest point, which will be our parking lot, then we can stop.
+
All of that we just saw can be transposed directly to the MNIST dataset, except for the loss function. Let’s now see how we can define a good training objective.
+
+
+
+
4.7 The MNIST Loss Function
+
We already have our independent variables x—these are the images themselves. We’ll concatenate them all into a single tensor, and also change them from a list of matrices (a rank-3 tensor) to a list of vectors (a rank-2 tensor). We can do this using view, which is a PyTorch method that changes the shape of a tensor without changing its contents. -1 is a special parameter to view that means “make this axis as big as necessary to fit all the data”:
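+
For instance (assuming the stacked training tensors from earlier in the chapter are named stacked_threes and stacked_sevens):
+
train_x = torch.cat([stacked_threes, stacked_sevens]).view(-1, 28*28)
train_x.shape
+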
A Dataset in PyTorch is required to return a tuple of (x,y) when indexed. Python provides a zip function which, when combined with list, provides a simple way to get this functionality:
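+
A sketch of that, also creating the labels (1 for a 3, 0 for a 7), the matching validation Dataset, and randomly initialized weights via a small helper we'll call init_params:
+
train_y = tensor([1]*len(stacked_threes) + [0]*len(stacked_sevens)).unsqueeze(1)
dset = list(zip(train_x, train_y))
+
valid_x = torch.cat([valid_3_tens, valid_7_tens]).view(-1, 28*28)
valid_y = tensor([1]*len(valid_3_tens) + [0]*len(valid_7_tens)).unsqueeze(1)
valid_dset = list(zip(valid_x, valid_y))
+
# one random parameter per pixel, tracking gradients so we can use SGD
def init_params(size, std=1.0): return (torch.randn(size)*std).requires_grad_()
weights = init_params((28*28, 1))
+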
The function weights*pixels won’t be flexible enough—it is always equal to 0 when the pixels are equal to 0 (i.e., its intercept is 0). You might remember from high school math that the formula for a line is y=w*x+b; we still need the b. We’ll initialize it to a random number too:
+
+
bias = init_params(1)
+
+
In neural networks, the w in the equation y=w*x+b is called the weights, and the b is called the bias. Together, the weights and bias make up the parameters.
+
+
jargon: Parameters: The weights and biases of a model. The weights are the w in the equation w*x+b, and the biases are the b in that equation.
+
+
We can now calculate a prediction for one image:
+
+
(train_x[0]*weights.T).sum() + bias
+
+
tensor([20.2336], grad_fn=<AddBackward0>)
+
+
+
While we could use a Python for loop to calculate the prediction for each image, that would be very slow. Because Python loops don’t run on the GPU, and because Python is a slow language for loops in general, we need to represent as much of the computation in a model as possible using higher-level functions.
+
In this case, there’s an extremely convenient mathematical operation that calculates w*x for every row of a matrix—it’s called matrix multiplication. Figure 4.7 shows what matrix multiplication looks like.
+
+
+
+
This image shows two matrices, A and B, being multiplied together. Each item of the result, which we’ll call AB, contains each item of its corresponding row of A multiplied by each item of its corresponding column of B, added together. For instance, row 1, column 2 (the yellow dot with a red border) is calculated as \(a_{1,1} * b_{1,2} + a_{1,2} * b_{2,2}\). If you need a refresher on matrix multiplication, we suggest you take a look at the Intro to Matrix Multiplication on Khan Academy, since this is the most important mathematical operation in deep learning.
+
In Python, matrix multiplication is represented with the @ operator. Let’s try it:
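+
A sketch, wrapping the calculation in a function we'll call linear1 (the name referred to later in this chapter):
+
def linear1(xb): return xb@weights + bias
preds = linear1(train_x)
preds
+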
The first element is the same as we calculated before, as we’d expect. This equation, batch@weights + bias, is one of the two fundamental equations of any neural network (the other one is the activation function, which we’ll see in a moment).
+
Let’s check our accuracy. To decide if an output represents a 3 or a 7, we can just check whether it’s greater than 0.0, so our accuracy for each item can be calculated (using broadcasting, so no loops!) with:
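+
For example:
+
corrects = (preds > 0.0).float() == train_y
corrects.float().mean().item()
+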
Now let’s see what the change in accuracy is for a small change in one of the weights (note that we have to ask PyTorch not to calculate gradients as we do this, which is what with torch.no_grad() is doing here):
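+
Something like this (the size of the tweak is arbitrary):
+
with torch.no_grad(): weights[0] *= 1.0001
preds = linear1(train_x)
((preds > 0.0).float() == train_y).float().mean().item()
+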
As we’ve seen, we need gradients in order to improve our model using SGD, and in order to calculate gradients we need some loss function that represents how good our model is. That is because the gradients are a measure of how that loss function changes with small tweaks to the weights.
+
So, we need to choose a loss function. The obvious approach would be to use accuracy, which is our metric, as our loss function as well. In this case, we would calculate our prediction for each image, collect these values to calculate an overall accuracy, and then calculate the gradients of each weight with respect to that overall accuracy.
+
Unfortunately, we have a significant technical problem here. The gradient of a function is its slope, or its steepness, which can be defined as rise over run—that is, how much the value of the function goes up or down, divided by how much we changed the input. We can write this mathematically as: (y_new - y_old) / (x_new - x_old). This gives us a good approximation of the gradient when x_new is very similar to x_old, meaning that their difference is very small. But accuracy only changes at all when a prediction changes from a 3 to a 7, or vice versa. The problem is that a small change in weights from x_old to x_new isn’t likely to cause any prediction to change, so (y_new - y_old) will almost always be 0. In other words, the gradient is 0 almost everywhere.
+
A very small change in the value of a weight will often not actually change the accuracy at all. This means it is not useful to use accuracy as a loss function—if we do, most of the time our gradients will actually be 0, and the model will not be able to learn from that number.
+
+
S: In mathematical terms, accuracy is a function that is constant almost everywhere (except at the threshold, 0.5), so its derivative is nil almost everywhere (and infinity at the threshold). This then gives gradients that are 0 or infinite, which are useless for updating the model.
+
+
Instead, we need a loss function which, when our weights result in slightly better predictions, gives us a slightly better loss. So what does a “slightly better prediction” look like, exactly? Well, in this case, it means that if the correct answer is a 3 the score is a little higher, or if the correct answer is a 7 the score is a little lower.
+
Let’s write such a function now. What form does it take?
+
The loss function receives not the images themselves, but the predictions from the model. Let’s make one argument, prds, of values between 0 and 1, where each value is the prediction that an image is a 3. It is a vector (i.e., a rank-1 tensor), indexed over the images.
+
The purpose of the loss function is to measure the difference between predicted values and the true values — that is, the targets (aka labels). Let’s make another argument, trgts, with values of 0 or 1 which tells whether an image actually is a 3 or not. It is also a vector (i.e., another rank-1 tensor), indexed over the images.
+
So, for instance, suppose we had three images which we knew were a 3, a 7, and a 3. And suppose our model predicted with high confidence (0.9) that the first was a 3, with slight confidence (0.4) that the second was a 7, and with fair confidence (0.2), but incorrectly, that the last was a 7. This would mean our loss function would receive these values as its inputs:
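+
In code, along with a first attempt at a loss function built on torch.where (we'll call it mnist_loss, as in the rest of this section):
+
trgts = tensor([1,0,1])
prds  = tensor([0.9, 0.4, 0.2])
+
def mnist_loss(predictions, targets):
    return torch.where(targets==1, 1-predictions, predictions).mean()
+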
We’re using a new function, torch.where(a,b,c). This is the same as running the list comprehension [b[i] if a[i] else c[i] for i in range(len(a))], except it works on tensors, at C/CUDA speed. In plain English, this function will measure how distant each prediction is from 1 if it should be 1, and how distant it is from 0 if it should be 0, and then it will take the mean of all those distances.
+
+
note: Read the Docs: It’s important to learn about PyTorch functions like this, because looping over tensors in Python performs at Python speed, not C/CUDA speed! Try running help(torch.where) now to read the docs for this function, or, better still, look it up on the PyTorch documentation site.
+
+
Let’s try it on our prds and trgts:
+
+
torch.where(trgts==1, 1-prds, prds)
+
+
tensor([0.1000, 0.4000, 0.8000])
+
+
+
You can see that this function returns a lower number when predictions are more accurate, when accurate predictions are more confident (higher absolute values), and when inaccurate predictions are less confident. In PyTorch, we always assume that a lower value of a loss function is better. Since we need a scalar for the final loss, mnist_loss takes the mean of the previous tensor:
+
+
mnist_loss(prds,trgts)
+
+
tensor(0.4333)
+
+
+
For instance, if we change our prediction for the one “false” target from 0.2 to 0.8 the loss will go down, indicating that this is a better prediction:
+
+
mnist_loss(tensor([0.9, 0.4, 0.8]),trgts)
+
+
tensor(0.2333)
+
+
+
One problem with mnist_loss as currently defined is that it assumes that predictions are always between 0 and 1. We need to ensure, then, that this is actually the case! As it happens, there is a function that does exactly that—let’s take a look.
+
+
Sigmoid
+
The sigmoid function always outputs a number between 0 and 1. It’s defined as follows:
+
+
def sigmoid(x): return 1/(1+torch.exp(-x))
+
+
PyTorch defines an accelerated version for us, so we don’t really need our own. This is an important function in deep learning, since we often want to ensure values are between 0 and 1. This is what it looks like:
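+
For instance, using the same plotting helper as before:
+
plot_function(torch.sigmoid, 'x', 'sigmoid(x)')
+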
As you can see, it takes any input value, positive or negative, and smooshes it onto an output value between 0 and 1. It’s also a smooth curve that only goes up, which makes it easier for SGD to find meaningful gradients.
+
Let’s update mnist_loss to first apply sigmoid to the inputs:
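+
That might look like this:
+
def mnist_loss(predictions, targets):
    predictions = predictions.sigmoid()
    return torch.where(targets==1, 1-predictions, predictions).mean()
+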
Now we can be confident our loss function will work, even if the predictions are not between 0 and 1. All that is required is that a higher prediction corresponds to higher confidence an image is a 3.
+
Having defined a loss function, now is a good moment to recapitulate why we did this. After all, we already had a metric, which was overall accuracy. So why did we define a loss?
+
The key difference is that the metric is to drive human understanding and the loss is to drive automated learning. To drive automated learning, the loss must be a function that has a meaningful derivative. It can’t have big flat sections and large jumps, but instead must be reasonably smooth. This is why we designed a loss function that would respond to small changes in confidence level. This requirement means that sometimes it does not really reflect exactly what we are trying to achieve, but is rather a compromise between our real goal and a function that can be optimized using its gradient. The loss function is calculated for each item in our dataset, and then at the end of an epoch the loss values are all averaged and the overall mean is reported for the epoch.
+
Metrics, on the other hand, are the numbers that we really care about. These are the values that are printed at the end of each epoch that tell us how our model is really doing. It is important that we learn to focus on these metrics, rather than the loss, when judging the performance of a model.
+
+
+
SGD and Mini-Batches
+
Now that we have a loss function that is suitable for driving SGD, we can consider some of the details involved in the next phase of the learning process, which is to change or update the weights based on the gradients. This is called an optimization step.
+
In order to take an optimization step we need to calculate the loss over one or more data items. How many should we use? We could calculate it for the whole dataset, and take the average, or we could calculate it for a single data item. But neither of these is ideal. Calculating it for the whole dataset would take a very long time. Calculating it for a single item would not use much information, so it would result in a very imprecise and unstable gradient. That is, you’d be going to the trouble of updating the weights, but taking into account only how that would improve the model’s performance on that single item.
+
So instead we take a compromise between the two: we calculate the average loss for a few data items at a time. This is called a mini-batch. The number of data items in the mini-batch is called the batch size. A larger batch size means that you will get a more accurate and stable estimate of your dataset’s gradients from the loss function, but it will take longer, and you will process fewer mini-batches per epoch. Choosing a good batch size is one of the decisions you need to make as a deep learning practitioner to train your model quickly and accurately. We will talk about how to make this choice throughout this book.
+
Another good reason for using mini-batches rather than calculating the gradient on individual data items is that, in practice, we nearly always do our training on an accelerator such as a GPU. These accelerators only perform well if they have lots of work to do at a time, so it’s helpful if we can give them lots of data items to work on. Using mini-batches is one of the best ways to do this. However, if you give them too much data to work on at once, they run out of memory—making GPUs happy is also tricky!
+
As we saw in our discussion of data augmentation in Chapter 2, we get better generalization if we can vary things during training. One simple and effective thing we can vary is what data items we put in each mini-batch. Rather than simply enumerating our dataset in order for every epoch, instead what we normally do is randomly shuffle it on every epoch, before we create mini-batches. PyTorch and fastai provide a class that will do the shuffling and mini-batch collation for you, called DataLoader.
+
A DataLoader can take any Python collection and turn it into an iterator over mini-batches, like so:
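+
For instance, with a simple range (the batch size of 5 is arbitrary):
+
coll = range(15)
dl = DataLoader(coll, batch_size=5, shuffle=True)
list(dl)
+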
For training a model, we don’t just want any Python collection, but a collection containing independent and dependent variables (that is, the inputs and targets of the model). A collection that contains tuples of independent and dependent variables is known in PyTorch as a Dataset. Here’s an example of an extremely simple Dataset:
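+
For instance, a toy Dataset pairing a few numbers with letters (purely illustrative):
+
ds = list(zip(range(5), 'abcde'))
ds
+
[(0, 'a'), (1, 'b'), (2, 'c'), (3, 'd'), (4, 'e')]
+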
When we pass a Dataset to a DataLoader we will get back mini-batches which are themselves tuples of tensors representing batches of independent and dependent variables:
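+
For our MNIST data, that might look like the following. We also sketch a calc_grad helper here, since it is the function used by train_epoch below, and call it twice on the same mini-batch to set up the observation that follows (the batch size of 256 is an arbitrary choice):
+
dl = DataLoader(dset, batch_size=256, shuffle=True)
xb, yb = first(dl)
xb.shape, yb.shape
+
valid_dl = DataLoader(valid_dset, batch_size=256)
+
# calculate predictions, then the loss, then the gradients for one mini-batch
def calc_grad(xb, yb, model):
    preds = model(xb)
    loss = mnist_loss(preds, yb)
    loss.backward()
+
calc_grad(xb, yb, linear1)
weights.grad.mean(), bias.grad
+
calc_grad(xb, yb, linear1)
weights.grad.mean(), bias.grad
+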
The gradients have changed! The reason for this is that loss.backward actually adds the gradients of loss to any gradients that are currently stored. So, we have to set the current gradients to 0 first:
+
+
weights.grad.zero_()
bias.grad.zero_();
+
+
+
note: Inplace Operations: Methods in PyTorch whose names end in an underscore modify their objects in place. For instance, bias.zero_() sets all elements of the tensor bias to 0.
+
+
Our only remaining step is to update the weights and biases based on the gradient and learning rate. When we do so, we have to tell PyTorch not to take the gradient of this step too—otherwise things will get very confusing when we try to compute the derivative at the next batch! If we assign to the data attribute of a tensor then PyTorch will not take the gradient of that step. Here’s our basic training loop for an epoch:
+
+
def train_epoch(model, lr, params):
    for xb,yb in dl:
        calc_grad(xb, yb, model)
        for p in params:
            p.data -= p.grad*lr
            p.grad.zero_()
+
+
We also want to check how we’re doing, by looking at the accuracy of the validation set. To decide if an output represents a 3 or a 7, we can just check whether it’s greater than 0. So our accuracy for each item can be calculated (using broadcasting, so no loops!) with:
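+
One way to write that, plus a helper that averages it over the validation set and a short training run (the learning rate of 1.0 and the 20 epochs are illustrative guesses, not tuned values):
+
def batch_accuracy(xb, yb):
    # an activation above 0 (equivalently, a sigmoid above 0.5) is treated as "it's a 3"
    correct = (xb > 0.0).float() == yb
    return correct.float().mean()
+
def validate_epoch(model):
    accs = [batch_accuracy(model(xb), yb) for xb, yb in valid_dl]
    return round(torch.stack(accs).mean().item(), 4)
+
lr = 1.
params = weights, bias
for i in range(20):
    train_epoch(linear1, lr, params)
    print(validate_epoch(linear1), end=' ')
+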
Looking good! We’re already about at the same accuracy as our “pixel similarity” approach, and we’ve created a general-purpose foundation we can build on. Our next step will be to create an object that will handle the SGD step for us. In PyTorch, it’s called an optimizer.
+
+
Creating an Optimizer
+
Because this is such a general foundation, PyTorch provides some useful classes to make it easier to implement. The first thing we can do is replace our linear1 function with PyTorch’s nn.Linear module. A module is an object of a class that inherits from the PyTorch nn.Module class. Objects of this class behave identically to standard Python functions, in that you can call them using parentheses and they will return the activations of a model.
+
nn.Linear does the same thing as our init_params and linear together. It contains both the weights and biases in a single class. Here’s how we replicate our model from the previous section:
+
+
linear_model = nn.Linear(28*28,1)
+
+
Every PyTorch module knows what parameters it has that can be trained; they are available through the parameters method:
+
+
w,b = linear_model.parameters()
w.shape,b.shape
+
+
(torch.Size([1, 784]), torch.Size([1]))
+
+
+
We can use this information to create an optimizer:
+
+
class BasicOptim:
    def __init__(self, params, lr): self.params, self.lr = list(params), lr

    def step(self, *args, **kwargs):
        for p in self.params: p.data -= p.grad.data * self.lr

    def zero_grad(self, *args, **kwargs):
        for p in self.params: p.grad = None
+
+
We can create our optimizer by passing in the model’s parameters:
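+
For example, along with a version of the training loop that uses it; train_model is the helper referred to in the next paragraph (this redefines train_epoch to delegate the stepping and zeroing to the optimizer):
+
opt = BasicOptim(linear_model.parameters(), lr)
+
def train_epoch(model):
    for xb, yb in dl:
        calc_grad(xb, yb, model)
        opt.step()
        opt.zero_grad()
+
def train_model(model, epochs):
    for i in range(epochs):
        train_epoch(model)
        print(validate_epoch(model), end=' ')
+
train_model(linear_model, 20)
+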
fastai also provides Learner.fit, which we can use instead of train_model. To create a Learner we first need to create a DataLoaders, by passing in our training and validation DataLoaders:
+
+
dls = DataLoaders(dl, valid_dl)
+
+
To create a Learner without using an application (such as vision_learner) we need to pass in all the elements that we’ve created in this chapter: the DataLoaders, the model, the optimization function (which will be passed the parameters), the loss function, and optionally any metrics to print:
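+
For instance (SGD here is fastai's stock SGD optimizer function, and the 10 epochs are an arbitrary choice):
+
learn = Learner(dls, nn.Linear(28*28, 1), opt_func=SGD,
                loss_func=mnist_loss, metrics=batch_accuracy)
learn.fit(10, lr=lr)
+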
As you can see, there’s nothing magic about the PyTorch and fastai classes. They are just convenient pre-packaged pieces that make your life a bit easier! (They also provide a lot of extra functionality we’ll be using in future chapters.)
+
With these classes, we can now replace our linear model with a neural network.
+
+
+
+
4.9 Adding a Nonlinearity
+
So far we have a general procedure for optimizing the parameters of a function, and we have tried it out on a very boring function: a simple linear classifier. A linear classifier is very constrained in terms of what it can do. To make it a bit more complex (and able to handle more tasks), we need to add something nonlinear between two linear classifiers—this is what gives us a neural network.
+
Here is the entire definition of a basic neural network:
+
+
def simple_net(xb):
    res = xb@w1 + b1
    res = res.max(tensor(0.0))
    res = res@w2 + b2
    return res
+
+
That’s it! All we have in simple_net is two linear classifiers with a max function between them.
+
Here, w1 and w2 are weight tensors, and b1 and b2 are bias tensors; that is, parameters that are initially randomly initialized, just like we did in the previous section:
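+
For instance, using the init_params helper from earlier (the 30 here is the number of hidden activations discussed next):
+
w1 = init_params((28*28, 30))
b1 = init_params(30)
w2 = init_params((30, 1))
b2 = init_params(1)
+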
The key point about this is that w1 has 30 output activations (which means that w2 must have 30 input activations, so they match). That means that the first layer can construct 30 different features, each representing some different mix of pixels. You can change that 30 to anything you like, to make the model more or less complex.
+
That little function res.max(tensor(0.0)) is called a rectified linear unit, also known as ReLU. We think we can all agree that rectified linear unit sounds pretty fancy and complicated… But actually, there’s nothing more to it than res.max(tensor(0.0))—in other words, replace every negative number with a zero. This tiny function is also available in PyTorch as F.relu:
+
+
plot_function(F.relu)
+
+
+
+
+
+
+
+
J: There is an enormous amount of jargon in deep learning, including terms like rectified linear unit. The vast vast majority of this jargon is no more complicated than can be implemented in a short line of code, as we saw in this example. The reality is that for academics to get their papers published they need to make them sound as impressive and sophisticated as possible. One of the ways that they do that is to introduce jargon. Unfortunately, this has the result that the field ends up becoming far more intimidating and difficult to get into than it should be. You do have to learn the jargon, because otherwise papers and tutorials are not going to mean much to you. But that doesn’t mean you have to find the jargon intimidating. Just remember, when you come across a word or phrase that you haven’t seen before, it will almost certainly turn out to be referring to a very simple concept.
+
+
The basic idea is that by using more linear layers, we can have our model do more computation, and therefore model more complex functions. But there’s no point just putting one linear layer directly after another one, because when we multiply things together and then add them up multiple times, that could be replaced by multiplying different things together and adding them up just once! That is to say, a series of any number of linear layers in a row can be replaced with a single linear layer with a different set of parameters.
+
But if we put a nonlinear function between them, such as max, then this is no longer true. Now each linear layer is actually somewhat decoupled from the other ones, and can do its own useful work. The max function is particularly interesting, because it operates as a simple if statement.
+
+
S: Mathematically, we say the composition of two linear functions is another linear function. So, we can stack as many linear classifiers as we want on top of each other, and without nonlinear functions between them, it will just be the same as one linear classifier.
+
+
Amazingly enough, it can be mathematically proven that this little function can solve any computable problem to an arbitrarily high level of accuracy, if you can find the right parameters for w1 and w2 and if you make these matrices big enough. For any arbitrarily wiggly function, we can approximate it as a bunch of lines joined together; to make it closer to the wiggly function, we just have to use shorter lines. This is known as the universal approximation theorem. The three lines of code that we have here are known as layers. The first and third are known as linear layers, and the second line of code is known variously as a nonlinearity, or activation function.
+
Just like in the previous section, we can replace this code with something a bit simpler, by taking advantage of PyTorch:
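(A sketch using the same layer sizes as the version above.)

simple_net = nn.Sequential(
    nn.Linear(28*28, 30),   # first linear layer
    nn.ReLU(),              # the nonlinearity
    nn.Linear(30, 1)        # second linear layer
)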
nn.Sequential creates a module that will call each of the listed layers or functions in turn.
+
nn.ReLU is a PyTorch module that does exactly the same thing as the F.relu function. Most functions that can appear in a model also have identical forms that are modules. Generally, it’s just a case of replacing F with nn and changing the capitalization. When using nn.Sequential, PyTorch requires us to use the module version. Since modules are classes, we have to instantiate them, which is why you see nn.ReLU() in this example.
+
Because nn.Sequential is a module, we can get its parameters, which will return a list of all the parameters of all the modules it contains. Let’s try it out! As this is a deeper model, we’ll use a lower learning rate and a few more epochs.
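For example (a sketch; mnist_loss and batch_accuracy are again assumed from earlier, and the exact learning rate and epoch count are our choices):

learn = Learner(dls, simple_net, opt_func=SGD,
                loss_func=mnist_loss, metrics=batch_accuracy)
learn.fit(40, 0.1)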
We’re not showing the 40 lines of output here to save room; the training process is recorded in learn.recorder, with the table of output stored in the values attribute, so we can plot the accuracy over training as:
+
+
plt.plot(L(learn.recorder.values).itemgot(2));
+
+
+
+
+
+
+
And we can view the final accuracy:
+
+
learn.recorder.values[-1][2]
+
+
0.982826292514801
+
+
+
At this point we have something that is rather magical:
+
+
A function that can solve any problem to any level of accuracy (the neural network) given the correct set of parameters
+
A way to find the best set of parameters for any function (stochastic gradient descent)
+
+
This is why deep learning can do such fantastic things, things that can seem almost magical. Believing that this combination of simple techniques can really solve any problem is one of the biggest steps that we find many students have to take. It seems too good to be true—surely things should be more difficult and complicated than this? Our recommendation: try it out! We just tried it on the MNIST dataset and you have seen the results. And since we are doing everything from scratch ourselves (except for calculating the gradients), you know that there is no special magic hiding behind the scenes.
+
+
Going Deeper
+
There is no need to stop at just two linear layers. We can add as many as we want, as long as we add a nonlinearity between each pair of linear layers. As you will learn, however, the deeper the model gets, the harder it is to optimize the parameters in practice. Later in this book you will learn about some simple but brilliantly effective techniques for training deeper models.
+
We already know that a single nonlinearity with two linear layers is enough to approximate any function. So why would we use deeper models? The reason is performance. With a deeper model (that is, one with more layers) we do not need to use as many parameters; it turns out that we can use smaller matrices with more layers, and get better results than we would get with larger matrices and fewer layers.
+
That means that we can train the model more quickly, and it will take up less memory. In the 1990s researchers were so focused on the universal approximation theorem that very few were experimenting with more than one nonlinearity. This theoretical but not practical foundation held back the field for years. Some researchers, however, did experiment with deep models, and eventually were able to show that these models could perform much better in practice. Eventually, theoretical results were developed which showed why this happens. Today, it is extremely unusual to find anybody using a neural network with just one nonlinearity.
+
Here is what happens when we train an 18-layer model using the same approach we saw in Chapter 1:
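(A sketch of that experiment; path is assumed to be the MNIST_SAMPLE path used earlier in the chapter, and the exact settings are illustrative.)

dls = ImageDataLoaders.from_folder(path)
learn = vision_learner(dls, resnet18, pretrained=False,
                       loss_func=F.cross_entropy, metrics=accuracy)
learn.fit_one_cycle(1, 0.1)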
Nearly 100% accuracy! That’s a big difference compared to our simple neural net. But as you’ll learn in the remainder of this book, there are just a few little tricks you need to use to get such great results from scratch yourself. You already know the key foundational pieces. (Of course, even once you know all the tricks, you’ll nearly always want to work with the pre-built classes provided by PyTorch and fastai, because they save you having to think about all the little details yourself.)
+
+
+
+
4.10 Jargon Recap
+
Congratulations: you now know how to create and train a deep neural network from scratch! We’ve gone through quite a few steps to get to this point, but you might be surprised at how simple it really is.
+
Now that we are at this point, it is a good opportunity to define, and review, some jargon and key concepts.
+
A neural network contains a lot of numbers, but they are only of two types: numbers that are calculated, and the parameters that these numbers are calculated from. This gives us the two most important pieces of jargon to learn:
+
+
Activations:: Numbers that are calculated (both by linear and nonlinear layers)
+
Parameters:: Numbers that are randomly initialized, and optimized (that is, the numbers that define the model)
+
+
We will often talk in this book about activations and parameters. Remember that they have very specific meanings. They are numbers. They are not abstract concepts, but they are actual specific numbers that are in your model. Part of becoming a good deep learning practitioner is getting used to the idea of actually looking at your activations and parameters, and plotting them and testing whether they are behaving correctly.
+
Our activations and parameters are all contained in tensors. These are simply regularly shaped arrays—for example, a matrix. Matrices have rows and columns; we call these the axes or dimensions. The number of dimensions of a tensor is its rank. There are some special tensors:
+
+
Rank zero: scalar
+
Rank one: vector
+
Rank two: matrix
+
+
A neural network contains a number of layers. Each layer is either linear or nonlinear. We generally alternate between these two kinds of layers in a neural network. Sometimes people refer to both a linear layer and its subsequent nonlinearity together as a single layer. Yes, this is confusing. Sometimes a nonlinearity is referred to as an activation function.
+
Table 4.1 summarizes the key concepts related to SGD.
+
+
+
Table 4.1: Deep learning vocabulary

| Term | Meaning |
|------|---------|
| ReLU | Function that returns 0 for negative numbers and doesn’t change positive numbers. |
| Mini-batch | A small group of inputs and labels gathered together in two arrays. A gradient descent step is updated on this batch (rather than a whole epoch). |
| Forward pass | Applying the model to some input and computing the predictions. |
| Loss | A value that represents how well (or badly) our model is doing. |
| Gradient | The derivative of the loss with respect to some parameter of the model. |
| Backward pass | Computing the gradients of the loss with respect to all model parameters. |
| Gradient descent | Taking a step in the directions opposite to the gradients to make the model parameters a little bit better. |
| Learning rate | The size of the step we take when applying SGD to update the parameters of the model. |
+
+
+
+
+
+
+
+
note: Choose Your Own Adventure Reminder: Did you choose to skip over chapters 2 & 3, in your excitement to peek under the hood? Well, here’s your reminder to head back to chapter 2 now, because you’ll be needing to know that stuff very soon!
+
+
+
+
4.11 Questionnaire
+
+
How is a grayscale image represented on a computer? How about a color image?
+
How are the files and folders in the MNIST_SAMPLE dataset structured? Why?
+
Explain how the “pixel similarity” approach to classifying digits works.
+
What is a list comprehension? Create one now that selects odd numbers from a list and doubles them.
+
What is a “rank-3 tensor”?
+
What is the difference between tensor rank and shape? How do you get the rank from the shape?
+
What are RMSE and L1 norm?
+
How can you apply a calculation on thousands of numbers at once, many thousands of times faster than a Python loop?
+
Create a 3×3 tensor or array containing the numbers from 1 to 9. Double it. Select the bottom-right four numbers.
+
What is broadcasting?
+
Are metrics generally calculated using the training set, or the validation set? Why?
+
What is SGD?
+
Why does SGD use mini-batches?
+
What are the seven steps in SGD for machine learning?
+
How do we initialize the weights in a model?
+
What is “loss”?
+
Why can’t we always use a high learning rate?
+
What is a “gradient”?
+
Do you need to know how to calculate gradients yourself?
+
Why can’t we use accuracy as a loss function?
+
Draw the sigmoid function. What is special about its shape?
+
What is the difference between a loss function and a metric?
+
What is the function to calculate new weights using a learning rate?
+
What does the DataLoader class do?
+
Write pseudocode showing the basic steps taken in each epoch for SGD.
+
Create a function that, if passed two arguments [1,2,3,4] and 'abcd', returns [(1, 'a'), (2, 'b'), (3, 'c'), (4, 'd')]. What is special about that output data structure?
+
What does view do in PyTorch?
+
What are the “bias” parameters in a neural network? Why do we need them?
+
What does the @ operator do in Python?
+
What does the backward method do?
+
Why do we have to zero the gradients?
+
What information do we have to pass to Learner?
+
Show Python or pseudocode for the basic steps of a training loop.
+
What is “ReLU”? Draw a plot of it for values from -2 to +2.
+
What is an “activation function”?
+
What’s the difference between F.relu and nn.ReLU?
+
The universal approximation theorem shows that any function can be approximated as closely as needed using just one nonlinearity. So why do we normally use more?
+
+
+
Further Research
+
+
Create your own implementation of Learner from scratch, based on the training loop shown in this chapter.
+
Complete all the steps in this chapter using the full MNIST datasets (that is, for all digits, not just 3s and 7s). This is a significant project and will take you quite a bit of time to complete! You’ll need to do some of your own research to figure out how to overcome some obstacles you’ll meet on the way.
14 ResNets

In this chapter, we will build on top of the CNNs introduced in the previous chapter and explain the ResNet (residual network) architecture. It was introduced in 2015 by Kaiming He et al. in the article “Deep Residual Learning for Image Recognition” and is by far the most used model architecture nowadays. More recent developments in image models almost always use the same trick of residual connections, and most of the time, they are just a tweak of the original ResNet.
+
We will first show you the basic ResNet as it was first designed, then explain to you what modern tweaks make it more performant. But first, we will need a problem a little bit more difficult than the MNIST dataset, since we are already close to 100% accuracy with a regular CNN on it.
+
+
14.1 Going Back to Imagenette
+
It’s going to be tough to judge any improvements we make to our models when we are already at an accuracy that is as high as we saw on MNIST in the previous chapter, so we will tackle a tougher image classification problem by going back to Imagenette. We’ll stick with small images to keep things reasonably fast.
+
Let’s grab the data—we’ll use the already-resized 160 px version to make things faster still, and will random crop to 128 px:
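(A sketch using the fastai data block API, assuming from fastai.vision.all import *; the presizing and augmentation settings shown here are our assumptions.)

def get_data(url, presize, resize):
    path = untar_data(url)
    return DataBlock(
        blocks=(ImageBlock, CategoryBlock), get_items=get_image_files,
        # Imagenette stores its validation images in a folder called 'val'
        splitter=GrandparentSplitter(valid_name='val'),
        get_y=parent_label, item_tfms=Resize(presize),
        # resize on the CPU first, then augment and random-crop to `resize` in batches
        batch_tfms=[*aug_transforms(min_scale=0.5, size=resize),
                    Normalize.from_stats(*imagenet_stats)],
    ).dataloaders(path, bs=128)

dls = get_data(URLs.IMAGENETTE_160, 160, 128)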
When we looked at MNIST we were dealing with 28×28-pixel images. For Imagenette we are going to be training with 128×128-pixel images. Later, we would like to be able to use larger images as well—at least as big as 224×224 pixels, the ImageNet standard. Do you recall how we managed to get a single vector of activations for each image out of the MNIST convolutional neural network?
+
The approach we used was to ensure that there were enough stride-2 convolutions such that the final layer would have a grid size of 1. Then we just flattened out the unit axes that we ended up with, to get a vector for each image (so, a matrix of activations for a mini-batch). We could do the same thing for Imagenette, but that would cause two problems:
+
+
We’d need lots of stride-2 layers to make our grid 1×1 at the end—perhaps more than we would otherwise choose.
+
The model would not work on images of any size other than the size we originally trained on.
+
+
One approach to dealing with the first of these issues would be to flatten the final convolutional layer in a way that handles a grid size other than 1×1. That is, we could simply flatten a matrix into a vector as we have done before, by laying out each row after the previous row. In fact, this is the approach that convolutional neural networks up until 2013 nearly always took. A famous example from the following year is VGG, a top performer in the 2014 ImageNet competition that is still sometimes used today. But there was another problem with this architecture: not only did it not work with images other than those of the same size used in the training set, but it required a lot of memory, because flattening out the convolutional layer resulted in many activations being fed into the final layers. Therefore, the weight matrices of the final layers were enormous.
+
This problem was solved through the creation of fully convolutional networks. The trick in fully convolutional networks is to take the average of activations across a convolutional grid. In other words, we can simply use this function:
+
+
def avg_pool(x): return x.mean((2,3))
+
+
As you see, it is taking the mean over the x- and y-axes. This function will always convert a grid of activations into a single activation per image. PyTorch provides a slightly more versatile module called nn.AdaptiveAvgPool2d, which averages a grid of activations into whatever sized destination you require (although we nearly always use a size of 1).
+
A fully convolutional network, therefore, has a number of convolutional layers, some of which will be stride 2, at the end of which is an adaptive average pooling layer, a flatten layer to remove the unit axes, and finally a linear layer. Here is our first fully convolutional network:
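(A sketch; the channel sizes are our choices, and block is the piece we will swap out for other variants shortly.)

def block(ni, nf): return ConvLayer(ni, nf, stride=2)

def get_model():
    return nn.Sequential(
        # five stride-2 convs: a 128-pixel input shrinks 128 -> 64 -> 32 -> 16 -> 8 -> 4
        block(3, 16),
        block(16, 32),
        block(32, 64),
        block(64, 128),
        block(128, 256),
        nn.AdaptiveAvgPool2d(1),   # average the 4x4 grid down to 1x1
        Flatten(),                 # remove the unit axes
        nn.Linear(256, dls.c))     # one output per category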
We’re going to be replacing the implementation of block in the network with other variants in a moment, which is why we’re not calling it conv any more. We’re also saving some time by taking advantage of fastai’s ConvLayer, which already provides the functionality of conv from the last chapter (plus a lot more!).
+
+
stop: Consider this question: would this approach make sense for an optical character recognition (OCR) problem such as MNIST? The vast majority of practitioners tackling OCR and similar problems tend to use fully convolutional networks, because that’s what nearly everybody learns nowadays. But it really doesn’t make any sense! You can’t decide, for instance, whether a number is a 3 or an 8 by slicing it into small pieces, jumbling them up, and deciding whether on average each piece looks like a 3 or an 8. But that’s what adaptive average pooling effectively does! Fully convolutional networks are only really a good choice for objects that don’t have a single correct orientation or size (e.g., most natural photos).
+
+
Once we are done with our convolutional layers, we will get activations of size bs x ch x h x w (batch size, a certain number of channels, height, and width). We want to convert this to a tensor of size bs x ch, so we take the average over the last two dimensions and flatten the trailing 1×1 dimension like we did in our previous model.
+
This is different from regular pooling in the sense that those layers will generally take the average (for average pooling) or the maximum (for max pooling) of a window of a given size. For instance, max pooling layers of size 2, which were very popular in older CNNs, reduce the size of our image by half on each dimension by taking the maximum of each 2×2 window (with a stride of 2).
+
As before, we can define a Learner with our custom model and then train it on the data we grabbed earlier:
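(A sketch; the mixed-precision to_fp16 call is an optional speedup and an assumption on our part.)

def get_learner(m):
    return Learner(dls, m, loss_func=nn.CrossEntropyLoss(),
                   metrics=accuracy).to_fp16()

learn = get_learner(get_model())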
3e-3 is often a good learning rate for CNNs, and that appears to be the case here too, so let’s try that:
+
+
learn.fit_one_cycle(5, 3e-3)
+
+
+
+
+
| epoch | train_loss | valid_loss | accuracy | time |
|-------|------------|------------|----------|------|
| 0 | 1.901582 | 2.155090 | 0.325350 | 00:07 |
| 1 | 1.559855 | 1.586795 | 0.507771 | 00:07 |
| 2 | 1.296350 | 1.295499 | 0.571720 | 00:07 |
| 3 | 1.144139 | 1.139257 | 0.639236 | 00:07 |
| 4 | 1.049770 | 1.092619 | 0.659108 | 00:07 |
+
+
+
+
+
+
That’s a pretty good start, considering we have to pick the correct one of 10 categories, and we’re training from scratch for just 5 epochs! We can do way better than this using a deeper model, but just stacking new layers won’t really improve our results (you can try and see for yourself!). To work around this problem, ResNets introduce the idea of skip connections. We’ll explore those and other aspects of ResNets in the next section.
+
+
+
14.2 Building a Modern CNN: ResNet
+
We now have all the pieces we need to build the models we have been using in our computer vision tasks since the beginning of this book: ResNets. We’ll introduce the main idea behind them and show how it improves accuracy on Imagenette compared to our previous model, before building a version with all the recent tweaks.
+
+
Skip Connections
+
In 2015, the authors of the ResNet paper noticed something that they found curious. Even after using batchnorm, they saw that a network using more layers was doing less well than a network using fewer layers—and there were no other differences between the models. Most interestingly, the difference was observed not only in the validation set, but also in the training set; so, it wasn’t just a generalization issue, but a training issue. As the paper explains:
+
+
Unexpectedly, such degradation is not caused by overfitting, and adding more layers to a suitably deep model leads to higher training error, as [previously reported] and thoroughly verified by our experiments.
+
+
This phenomenon was illustrated by the graph in Figure 14.1, with training error on the left and test error on the right.
+
+
+
+
As the authors mention here, they are not the first people to have noticed this curious fact. But they were the first to make a very important leap:
+
+
Let us consider a shallower architecture and its deeper counterpart that adds more layers onto it. There exists a solution by construction to the deeper model: the added layers are identity mapping, and the other layers are copied from the learned shallower model.
+
+
As this is an academic paper this process is described in a rather inaccessible way, but the concept is actually very simple: start with a 20-layer neural network that is trained well, and add another 36 layers that do nothing at all (for instance, they could be linear layers with a single weight equal to 1, and bias equal to 0). The result will be a 56-layer network that does exactly the same thing as the 20-layer network, proving that there are always deep networks that should be at least as good as any shallow network. But for some reason, SGD does not seem able to find them.
+
+
jargon: Identity mapping: Returning the input without changing it at all. This process is performed by an identity function.
+
+
Actually, there is another way to create those extra 36 layers, which is much more interesting. What if we replaced every occurrence of conv(x) with x + conv(x), where conv is the function from the previous chapter that adds a second convolution, then a batchnorm layer, then a ReLU? Furthermore, recall that batchnorm does gamma*y + beta. What if we initialized gamma to zero for every one of those final batchnorm layers? Then our conv(x) for those extra 36 layers will always be equal to zero, which means x+conv(x) will always be equal to x.
+
What has that gained us? The key thing is that those 36 extra layers, as they stand, are an identity mapping, but they have parameters, which means they are trainable. So, we can start with our best 20-layer model, add these 36 extra layers which initially do nothing at all, and then fine-tune the whole 56-layer model. Those extra 36 layers can then learn the parameters that make them most useful.
+
The ResNet paper actually proposed a variant of this, which is to instead “skip over” every second convolution, so effectively we get x+conv2(conv1(x)). This is shown by the diagram in Figure 14.2 (from the paper).
+
+
+
+
That arrow on the right is just the x part of x+conv2(conv1(x)), and is known as the identity branch or skip connection. The path on the left is the conv2(conv1(x)) part. You can think of the identity path as providing a direct route from the input to the output.
+
In a ResNet, we don’t actually proceed by first training a smaller number of layers, and then adding new layers on the end and fine-tuning. Instead, we use ResNet blocks like the one in Figure 14.2 throughout the CNN, initialized from scratch in the usual way, and trained with SGD in the usual way. We rely on the skip connections to make the network easier to train with SGD.
+
There’s another (largely equivalent) way to think of these ResNet blocks. This is how the paper describes it:
+
+
Instead of hoping each few stacked layers directly fit a desired underlying mapping, we explicitly let these layers fit a residual mapping. Formally, denoting the desired underlying mapping as H(x), we let the stacked nonlinear layers fit another mapping of F(x) := H(x)−x. The original mapping is recast into F(x)+x. We hypothesize that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers.
+
+
Again, this is rather inaccessible prose—so let’s try to restate it in plain English! If the outcome of a given layer is x, when using a ResNet block that returns y = x+block(x) we’re not asking the block to predict y, we are asking it to predict the difference between y and x. So the job of those blocks isn’t to predict certain features, but to minimize the error between x and the desired y. A ResNet is, therefore, good at learning about slight differences between doing nothing and passing through a block of two convolutional layers (with trainable weights). This is how these models got their name: they’re predicting residuals (reminder: “residual” is prediction minus target).
+
One key concept that both of these two ways of thinking about ResNets share is the idea of ease of learning. This is an important theme. Recall the universal approximation theorem, which states that a sufficiently large network can learn anything. This is still true, but there turns out to be a very important difference between what a network can learn in principle, and what it is easy for it to learn with realistic data and training regimes. Many of the advances in neural networks over the last decade have been like the ResNet block: the result of realizing how to make something that was always possible actually feasible.
+
+
note: True Identity Path: The original paper didn’t actually do the trick of using zero for the initial value of gamma in the last batchnorm layer of each block; that came a couple of years later. So, the original version of ResNet didn’t quite begin training with a truly identity path through the ResNet blocks, but nonetheless having the ability to “navigate through” the skip connections did indeed make it train better. Adding the batchnorm gamma init trick made the models train at even higher learning rates.
+
+
Here’s the definition of a simple ResNet block (where norm_type=NormType.BatchZero causes fastai to init the gamma weights of the last batchnorm layer to zero):
+
+
class ResBlock(Module):
    def __init__(self, ni, nf):
        self.convs = nn.Sequential(
            ConvLayer(ni, nf),
            ConvLayer(nf, nf, norm_type=NormType.BatchZero))

    def forward(self, x): return x + self.convs(x)
+
+
There are two problems with this, however: it can’t handle a stride other than 1, and it requires that ni==nf. Stop for a moment to think carefully about why this is.
+
The issue is that with a stride of, say, 2 on one of the convolutions, the grid size of the output activations will be half the size on each axis of the input. So then we can’t add that back to x in forward because x and the output activations have different dimensions. The same basic issue occurs if ni!=nf: the shapes of the input and output connections won’t allow us to add them together.
+
To fix this, we need a way to change the shape of x to match the result of self.convs. Halving the grid size can be done using an average pooling layer with a stride of 2: that is, a layer that takes 2×2 patches from the input and replaces them with their average.
+
Changing the number of channels can be done by using a convolution. We want this skip connection to be as close to an identity map as possible, however, which means making this convolution as simple as possible. The simplest possible convolution is one where the kernel size is 1. That means that the kernel is size ni*nf*1*1, so it’s only doing a dot product over the channels of each input pixel—it’s not combining across pixels at all. This kind of 1x1 convolution is very widely used in modern CNNs, so take a moment to think about how it works.
+
+
jargon: 1x1 convolution: A convolution with a kernel size of 1.
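To see this concretely, here is a tiny sketch (the shapes are arbitrary) showing that a 1×1 convolution changes the number of channels but leaves the grid size untouched:

x = torch.randn(1, 64, 8, 8)     # activations: bs x ch x h x w
w = torch.randn(32, 64, 1, 1)    # a 1x1 kernel: nf x ni x 1 x 1
F.conv2d(x, w).shape             # torch.Size([1, 32, 8, 8]) - same 8x8 grid, new channel count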
+
+
Here’s a ResBlock using these tricks to handle changing shape in the skip connection:
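(A sketch consistent with the description that follows; the helper name _conv_block is our own.)

def _conv_block(ni, nf, stride):
    return nn.Sequential(
        ConvLayer(ni, nf, stride=stride),
        ConvLayer(nf, nf, act_cls=None, norm_type=NormType.BatchZero))

class ResBlock(Module):
    def __init__(self, ni, nf, stride=1):
        self.convs = _conv_block(ni, nf, stride)
        # identity path: a 1x1 conv to match channels, average pooling to match grid size
        self.idconv = noop if ni==nf else ConvLayer(ni, nf, 1, act_cls=None)
        self.pool = noop if stride==1 else nn.AvgPool2d(2, ceil_mode=True)

    # the final ReLU comes after the addition
    def forward(self, x): return F.relu(self.convs(x) + self.idconv(self.pool(x)))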
Note that we’re using the noop function here, which simply returns its input unchanged (noop is a computer science term that stands for “no operation”). In this case, idconv does nothing at all if ni==nf, and pool does nothing if stride==1, which is what we wanted in our skip connection.
+
Also, you’ll see that we’ve removed the ReLU (act_cls=None) from the final convolution in convs and from idconv, and moved it to after we add the skip connection. The thinking behind this is that the whole ResNet block is like a layer, and you want your activation to be after your layer.
+
Let’s replace our block with ResBlock, and try it out:
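(A sketch reusing get_model and get_learner from above, with the same training settings as before.)

def block(ni, nf): return ResBlock(ni, nf, stride=2)

learn = get_learner(get_model())
learn.fit_one_cycle(5, 3e-3)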
It’s not much better. But the whole point of this was to allow us to train deeper models, and we’re not really taking advantage of that yet. To create a model that’s, say, twice as deep, all we need to do is replace our block with two ResBlocks in a row:
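(A sketch of that doubled-depth block.)

def block(ni, nf):
    return nn.Sequential(ResBlock(ni, nf, stride=2), ResBlock(nf, nf))

learn = get_learner(get_model())
learn.fit_one_cycle(5, 3e-3)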
The authors of the ResNet paper went on to win the 2015 ImageNet challenge. At the time, this was by far the most important annual event in computer vision. We have already seen another ImageNet winner: the 2013 winners, Zeiler and Fergus. It is interesting to note that in both cases the starting points for the breakthroughs were experimental observations: observations about what layers actually learn, in the case of Zeiler and Fergus, and observations about which kinds of networks can be trained, in the case of the ResNet authors. This ability to design and analyze thoughtful experiments, or even just to see an unexpected result, say “Hmmm, that’s interesting,” and then, most importantly, set about figuring out what on earth is going on, with great tenacity, is at the heart of many scientific discoveries. Deep learning is not like pure mathematics. It is a heavily experimental field, so it’s important to be a strong practitioner, not just a theoretician.
+
Since the ResNet was introduced, it’s been widely studied and applied to many domains. One of the most interesting papers, published in 2018, is Hao Li et al.’s “Visualizing the Loss Landscape of Neural Nets”. It shows that using skip connections helps smooth the loss function, which makes training easier as it avoids falling into a very sharp area. Figure 14.3 shows a stunning picture from the paper, illustrating the difference between the bumpy terrain that SGD has to navigate to optimize a regular CNN (left) versus the smooth surface of a ResNet (right).
+
+
+
+
Our first model is already good, but further research has discovered more tricks we can apply to make it better. We’ll look at those next.
+
+
+
A State-of-the-Art ResNet
+
In “Bag of Tricks for Image Classification with Convolutional Neural Networks”, Tong He et al. study different variations of the ResNet architecture that come at almost no additional cost in terms of number of parameters or computation. By using a tweaked ResNet-50 architecture and Mixup they achieved 94.6% top-5 accuracy on ImageNet, in comparison to 92.2% with a regular ResNet-50 without Mixup. This result is better than that achieved by regular ResNet models that are twice as deep (and twice as slow, and much more likely to overfit).
+
+
jargon: top-5 accuracy: A metric testing how often the label we want is in the top 5 predictions of our model. It was used in the ImageNet competition because many of the images contained multiple objects, or contained objects that could be easily confused or may even have been mislabeled with a similar label. In these situations, looking at top-1 accuracy may be inappropriate. However, recently CNNs have been getting so good that top-5 accuracy is nearly 100%, so some researchers are using top-1 accuracy for ImageNet too now.
+
+
We’ll use this tweaked version as we scale up to the full ResNet, because it’s substantially better. It differs a little bit from our previous implementation, in that instead of just starting with ResNet blocks, it begins with a few convolutional layers followed by a max pooling layer. This is what the first layers, called the stem of the network, look like:
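(A sketch matching the _resnet_stem(3,32,32,64) call used in the ResNet class later in this section; the exact strides and pooling settings are our assumptions.)

def _resnet_stem(*sizes):
    # a few 3x3 convs (the first with stride 2), followed by max pooling
    return [
        ConvLayer(sizes[i], sizes[i+1], 3, stride=2 if i==0 else 1)
        for i in range(len(sizes)-1)
    ] + [nn.MaxPool2d(kernel_size=3, stride=2, padding=1)]

_resnet_stem(3, 32, 32, 64)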
jargon: Stem: The first few layers of a CNN. Generally, the stem has a different structure than the main body of the CNN.
+
+
The reason that we have a stem of plain convolutional layers, instead of ResNet blocks, is based on a very important insight about all deep convolutional neural networks: the vast majority of the computation occurs in the early layers. Therefore, we should keep the early layers as fast and simple as possible.
+
To see why so much computation occurs in the early layers, consider the very first convolution on a 128-pixel input image. If it is a stride-1 convolution, then it will apply the kernel to every one of the 128×128 pixels. That’s a lot of work! In the later layers, however, the grid size could be as small as 4×4 or even 2×2, so there are far fewer kernel applications to do.
+
On the other hand, the first-layer convolution only has 3 input features and 32 output features. Since it is a 3×3 kernel, this is 3×32×3×3 = 864 parameters in the weights. But the last convolution will have 256 input features and 512 output features, resulting in 1,179,648 weights! So the first layers contain the vast majority of the computation, but the last layers contain the vast majority of the parameters.
+
A ResNet block takes more computation than a plain convolutional block, since (in the stride-2 case) a ResNet block has three convolutions and a pooling layer. That’s why we want to have plain convolutions to start off our ResNet.
+
We’re now ready to show the implementation of a modern ResNet, with the “bag of tricks.” It uses four groups of ResNet blocks, with 64, 128, 256, then 512 filters. Each group starts with a stride-2 block, except for the first one, since it’s just after a MaxPooling layer:
+
+
class ResNet(nn.Sequential):
    def __init__(self, n_out, layers, expansion=1):
        stem = _resnet_stem(3, 32, 32, 64)
        self.block_szs = [64, 64, 128, 256, 512]
        for i in range(1, 5): self.block_szs[i] *= expansion
        blocks = [self._make_layer(*o) for o in enumerate(layers)]
        super().__init__(*stem, *blocks,
                         nn.AdaptiveAvgPool2d(1), Flatten(),
                         nn.Linear(self.block_szs[-1], n_out))

    def _make_layer(self, idx, n_layers):
        stride = 1 if idx==0 else 2
        ch_in, ch_out = self.block_szs[idx:idx+2]
        return nn.Sequential(*[
            ResBlock(ch_in if i==0 else ch_out, ch_out, stride if i==0 else 1)
            for i in range(n_layers)
        ])
+
+
The _make_layer function is just there to create a series of n_layers blocks. The first one is going from ch_in to ch_out with the indicated stride and all the others are blocks of stride 1 with ch_out to ch_out tensors. Once the blocks are defined, our model is purely sequential, which is why we define it as a subclass of nn.Sequential. (Ignore the expansion parameter for now; we’ll discuss it in the next section. For now, it’ll be 1, so it doesn’t do anything.)
+
The various versions of the models (ResNet-18, -34, -50, etc.) just change the number of blocks in each of those groups. This is the definition of a ResNet-18:
+
+
rn = ResNet(dls.c, [2,2,2,2])
+
+
Let’s train it for a little bit and see how it fares compared to the previous model:
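(Reusing get_learner from earlier; the epoch count and learning rate match the previous runs.)

learn = get_learner(rn)
learn.fit_one_cycle(5, 3e-3)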
Even though we have more channels (and our model is therefore even more accurate), our training is just as fast as before, thanks to our optimized stem.
+
To make our model deeper without taking too much compute or memory, we can use another kind of layer introduced by the ResNet paper for ResNets with a depth of 50 or more: the bottleneck layer.
+
+
+
Bottleneck Layers
+
Instead of stacking two convolutions with a kernel size of 3, bottleneck layers use three different convolutions: two 1×1 (at the beginning and the end) and one 3×3, as shown on the right in Figure 14.4.
+
+
+
+
Why is that useful? 1×1 convolutions are much faster, so even if this seems to be a more complex design, this block executes faster than the first ResNet block we saw. This then lets us use more filters: as we see in the illustration, the number of filters in and out is 4 times higher (256 instead of 64), while the 1×1 convolutions diminish and then restore the number of channels (hence the name bottleneck). The overall impact is that we can use more filters in the same amount of time.
+
Let’s try replacing our ResBlock with this bottleneck design:
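(A sketch of the bottleneck convolutional path, written as a drop-in replacement for the inner helper used by ResBlock; the helper name is our own.)

def _conv_block(ni, nf, stride):
    return nn.Sequential(
        ConvLayer(ni, nf//4, 1),                 # 1x1: squeeze down to nf//4 channels
        ConvLayer(nf//4, nf//4, stride=stride),  # 3x3 in the narrow "bottleneck"
        ConvLayer(nf//4, nf, 1, act_cls=None, norm_type=NormType.BatchZero))  # 1x1: expand back to nf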
We’ll use this to create a ResNet-50 with group sizes of (3,4,6,3). We now need to pass 4 in to the expansion parameter of ResNet, since we need to start with four times fewer channels and we’ll end with four times more channels.
+
Deeper networks like this don’t generally show improvements when training for only 5 epochs, so we’ll bump it up to 20 epochs this time to make the most of our bigger model. And to really get great results, let’s use bigger images too:
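(A sketch using the same get_data helper with the larger Imagenette download; the exact sizes are our assumptions.)

dls = get_data(URLs.IMAGENETTE_320, presize=320, resize=224)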
We don’t have to do anything to account for the larger 224-pixel images; thanks to our fully convolutional network, it just works. This is also why we were able to do progressive resizing earlier in the book—the models we used were fully convolutional, so we were even able to fine-tune models trained with different sizes. We can now train our model and see the effects:
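(A sketch; the group sizes and expansion follow the ResNet-50 description above, and the learning rate is our choice.)

rn = ResNet(dls.c, [3,4,6,3], 4)
learn = get_learner(rn)
learn.fit_one_cycle(20, 3e-3)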
We’re getting a great result now! Try adding Mixup, and then training this for a hundred epochs while you go get lunch. You’ll have yourself a very accurate image classifier, trained from scratch.
+
The bottleneck design we’ve shown here is typically only used in ResNet-50, -101, and -152 models. ResNet-18 and -34 models usually use the non-bottleneck design seen in the previous section. However, we’ve noticed that the bottleneck layer generally works better even for the shallower networks. This just goes to show that the little details in papers tend to stick around for years, even if they’re actually not quite the best design! Questioning assumptions and “stuff everyone knows” is always a good idea, because this is still a new field, and there are lots of details that aren’t always done well.
+
+
+
+
14.3 Conclusion
+
You have now seen how the models we have been using for computer vision since the first chapter are built, using skip connections to allow deeper models to be trained. Even if there has been a lot of research into better architectures, they all use one version or another of this trick, to make a direct path from the input to the end of the network. When using transfer learning, the ResNet is the pretrained model. In the next chapter, we will look at the final details of how the models we actually used were built from it.
+
+
+
14.4 Questionnaire
+
+
How did we get to a single vector of activations in the CNNs used for MNIST in previous chapters? Why isn’t that suitable for Imagenette?
+
What do we do for Imagenette instead?
+
What is “adaptive pooling”?
+
What is “average pooling”?
+
Why do we need Flatten after an adaptive average pooling layer?
+
What is a “skip connection”?
+
Why do skip connections allow us to train deeper models?
+
What does Figure 14.1 show? How did that lead to the idea of skip connections?
+
What is “identity mapping”?
+
What is the basic equation for a ResNet block (ignoring batchnorm and ReLU layers)?
+
What do ResNets have to do with residuals?
+
How do we deal with the skip connection when there is a stride-2 convolution? How about when the number of filters changes?
+
How can we express a 1×1 convolution in terms of a vector dot product?
+
Create a 1x1 convolution with F.conv2d or nn.Conv2d and apply it to an image. What happens to the shape of the image?
When is top-5 accuracy a better metric than top-1 accuracy?
+
What is the “stem” of a CNN?
+
Why do we use plain convolutions in the CNN stem, instead of ResNet blocks?
+
How does a bottleneck block differ from a plain ResNet block?
+
Why is a bottleneck block faster?
+
How do fully convolutional nets (and nets with adaptive pooling in general) allow for progressive resizing?
+
+
+
Further Research
+
+
Try creating a fully convolutional net with adaptive average pooling for MNIST (note that you’ll need fewer stride-2 layers). How does it compare to a network without such a pooling layer?
+
In Chapter 17 we introduce Einstein summation notation. Skip ahead to see how this works, and then write an implementation of the 1×1 convolution operation using torch.einsum. Compare it to the same operation using torch.conv2d.
+
Write a “top-5 accuracy” function using plain PyTorch or plain Python.
+
Train a model on Imagenette for more epochs, with and without label smoothing. Take a look at the Imagenette leaderboards and see how close you can get to the best results shown. Read the linked pages describing the leading approaches.
+
+
+
+
+
\ No newline at end of file
diff --git a/resnet_files/figure-html/cell-10-output-2.png b/resnet_files/figure-html/cell-10-output-2.png
new file mode 100644
index 0000000..d4721fb
Binary files /dev/null and b/resnet_files/figure-html/cell-10-output-2.png differ
diff --git a/resnet_files/figure-html/cell-6-output-1.png b/resnet_files/figure-html/cell-6-output-1.png
new file mode 100644
index 0000000..d0b2bf1
Binary files /dev/null and b/resnet_files/figure-html/cell-6-output-1.png differ
diff --git a/robots.txt b/robots.txt
new file mode 100644
index 0000000..6c8076f
--- /dev/null
+++ b/robots.txt
@@ -0,0 +1 @@
+Sitemap: https://fastai.github.io/fastbook2e/sitemap.xml
diff --git a/search.json b/search.json
new file mode 100644
index 0000000..61aeea0
--- /dev/null
+++ b/search.json
@@ -0,0 +1,848 @@
+[
+ {
+ "objectID": "intro.html",
+ "href": "intro.html",
+ "title": "1 Your Deep Learning Journey",
+ "section": "",
+ "text": "1.1 Deep Learning Is for Everyone\nA lot of people assume that you need all kinds of hard-to-find stuff to get great results with deep learning, but as you’ll see in this book, those people are wrong. Table 1.1 is a list of a few thing you absolutely don’t need to do world-class deep learning.\nDeep learning is a computer technique to extract and transform data–-with use cases ranging from human speech recognition to animal imagery classification–-by using multiple layers of neural networks. Each of these layers takes its inputs from previous layers and progressively refines them. The layers are trained by algorithms that minimize their errors and improve their accuracy. In this way, the network learns to perform a specified task. We will discuss training algorithms in detail in the next section.\nDeep learning has power, flexibility, and simplicity. That’s why we believe it should be applied across many disciplines. These include the social and physical sciences, the arts, medicine, finance, scientific research, and many more. To give a personal example, despite having no background in medicine, Jeremy started Enlitic, a company that uses deep learning algorithms to diagnose illness and disease. Within months of starting the company, it was announced that its algorithm could identify malignant tumors more accurately than radiologists.\nHere’s a list of some of the thousands of tasks in different areas at which deep learning, or methods heavily using deep learning, is now the best in the world:\nWhat is remarkable is that deep learning has such varied application yet nearly all of deep learning is based on a single type of model, the neural network.\nBut neural networks are not in fact completely new. In order to have a wider perspective on the field, it is worth it to start with a bit of history.",
+ "crumbs": [
+ "Getting Started",
+ "1Your Deep Learning Journey"
+ ]
+ },
+ {
+ "objectID": "intro.html#deep-learning-is-for-everyone",
+ "href": "intro.html#deep-learning-is-for-everyone",
+ "title": "1 Your Deep Learning Journey",
+ "section": "",
+ "text": "Table 1.1: What you don’t need to do deep learning\n\n\n\n\n\n\n\n\n\nMyth (don’t need)\nTruth\n\n\n\n\nLots of math\nJust high school math is sufficient\n\n\nLots of data\nWe’ve seen record-breaking results with <50 items of data\n\n\nLots of expensive computers\nYou can get what you need for state of the art work for free\n\n\n\n\n\n\n\n\n\n\nNatural language processing (NLP):: Answering questions; speech recognition; summarizing documents; classifying documents; finding names, dates, etc. in documents; searching for articles mentioning a concept\nComputer vision:: Satellite and drone imagery interpretation (e.g., for disaster resilience); face recognition; image captioning; reading traffic signs; locating pedestrians and vehicles in autonomous vehicles\nMedicine:: Finding anomalies in radiology images, including CT, MRI, and X-ray images; counting features in pathology slides; measuring features in ultrasounds; diagnosing diabetic retinopathy\nBiology:: Folding proteins; classifying proteins; many genomics tasks, such as tumor-normal sequencing and classifying clinically actionable genetic mutations; cell classification; analyzing protein/protein interactions\nImage generation:: Colorizing images; increasing image resolution; removing noise from images; converting images to art in the style of famous artists\nRecommendation systems:: Web search; product recommendations; home page layout\nPlaying games:: Chess, Go, most Atari video games, and many real-time strategy games\nRobotics:: Handling objects that are challenging to locate (e.g., transparent, shiny, lacking texture) or hard to pick up\nOther applications:: Financial and logistical forecasting, text to speech, and much more…",
+ "crumbs": [
+ "Getting Started",
+ "1Your Deep Learning Journey"
+ ]
+ },
+ {
+ "objectID": "intro.html#neural-networks-a-brief-history",
+ "href": "intro.html#neural-networks-a-brief-history",
+ "title": "1 Your Deep Learning Journey",
+ "section": "1.2 Neural Networks: A Brief History",
+ "text": "1.2 Neural Networks: A Brief History\nIn 1943 Warren McCulloch, a neurophysiologist, and Walter Pitts, a logician, teamed up to develop a mathematical model of an artificial neuron. In their paper “A Logical Calculus of the Ideas Immanent in Nervous Activity” they declared that:\n\nBecause of the “all-or-none” character of nervous activity, neural events and the relations among them can be treated by means of propositional logic. It is found that the behavior of every net can be described in these terms.\n\nMcCulloch and Pitts realized that a simplified model of a real neuron could be represented using simple addition and thresholding, as shown in Figure 1.1. Pitts was self-taught, and by age 12, had received an offer to study at Cambridge University with the great Bertrand Russell. He did not take up this invitation, and indeed throughout his life did not accept any offers of advanced degrees or positions of authority. Most of his famous work was done while he was homeless. Despite his lack of an officially recognized position and increasing social isolation, his work with McCulloch was influential, and was taken up by a psychologist named Frank Rosenblatt.\n\n\n\n\n\n\nFigure 1.1: Natural and artificial neurons\n\n\n\nRosenblatt further developed the artificial neuron to give it the ability to learn. Even more importantly, he worked on building the first device that actually used these principles, the Mark I Perceptron. In “The Design of an Intelligent Automaton” Rosenblatt wrote about this work: “We are now about to witness the birth of such a machine–-a machine capable of perceiving, recognizing and identifying its surroundings without any human training or control.” The perceptron was built, and was able to successfully recognize simple shapes.\nAn MIT professor named Marvin Minsky (who was a grade behind Rosenblatt at the same high school!), along with Seymour Papert, wrote a book called Perceptrons (MIT Press), about Rosenblatt’s invention. They showed that a single layer of these devices was unable to learn some simple but critical mathematical functions (such as XOR). In the same book, they also showed that using multiple layers of the devices would allow these limitations to be addressed. Unfortunately, only the first of these insights was widely recognized. As a result, the global academic community nearly entirely gave up on neural networks for the next two decades.\nPerhaps the most pivotal work in neural networks in the last 50 years was the multi-volume Parallel Distributed Processing (PDP) by David Rumelhart, James McClellan, and the PDP Research Group, released in 1986 by MIT Press. Chapter 1 lays out a similar hope to that shown by Rosenblatt:\n\nPeople are smarter than today’s computers because the brain employs a basic computational architecture that is more suited to deal with a central aspect of the natural information processing tasks that people are so good at. …We will introduce a computational framework for modeling cognitive processes that seems… closer than other frameworks to the style of computation as it might be done by the brain.\n\nThe premise that PDP is using here is that traditional computer programs work very differently to brains, and that might be why computer programs had been (at that point) so bad at doing things that brains find easy (such as recognizing objects in pictures). 
The authors claimed that the PDP approach was “closer than other frameworks” to how the brain works, and therefore it might be better able to handle these kinds of tasks.\nIn fact, the approach laid out in PDP is very similar to the approach used in today’s neural networks. The book defined parallel distributed processing as requiring:\n\nA set of processing units\nA state of activation\nAn output function for each unit\nA pattern of connectivity among units\nA propagation rule for propagating patterns of activities through the network of connectivities\nAn activation rule for combining the inputs impinging on a unit with the current state of that unit to produce an output for the unit\nA learning rule whereby patterns of connectivity are modified by experience\nAn environment within which the system must operate\n\nWe will see in this book that modern neural networks handle each of these requirements.\nIn the 1980’s most models were built with a second layer of neurons, thus avoiding the problem that had been identified by Minsky and Papert (this was their “pattern of connectivity among units,” to use the framework above). And indeed, neural networks were widely used during the ’80s and ’90s for real, practical projects. However, again a misunderstanding of the theoretical issues held back the field. In theory, adding just one extra layer of neurons was enough to allow any mathematical function to be approximated with these neural networks, but in practice such networks were often too big and too slow to be useful.\nAlthough researchers showed 30 years ago that to get practical good performance you need to use even more layers of neurons, it is only in the last decade that this principle has been more widely appreciated and applied. Neural networks are now finally living up to their potential, thanks to the use of more layers, coupled with the capacity to do so due to improvements in computer hardware, increases in data availability, and algorithmic tweaks that allow neural networks to be trained faster and more easily. We now have what Rosenblatt promised: “a machine capable of perceiving, recognizing, and identifying its surroundings without any human training or control.”\nThis is what you will learn how to build in this book. But first, since we are going to be spending a lot of time together, let’s get to know each other a bit…",
+ "crumbs": [
+ "Getting Started",
+ "1Your Deep Learning Journey"
+ ]
+ },
+ {
+ "objectID": "intro.html#who-we-are",
+ "href": "intro.html#who-we-are",
+ "title": "1 Your Deep Learning Journey",
+ "section": "1.3 Who We Are",
+ "text": "1.3 Who We Are\nWe are Sylvain and Jeremy, your guides on this journey. We hope that you will find us well suited for this position.\nJeremy has been using and teaching machine learning for around 30 years. He started using neural networks 25 years ago. During this time, he has led many companies and projects that have machine learning at their core, including founding the first company to focus on deep learning and medicine, Enlitic, and taking on the role of President and Chief Scientist of the world’s largest machine learning community, Kaggle. He is the co-founder, along with Dr. Rachel Thomas, of fast.ai, the organization that built the course this book is based on.\nFrom time to time you will hear directly from us, in sidebars like this one from Jeremy:\n\nJ: Hi everybody, I’m Jeremy! You might be interested to know that I do not have any formal technical education. I completed a BA, with a major in philosophy, and didn’t have great grades. I was much more interested in doing real projects, rather than theoretical studies, so I worked full time at a management consulting firm called McKinsey and Company throughout my university years. If you’re somebody who would rather get their hands dirty building stuff than spend years learning abstract concepts, then you will understand where I am coming from! Look out for sidebars from me to find information most suited to people with a less mathematical or formal technical background—that is, people like me…\n\nSylvain, on the other hand, knows a lot about formal technical education. In fact, he has written 10 math textbooks, covering the entire advanced French maths curriculum!\n\nS: Unlike Jeremy, I have not spent many years coding and applying machine learning algorithms. Rather, I recently came to the machine learning world, by watching Jeremy’s fast.ai course videos. So, if you are somebody who has not opened a terminal and written commands at the command line, then you will understand where I am coming from! Look out for sidebars from me to find information most suited to people with a more mathematical or formal technical background, but less real-world coding experience—that is, people like me…\n\nThe fast.ai course has been studied by hundreds of thousands of students, from all walks of life, from all parts of the world. Sylvain stood out as the most impressive student of the course that Jeremy had ever seen, which led to him joining fast.ai, and then becoming the coauthor, along with Jeremy, of the fastai software library.\nAll this means that between us you have the best of both worlds: the people who know more about the software than anybody else, because they wrote it; an expert on math, and an expert on coding and machine learning; and also people who understand both what it feels like to be a relative outsider in math, and a relative outsider in coding and machine learning.\nAnybody who has watched sports knows that if you have a two-person commentary team then you also need a third person to do “special comments.” Our special commentator is Alexis Gallagher. Alexis has a very diverse background: he has been a researcher in mathematical biology, a screenplay writer, an improv performer, a McKinsey consultant (like Jeremy!), a Swift coder, and a CTO.\n\nA: I’ve decided it’s time for me to learn about this AI stuff! After all, I’ve tried pretty much everything else… But I don’t really have a background in building machine learning models. Still… how hard can it be? 
I’m going to be learning throughout this book, just like you are. Look out for my sidebars for learning tips that I found helpful on my journey, and hopefully you will find helpful too.",
+ "crumbs": [
+ "Getting Started",
+ "1Your Deep Learning Journey"
+ ]
+ },
+ {
+ "objectID": "intro.html#how-to-learn-deep-learning",
+ "href": "intro.html#how-to-learn-deep-learning",
+ "title": "1 Your Deep Learning Journey",
+ "section": "1.4 How to Learn Deep Learning",
+ "text": "1.4 How to Learn Deep Learning\nHarvard professor David Perkins, who wrote Making Learning Whole (Jossey-Bass), has much to say about teaching. The basic idea is to teach the whole game. That means that if you’re teaching baseball, you first take people to a baseball game or get them to play it. You don’t teach them how to wind twine to make a baseball from scratch, the physics of a parabola, or the coefficient of friction of a ball on a bat.\nPaul Lockhart, a Columbia math PhD, former Brown professor, and K-12 math teacher, imagines in the influential essay “A Mathematician’s Lament” a nightmare world where music and art are taught the way math is taught. Children are not allowed to listen to or play music until they have spent over a decade mastering music notation and theory, spending classes transposing sheet music into a different key. In art class, students study colors and applicators, but aren’t allowed to actually paint until college. Sound absurd? This is how math is taught–-we require students to spend years doing rote memorization and learning dry, disconnected fundamentals that we claim will pay off later, long after most of them quit the subject.\nUnfortunately, this is where many teaching resources on deep learning begin–-asking learners to follow along with the definition of the Hessian and theorems for the Taylor approximation of your loss functions, without ever giving examples of actual working code. We’re not knocking calculus. We love calculus, and Sylvain has even taught it at the college level, but we don’t think it’s the best place to start when learning deep learning!\nIn deep learning, it really helps if you have the motivation to fix your model to get it to do better. That’s when you start learning the relevant theory. But you need to have the model in the first place. We teach almost everything through real examples. As we build out those examples, we go deeper and deeper, and we’ll show you how to make your projects better and better. This means that you’ll be gradually learning all the theoretical foundations you need, in context, in such a way that you’ll see why it matters and how it works.\nSo, here’s our commitment to you. Throughout this book, we will follow these principles:\n\nTeaching the whole game. We’ll start by showing how to use a complete, working, very usable, state-of-the-art deep learning network to solve real-world problems, using simple, expressive tools. And then we’ll gradually dig deeper and deeper into understanding how those tools are made, and how the tools that make those tools are made, and so on…\nAlways teaching through examples. We’ll ensure that there is a context and a purpose that you can understand intuitively, rather than starting with algebraic symbol manipulation.\nSimplifying as much as possible. We’ve spent years building tools and teaching methods that make previously complex topics very simple.\nRemoving barriers. Deep learning has, until now, been a very exclusive game. We’re breaking it open, and ensuring that everyone can play.\n\nThe hardest part of deep learning is artisanal: how do you know if you’ve got enough data, whether it is in the right format, if your model is training properly, and, if it’s not, what you should do about it? That is why we believe in learning by doing. As with basic data science skills, with deep learning you only get better through practical experience. Trying to spend too much time on the theory can be counterproductive. 
The key is to just code and try to solve problems: the theory can come later, when you have context and motivation.\nThere will be times when the journey will feel hard. Times where you feel stuck. Don’t give up! Rewind through the book to find the last bit where you definitely weren’t stuck, and then read slowly through from there to find the first thing that isn’t clear. Then try some code experiments yourself, and Google around for more tutorials on whatever the issue you’re stuck with is—often you’ll find some different angle on the material might help it to click. Also, it’s expected and normal to not understand everything (especially the code) on first reading. Trying to understand the material serially before proceeding can sometimes be hard. Sometimes things click into place after you get more context from parts down the road, from having a bigger picture. So if you do get stuck on a section, try moving on anyway and make a note to come back to it later.\nRemember, you don’t need any particular academic background to succeed at deep learning. Many important breakthroughs are made in research and industry by folks without a PhD, such as “Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks”—one of the most influential papers of the last decade—with over 5,000 citations, which was written by Alec Radford when he was an undergraduate. Even at Tesla, where they’re trying to solve the extremely tough challenge of making a self-driving car, CEO Elon Musk says:\n\nA PhD is definitely not required. All that matters is a deep understanding of AI & ability to implement NNs in a way that is actually useful (latter point is what’s truly hard). Don’t care if you even graduated high school.\n\nWhat you will need to do to succeed however is to apply what you learn in this book to a personal project, and always persevere.\n\nYour Projects and Your Mindset\nWhether you’re excited to identify if plants are diseased from pictures of their leaves, auto-generate knitting patterns, diagnose TB from X-rays, or determine when a raccoon is using your cat door, we will get you using deep learning on your own problems (via pre-trained models from others) as quickly as possible, and then will progressively drill into more details. You’ll learn how to use deep learning to solve your own problems at state-of-the-art accuracy within the first 30 minutes of the next chapter! (And feel free to skip straight there now if you’re dying to get coding right away.) There is a pernicious myth out there that you need to have computing resources and datasets the size of those at Google to be able to do deep learning, but it’s not true.\nSo, what sorts of tasks make for good test cases? You could train your model to distinguish between Picasso and Monet paintings or to pick out pictures of your daughter instead of pictures of your son. It helps to focus on your hobbies and passions–-setting yourself four or five little projects rather than striving to solve a big, grand problem tends to work better when you’re getting started. Since it is easy to get stuck, trying to be too ambitious too early can often backfire. Then, once you’ve got the basics mastered, aim to complete something you’re really proud of!\n\nJ: Deep learning can be set to work on almost any problem. For instance, my first startup was a company called FastMail, which provided enhanced email services when it launched in 1999 (and still does to this day). 
In 2002 I set it up to use a primitive form of deep learning, single-layer neural networks, to help categorize emails and stop customers from receiving spam.

Common character traits in the people who do well at deep learning include playfulness and curiosity. The late physicist Richard Feynman is an example of someone we’d expect to be great at deep learning: his understanding of the movement of subatomic particles grew out of his amusement at how plates wobble when they spin in the air.
Let’s now focus on what you will learn, starting with the software.
+ "crumbs": [
+ "Getting Started",
+ "1Your Deep Learning Journey"
+ ]
+ },
+ {
+ "objectID": "intro.html#the-software-pytorch-fastai-and-jupyter",
+ "href": "intro.html#the-software-pytorch-fastai-and-jupyter",
+ "title": "1 Your Deep Learning Journey",
+ "section": "1.5 The Software: PyTorch, fastai, and Jupyter",
+ "text": "1.5 The Software: PyTorch, fastai, and Jupyter\n(And Why It Doesn’t Matter)\nWe’ve completed hundreds of machine learning projects using dozens of different packages, and many different programming languages. At fast.ai, we have written courses using most of the main deep learning and machine learning packages used today. After PyTorch came out in 2017 we spent over a thousand hours testing it before deciding that we would use it for future courses, software development, and research. Since that time PyTorch has become the world’s fastest-growing deep learning library and is already used for most research papers at top conferences. This is generally a leading indicator of usage in industry, because these are the papers that end up getting used in products and services commercially. We have found that PyTorch is the most flexible and expressive library for deep learning. It does not trade off speed for simplicity, but provides both.\nPyTorch works best as a low-level foundation library, providing the basic operations for higher-level functionality. The fastai library is the most popular library for adding this higher-level functionality on top of PyTorch. It’s also particularly well suited to the purposes of this book, because it is unique in providing a deeply layered software architecture (there’s even a peer-reviewed academic paper about this layered API). In this book, as we go deeper and deeper into the foundations of deep learning, we will also go deeper and deeper into the layers of fastai. This book covers version 2 of the fastai library, which is a from-scratch rewrite providing many unique features.\nHowever, it doesn’t really matter what software you learn, because it takes only a few days to learn to switch from one library to another. What really matters is learning the deep learning foundations and techniques properly. Our focus will be on using code that clearly expresses the concepts that you need to learn. Where we are teaching high-level concepts, we will use high-level fastai code. Where we are teaching low-level concepts, we will use low-level PyTorch, or even pure Python code.\nIf it feels like new deep learning libraries are appearing at a rapid pace nowadays, then you need to be prepared for a much faster rate of change in the coming months and years. As more people enter the field, they will bring more skills and ideas, and try more things. You should assume that whatever specific libraries and software you learn today will be obsolete in a year or two. Just think about the number of changes in libraries and technology stacks that occur all the time in the world of web programming—a much more mature and slow-growing area than deep learning. We strongly believe that the focus in learning needs to be on understanding the underlying techniques and how to apply them in practice, and how to quickly build expertise in new tools and techniques as they are released.\nBy the end of the book, you’ll understand nearly all the code that’s inside fastai (and much of PyTorch too), because in each chapter we’ll be digging a level deeper to show you exactly what’s going on as we build and train our models. This means that you’ll have learned the most important best practices used in modern deep learning—not just how to use them, but how they really work and are implemented. 
If you want to use those approaches in another framework, you’ll have the knowledge you need to do so.
Since the most important thing for learning deep learning is writing code and experimenting, it’s important that you have a great platform for experimenting with code. The most popular programming experimentation platform is called Jupyter. This is what we will be using throughout this book. We will show you how you can use Jupyter to train and experiment with models and to introspect every stage of the data pre-processing and model development pipeline. Jupyter Notebook is the most popular tool for doing data science in Python, for good reason: it is powerful, flexible, and easy to use. We think you will love it!
Let’s see it in practice and train our first model.
+ "crumbs": [
+ "Getting Started",
+ "1Your Deep Learning Journey"
+ ]
+ },
+ {
+ "objectID": "intro.html#your-first-model",
+ "href": "intro.html#your-first-model",
+ "title": "1 Your Deep Learning Journey",
+ "section": "1.6 Your First Model",
+ "text": "1.6 Your First Model\nAs we said before, we will teach you how to do things before we explain why they work. Following this top-down approach, we will begin by actually training an image classifier to recognize dogs and cats with almost 100% accuracy. To train this model and run our experiments, you will need to do some initial setup. Don’t worry, it’s not as hard as it looks.\n\ns: Do not skip the setup part even if it looks intimidating at first, especially if you have little or no experience using things like a terminal or the command line. Most of that is actually not necessary and you will find that the easiest servers can be set up with just your usual web browser. It is crucial that you run your own experiments in parallel with this book in order to learn.\n\n\nGetting a GPU Deep Learning Server\nTo do nearly everything in this book, you’ll need access to a computer with an NVIDIA GPU (unfortunately other brands of GPU are not fully supported by the main deep learning libraries). However, we don’t recommend you buy one; in fact, even if you already have one, we don’t suggest you use it just yet! Setting up a computer takes time and energy, and you want all your energy to focus on deep learning right now. Therefore, we instead suggest you rent access to a computer that already has everything you need preinstalled and ready to go. Costs can be as little as US$0.25 per hour while you’re using it, and some options are even free.\n\njargon: Graphics Processing Unit (GPU): Also known as a graphics card. A special kind of processor in your computer that can handle thousands of single tasks at the same time, especially designed for displaying 3D environments on a computer for playing games. These same basic tasks are very similar to what neural networks do, such that GPUs can run neural networks hundreds of times faster than regular CPUs. All modern computers contain a GPU, but few contain the right kind of GPU necessary for deep learning.\n\nThe best choice of GPU servers to use with this book will change over time, as companies come and go and prices change. We maintain a list of our recommended options on the book’s website, so go there now and follow the instructions to get connected to a GPU deep learning server. Don’t worry, it only takes about two minutes to get set up on most platforms, and many don’t even require any payment, or even a credit card, to get started.\n\nA: My two cents: heed this advice! If you like computers you will be tempted to set up your own box. Beware! It is feasible but surprisingly involved and distracting. There is a good reason this book is not titled, Everything You Ever Wanted to Know About Ubuntu System Administration, NVIDIA Driver Installation, apt-get, conda, pip, and Jupyter Notebook Configuration. That would be a book of its own. Having designed and deployed our production machine learning infrastructure at work, I can testify it has its satisfactions, but it is as unrelated to modeling as maintaining an airplane is to flying one.\n\nEach option shown on the website includes a tutorial; after completing the tutorial, you will end up with a screen looking like Figure 1.2.\n\n\n\n\n\n\nFigure 1.2: Initial view of Jupyter Notebook\n\n\n\nYou are now ready to run your first Jupyter notebook!\n\njargon: Jupyter Notebook: A piece of software that allows you to include formatted text, code, images, videos, and much more, all within a single interactive document. 
Jupyter received the highest honor for software, the ACM Software System Award, thanks to its wide use and enormous impact in many academic fields and in industry. Jupyter Notebook is the software most widely used by data scientists for developing and interacting with deep learning models.\n\n\n\nRunning Your First Notebook\nThe notebooks are labeled by chapter and then by notebook number, so that they are in the same order as they are presented in this book. So, the very first notebook you will see listed is the notebook that you need to use now. You will be using this notebook to train a model that can recognize dog and cat photos. To do this, you’ll be downloading a dataset of dog and cat photos, and using that to train a model. A dataset is simply a bunch of data—it could be images, emails, financial indicators, sounds, or anything else. There are many datasets made freely available that are suitable for training models. Many of these datasets are created by academics to help advance research, many are made available for competitions (there are competitions where data scientists can compete to see who has the most accurate model!), and some are by-products of other processes (such as financial filings).\n\nnote: Full and Stripped Notebooks: There are two folders containing different versions of the notebooks. The full folder contains the exact notebooks used to create the book you’re reading now, with all the prose and outputs. The stripped version has the same headings and code cells, but all outputs and prose have been removed. After reading a section of the book, we recommend working through the stripped notebooks, with the book closed, and seeing if you can figure out what each cell will show before you execute it. Also try to recall what the code is demonstrating.\n\nTo open a notebook, just click on it. The notebook will open, and it will look something like Figure 1.3 (note that there may be slight differences in details across different platforms; you can ignore those differences).\n\n\n\n\n\n\nFigure 1.3: A Jupyter notebook\n\n\n\nA notebook consists of cells. There are two main types of cell:\n\nCells containing formatted text, images, and so forth. These use a format called markdown, which you will learn about soon.\nCells containing code that can be executed, and outputs will appear immediately underneath (which could be plain text, tables, images, animations, sounds, or even interactive applications).\n\nJupyter notebooks can be in one of two modes: edit mode or command mode. In edit mode typing on your keyboard enters the letters into the cell in the usual way. However, in command mode, you will not see any flashing cursor, and the keys on your keyboard will each have a special function.\nBefore continuing, press the Escape key on your keyboard to switch to command mode (if you are already in command mode, this does nothing, so press it now just in case). To see a complete list of all of the functions available, press H; press Escape to remove this help screen. Notice that in command mode, unlike most programs, commands do not require you to hold down Control, Alt, or similar—you simply press the required letter key.\nYou can make a copy of a cell by pressing C (the cell needs to be selected first, indicated with an outline around it; if it is not already selected, click on it once). Then press V to paste a copy of it.\nClick on the cell that begins with the line “# CLICK ME” to select it. 
The first character in that line indicates that what follows is a comment in Python, so it is ignored when executing the cell. The rest of the cell is, believe it or not, a complete system for creating and training a state-of-the-art model for recognizing cats versus dogs. So, let’s train it now! To do so, just press Shift-Enter on your keyboard, or press the Play button on the toolbar. Then wait a few minutes while the following things happen:\n\nA dataset called the Oxford-IIIT Pet Dataset that contains 7,349 images of cats and dogs from 37 different breeds will be downloaded from the fast.ai datasets collection to the GPU server you are using, and will then be extracted.\nA pretrained model that has already been trained on 1.3 million images, using a competition-winning model will be downloaded from the internet.\nThe pretrained model will be fine-tuned using the latest advances in transfer learning, to create a model that is specially customized for recognizing dogs and cats.\n\nThe first two steps only need to be run once on your GPU server. If you run the cell again, it will use the dataset and model that have already been downloaded, rather than downloading them again. Let’s take a look at the contents of the cell, and the results:\n\n# CLICK ME\nfrom fastai.vision.all import *\npath = untar_data(URLs.PETS)/'images'\n\ndef is_cat(x): return x[0].isupper()\ndls = ImageDataLoaders.from_name_func(\n path, get_image_files(path), valid_pct=0.2, seed=42,\n label_func=is_cat, item_tfms=Resize(224))\n\nlearn = vision_learner(dls, resnet34, metrics=error_rate)\nlearn.fine_tune(1)\n\n\n\n\nepoch\ntrain_loss\nvalid_loss\nerror_rate\ntime\n\n\n\n\n0\n0.180385\n0.023942\n0.006766\n00:16\n\n\n\n\n\n\n\n\nepoch\ntrain_loss\nvalid_loss\nerror_rate\ntime\n\n\n\n\n0\n0.056023\n0.007580\n0.004060\n00:20\n\n\n\n\n\nYou will probably not see exactly the same results that are in the book. There are a lot of sources of small random variation involved in training models. We generally see an error rate of well less than 0.02 in this example, however.\n\nimportant: Training Time: Depending on your network speed, it might take a few minutes to download the pretrained model and dataset. Running fine_tune might take a minute or so. Often models in this book take a few minutes to train, as will your own models, so it’s a good idea to come up with good techniques to make the most of this time. For instance, keep reading the next section while your model trains, or open up another notebook and use it for some coding experiments.\n\n\n\nSidebar: This Book Was Written in Jupyter Notebooks\nWe wrote this book using Jupyter notebooks, so for nearly every chart, table, and calculation in this book, we’ll be showing you the exact code required to replicate it yourself. That’s why very often in this book, you will see some code immediately followed by a table, a picture or just some text. If you go on the book’s website you will find all the code, and you can try running and modifying every example yourself.\nYou just saw how a cell that outputs a table looks inside the book. Here is an example of a cell that outputs text:\n\n1+1\n\n2\n\n\nJupyter will always print or show the result of the last line (if there is one). For instance, here is an example of a cell that outputs an image:\n\nimg = PILImage.create(image_cat())\nimg.to_thumb(192)\n\n\n\n\n\n\n\n\n\n\nEnd sidebar\nSo, how do we know if this model is any good? 
In the last column of the table you can see the error rate, which is the proportion of images that were incorrectly identified. The error rate serves as our metric—our measure of model quality, chosen to be intuitive and comprehensible. As you can see, the model is nearly perfect, even though the training time was only a few seconds (not including the one-time downloading of the dataset and the pretrained model). In fact, the accuracy you’ve achieved already is far better than anybody had ever achieved just 10 years ago!\nFinally, let’s check that this model actually works. Go and get a photo of a dog, or a cat; if you don’t have one handy, just search Google Images and download an image that you find there. Now execute the cell with uploader defined. It will output a button you can click, so you can select the image you want to classify:\n\nuploader = widgets.FileUpload()\nuploader\n\n\n\n\n\n\n\n\n\n\nNow you can pass the uploaded file to the model. Make sure that it is a clear photo of a single dog or a cat, and not a line drawing, cartoon, or similar. The notebook will tell you whether it thinks it is a dog or a cat, and how confident it is. Hopefully, you’ll find that your model did a great job:\n\nimg = PILImage.create(uploader.data[0])\nis_cat,_,probs = learn.predict(img)\nprint(f\"Is this a cat?: {is_cat}.\")\nprint(f\"Probability it's a cat: {probs[1].item():.6f}\")\n\n\n\n\nIs this a cat?: True.\nProbability it's a cat: 1.000000\n\n\nCongratulations on your first classifier!\nBut what does this mean? What did you actually do? In order to explain this, let’s zoom out again to take in the big picture.\n\n\nWhat Is Machine Learning?\nYour classifier is a deep learning model. As was already mentioned, deep learning models use neural networks, which originally date from the 1950s and have become powerful very recently thanks to recent advancements.\nAnother key piece of context is that deep learning is just a modern area in the more general discipline of machine learning. To understand the essence of what you did when you trained your own classification model, you don’t need to understand deep learning. It is enough to see how your model and your training process are examples of the concepts that apply to machine learning in general.\nSo in this section, we will describe what machine learning is. We will look at the key concepts, and show how they can be traced back to the original essay that introduced them.\nMachine learning is, like regular programming, a way to get computers to complete a specific task. But how would we use regular programming to do what we just did in the last section: recognize dogs versus cats in photos? We would have to write down for the computer the exact steps necessary to complete the task.\nNormally, it’s easy enough for us to write down the steps to complete a task when we’re writing a program. We just think about the steps we’d take if we had to do the task by hand, and then we translate them into code. For instance, we can write a function that sorts a list. In general, we’d write a function that looks something like Figure 1.5 (where inputs might be an unsorted list, and results a sorted list).\n\n\n\n\n\n\n\n\nFigure 1.5: A traditional program\n\n\n\n\n\nBut for recognizing objects in a photo that’s a bit tricky; what are the steps we take when we recognize an object in a picture? 
We really don’t know, since it all happens in our brain without us being consciously aware of it!\nRight back at the dawn of computing, in 1949, an IBM researcher named Arthur Samuel started working on a different way to get computers to complete tasks, which he called machine learning. In his classic 1962 essay “Artificial Intelligence: A Frontier of Automation”, he wrote:\n\nProgramming a computer for such computations is, at best, a difficult task, not primarily because of any inherent complexity in the computer itself but, rather, because of the need to spell out every minute step of the process in the most exasperating detail. Computers, as any programmer will tell you, are giant morons, not giant brains.\n\nHis basic idea was this: instead of telling the computer the exact steps required to solve a problem, show it examples of the problem to solve, and let it figure out how to solve it itself. This turned out to be very effective: by 1961 his checkers-playing program had learned so much that it beat the Connecticut state champion! Here’s how he described his idea (from the same essay as above):\n\nSuppose we arrange for some automatic means of testing the effectiveness of any current weight assignment in terms of actual performance and provide a mechanism for altering the weight assignment so as to maximize the performance. We need not go into the details of such a procedure to see that it could be made entirely automatic and to see that a machine so programmed would “learn” from its experience.\n\nThere are a number of powerful concepts embedded in this short statement:\n\nThe idea of a “weight assignment”\nThe fact that every weight assignment has some “actual performance”\nThe requirement that there be an “automatic means” of testing that performance,\n\nThe need for a “mechanism” (i.e., another automatic process) for improving the performance by changing the weight assignments\n\nLet us take these concepts one by one, in order to understand how they fit together in practice. First, we need to understand what Samuel means by a weight assignment.\nWeights are just variables, and a weight assignment is a particular choice of values for those variables. The program’s inputs are values that it processes in order to produce its results—for instance, taking image pixels as inputs, and returning the classification “dog” as a result. The program’s weight assignments are other values that define how the program will operate.\nSince they will affect the program they are in a sense another kind of input, so we will update our basic picture in Figure 1.5 and replace it with Figure 1.6 in order to take this into account.\n\n\n\n\n\n\n\n\nFigure 1.6: A program using weight assignment\n\n\n\n\n\nWe’ve changed the name of our box from program to model. This is to follow modern terminology and to reflect that the model is a special kind of program: it’s one that can do many different things, depending on the weights. It can be implemented in many different ways. For instance, in Samuel’s checkers program, different values of the weights would result in different checkers-playing strategies.\n(By the way, what Samuel called “weights” are most generally referred to as model parameters these days, in case you have encountered that term. The term weights is reserved for a particular type of model parameter.)\nNext, Samuel said we need an automatic means of testing the effectiveness of any current weight assignment in terms of actual performance. 
In the case of his checkers program, the “actual performance” of a model would be how well it plays. And you could automatically test the performance of two models by setting them to play against each other, and seeing which one usually wins.\nFinally, he says we need a mechanism for altering the weight assignment so as to maximize the performance. For instance, we could look at the difference in weights between the winning model and the losing model, and adjust the weights a little further in the winning direction.\nWe can now see why he said that such a procedure could be made entirely automatic and… a machine so programmed would “learn” from its experience. Learning would become entirely automatic when the adjustment of the weights was also automatic—when instead of us improving a model by adjusting its weights manually, we relied on an automated mechanism that produced adjustments based on performance.\nFigure 1.7 shows the full picture of Samuel’s idea of training a machine learning model.\n\n\n\n\n\n\n\n\nFigure 1.7: Training a machine learning model\n\n\n\n\n\nNotice the distinction between the model’s results (e.g., the moves in a checkers game) and its performance (e.g., whether it wins the game, or how quickly it wins).\nAlso note that once the model is trained—that is, once we’ve chosen our final, best, favorite weight assignment—then we can think of the weights as being part of the model, since we’re not varying them any more.\nTherefore, actually using a model after it’s trained looks like Figure 1.8.\n\n\n\n\n\n\n\n\nFigure 1.8: Using a trained model as a program\n\n\n\n\n\nThis looks identical to our original diagram in Figure 1.5, just with the word program replaced with model. This is an important insight: a trained model can be treated just like a regular computer program.\n\njargon: Machine Learning: The training of programs developed by allowing a computer to learn from its experience, rather than through manually coding the individual steps.\n\n\n\nWhat Is a Neural Network?\nIt’s not too hard to imagine what the model might look like for a checkers program. There might be a range of checkers strategies encoded, and some kind of search mechanism, and then the weights could vary how strategies are selected, what parts of the board are focused on during a search, and so forth. But it’s not at all obvious what the model might look like for an image recognition program, or for understanding text, or for many other interesting problems we might imagine.\nWhat we would like is some kind of function that is so flexible that it could be used to solve any given problem, just by varying its weights. Amazingly enough, this function actually exists! It’s the neural network, which we already discussed. That is, if you regard a neural network as a mathematical function, it turns out to be a function which is extremely flexible depending on its weights. A mathematical proof called the universal approximation theorem shows that this function can solve any problem to any level of accuracy, in theory. The fact that neural networks are so flexible means that, in practice, they are often a suitable kind of model, and you can focus your effort on the process of training them—that is, of finding good weight assignments.\nBut what about that process? One could imagine that you might need to find a new “mechanism” for automatically updating weights for every problem. This would be laborious. 
What we’d like here as well is a completely general way to update the weights of a neural network, to make it improve at any given task. Conveniently, this also exists!\nThis is called stochastic gradient descent (SGD). We’ll see how neural networks and SGD work in detail in Chapter 4, as well as explaining the universal approximation theorem. For now, however, we will instead use Samuel’s own words: We need not go into the details of such a procedure to see that it could be made entirely automatic and to see that a machine so programmed would “learn” from its experience.\n\nJ: Don’t worry, neither SGD nor neural nets are mathematically complex. Both nearly entirely rely on addition and multiplication to do their work (but they do a lot of addition and multiplication!). The main reaction we hear from students when they see the details is: “Is that all it is?”\n\nIn other words, to recap, a neural network is a particular kind of machine learning model, which fits right in to Samuel’s original conception. Neural networks are special because they are highly flexible, which means they can solve an unusually wide range of problems just by finding the right weights. This is powerful, because stochastic gradient descent provides us a way to find those weight values automatically.\nHaving zoomed out, let’s now zoom back in and revisit our image classification problem using Samuel’s framework.\nOur inputs are the images. Our weights are the weights in the neural net. Our model is a neural net. Our results are the values that are calculated by the neural net, like “dog” or “cat.”\nWhat about the next piece, an automatic means of testing the effectiveness of any current weight assignment in terms of actual performance? Determining “actual performance” is easy enough: we can simply define our model’s performance as its accuracy at predicting the correct answers.\nPutting this all together, and assuming that SGD is our mechanism for updating the weight assignments, we can see how our image classifier is a machine learning model, much like Samuel envisioned.\n\n\nA Bit of Deep Learning Jargon\nSamuel was working in the 1960s, and since then terminology has changed. 
Here is the modern deep learning terminology for all the pieces we have discussed:\n\nThe functional form of the model is called its architecture (but be careful—sometimes people use model as a synonym of architecture, so this can get confusing).\nThe weights are called parameters.\nThe predictions are calculated from the independent variable, which is the data not including the labels.\nThe results of the model are called predictions.\nThe measure of performance is called the loss.\nThe loss depends not only on the predictions, but also the correct labels (also known as targets or the dependent variable); e.g., “dog” or “cat.”\n\nAfter making these changes, our diagram in Figure 1.7 looks like Figure 1.9.\n\n\n\n\n\n\n\n\nFigure 1.9: Detailed training loop\n\n\n\n\n\n\n\nLimitations Inherent To Machine Learning\nFrom this picture we can now see some fundamental things about training a deep learning model:\n\nA model cannot be created without data.\nA model can only learn to operate on the patterns seen in the input data used to train it.\nThis learning approach only creates predictions, not recommended actions.\nIt’s not enough to just have examples of input data; we need labels for that data too (e.g., pictures of dogs and cats aren’t enough to train a model; we need a label for each one, saying which ones are dogs, and which are cats).\n\nGenerally speaking, we’ve seen that most organizations that say they don’t have enough data, actually mean they don’t have enough labeled data. If any organization is interested in doing something in practice with a model, then presumably they have some inputs they plan to run their model against. And presumably they’ve been doing that some other way for a while (e.g., manually, or with some heuristic program), so they have data from those processes! For instance, a radiology practice will almost certainly have an archive of medical scans (since they need to be able to check how their patients are progressing over time), but those scans may not have structured labels containing a list of diagnoses or interventions (since radiologists generally create free-text natural language reports, not structured data). We’ll be discussing labeling approaches a lot in this book, because it’s such an important issue in practice.\nSince these kinds of machine learning models can only make predictions (i.e., attempt to replicate labels), this can result in a significant gap between organizational goals and model capabilities. For instance, in this book you’ll learn how to create a recommendation system that can predict what products a user might purchase. This is often used in e-commerce, such as to customize products shown on a home page by showing the highest-ranked items. But such a model is generally created by looking at a user and their buying history (inputs) and what they went on to buy or look at (labels), which means that the model is likely to tell you about products the user already has or already knows about, rather than new products that they are most likely to be interested in hearing about. That’s very different to what, say, an expert at your local bookseller might do, where they ask questions to figure out your taste, and then tell you about authors or series that you’ve never heard of before.\nAnother critical insight comes from considering how a model interacts with its environment. This can create feedback loops, as described here:\n\nA predictive policing model is created based on where arrests have been made in the past. 
In practice, this is not actually predicting crime, but rather predicting arrests, and is therefore partially simply reflecting biases in existing policing processes.\nLaw enforcement officers then might use that model to decide where to focus their police activity, resulting in increased arrests in those areas.\nData on these additional arrests would then be fed back in to retrain future versions of the model.\n\nThis is a positive feedback loop, where the more the model is used, the more biased the data becomes, making the model even more biased, and so forth.\nFeedback loops can also create problems in commercial settings. For instance, a video recommendation system might be biased toward recommending content consumed by the biggest watchers of video (e.g., conspiracy theorists and extremists tend to watch more online video content than the average), resulting in those users increasing their video consumption, resulting in more of those kinds of videos being recommended. We’ll consider this topic more in detail in Chapter 3.\nNow that you have seen the base of the theory, let’s go back to our code example and see in detail how the code corresponds to the process we just described.\n\n\nHow Our Image Recognizer Works\nLet’s see just how our image recognizer code maps to these ideas. We’ll put each line into a separate cell, and look at what each one is doing (we won’t explain every detail of every parameter yet, but will give a description of the important bits; full details will come later in the book).\nThe first line imports all of the fastai.vision library.\nfrom fastai.vision.all import *\nThis gives us all of the functions and classes we will need to create a wide variety of computer vision models.\n\nJ: A lot of Python coders recommend avoiding importing a whole library like this (using the import * syntax), because in large software projects it can cause problems. However, for interactive work such as in a Jupyter notebook, it works great. The fastai library is specially designed to support this kind of interactive use, and it will only import the necessary pieces into your environment.\n\nThe second line downloads a standard dataset from the fast.ai datasets collection (if not previously downloaded) to your server, extracts it (if not previously extracted), and returns a Path object with the extracted location:\npath = untar_data(URLs.PETS)/'images'\n\nS: Throughout my time studying at fast.ai, and even still today, I’ve learned a lot about productive coding practices. The fastai library and fast.ai notebooks are full of great little tips that have helped make me a better programmer. For instance, notice that the fastai library doesn’t just return a string containing the path to the dataset, but a Path object. This is a really useful class from the Python 3 standard library that makes accessing files and directories much easier. If you haven’t come across it before, be sure to check out its documentation or a tutorial and try it out. Note that the https://book.fast.ai[website] contains links to recommended tutorials for each chapter. 
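To make the Path idea concrete, here is a tiny sketch of the kind of thing a Path object makes easy. The directory and filename below are illustrative placeholders only; in the notebook you would use the path variable returned by untar_data rather than typing a location by hand:

from pathlib import Path

# Illustrative path only -- the real location is whatever untar_data returned.
p = Path('/root/.fastai/data/oxford-iiit-pet/images')
print(p.name)                                # 'images'
print(p.parent)                              # the enclosing dataset folder
print((p/'great_pyrenees_173.jpg').suffix)   # '.jpg'

The / operator for joining paths is one of the small conveniences the fastai notebooks lean on throughout.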
I’ll keep letting you know about little coding tips I’ve found useful as we come across them.\n\nIn the third line we define a function, is_cat, which labels cats based on a filename rule provided by the dataset creators:\ndef is_cat(x): return x[0].isupper()\nWe use that function in the fourth line, which tells fastai what kind of dataset we have and how it is structured:\ndls = ImageDataLoaders.from_name_func(\n path, get_image_files(path), valid_pct=0.2, seed=42,\n label_func=is_cat, item_tfms=Resize(224))\nThere are various different classes for different kinds of deep learning datasets and problems—here we’re using ImageDataLoaders. The first part of the class name will generally be the type of data you have, such as image, or text.\nThe other important piece of information that we have to tell fastai is how to get the labels from the dataset. Computer vision datasets are normally structured in such a way that the label for an image is part of the filename, or path—most commonly the parent folder name. fastai comes with a number of standardized labeling methods, and ways to write your own. Here we’re telling fastai to use the is_cat function we just defined.\nFinally, we define the Transforms that we need. A Transform contains code that is applied automatically during training; fastai includes many predefined Transforms, and adding new ones is as simple as creating a Python function. There are two kinds: item_tfms are applied to each item (in this case, each item is resized to a 224-pixel square), while batch_tfms are applied to a batch of items at a time using the GPU, so they’re particularly fast (we’ll see many examples of these throughout this book).\nWhy 224 pixels? This is the standard size for historical reasons (old pretrained models require this size exactly), but you can pass pretty much anything. If you increase the size, you’ll often get a model with better results (since it will be able to focus on more details), but at the price of speed and memory consumption; the opposite is true if you decrease the size.\n\nNote: Classification and Regression: classification and regression have very specific meanings in machine learning. These are the two main types of model that we will be investigating in this book. A classification model is one which attempts to predict a class, or category. That is, it’s predicting from a number of discrete possibilities, such as “dog” or “cat.” A regression model is one which attempts to predict one or more numeric quantities, such as a temperature or a location. Sometimes people use the word regression to refer to a particular kind of model called a linear regression model; this is a bad practice, and we won’t be using that terminology in this book!\n\nThe Pet dataset contains 7,390 pictures of dogs and cats, consisting of 37 different breeds. Each image is labeled using its filename: for instance the file great_pyrenees_173.jpg is the 173rd example of an image of a Great Pyrenees breed dog in the dataset. The filenames start with an uppercase letter if the image is a cat, and a lowercase letter otherwise. We have to tell fastai how to get labels from the filenames, which we do by calling from_name_func (which means that labels can be extracted using a function applied to the filename), and passing is_cat, which returns x[0].isupper(), which evaluates to True if the first letter is uppercase (i.e., it’s a cat).\nThe most important parameter to mention here is valid_pct=0.2. 
This tells fastai to hold out 20% of the data and not use it for training the model at all. This 20% of the data is called the validation set; the remaining 80% is called the training set. The validation set is used to measure the accuracy of the model. By default, the 20% that is held out is selected randomly. The parameter seed=42 sets the random seed to the same value every time we run this code, which means we get the same validation set every time we run it—this way, if we change our model and retrain it, we know that any differences are due to the changes to the model, not due to having a different random validation set.\nfastai will always show you your model’s accuracy using only the validation set, never the training set. This is absolutely critical, because if you train a large enough model for a long enough time, it will eventually memorize the label of every item in your dataset! The result will not actually be a useful model, because what we care about is how well our model works on previously unseen images. That is always our goal when creating a model: for it to be useful on data that the model only sees in the future, after it has been trained.\nEven when your model has not fully memorized all your data, earlier on in training it may have memorized certain parts of it. As a result, the longer you train for, the better your accuracy will get on the training set; the validation set accuracy will also improve for a while, but eventually it will start getting worse as the model starts to memorize the training set, rather than finding generalizable underlying patterns in the data. When this happens, we say that the model is overfitting.\nFigure 1.10 shows what happens when you overfit, using a simplified example where we have just one parameter, and some randomly generated data based on the function x**2. As you can see, although the predictions in the overfit model are accurate for data near the observed data points, they are way off when outside of that range.\n\n\n\n\n\n\nFigure 1.10: Example of overfitting\n\n\n\nOverfitting is the single most important and challenging issue when training for all machine learning practitioners, and all algorithms. As you will see, it is very easy to create a model that does a great job at making predictions on the exact data it has been trained on, but it is much harder to make accurate predictions on data the model has never seen before. And of course, this is the data that will actually matter in practice. For instance, if you create a handwritten digit classifier (as we will very soon!) and use it to recognize numbers written on checks, then you are never going to see any of the numbers that the model was trained on—checks will have slightly different variations of writing to deal with. You will learn many methods to avoid overfitting in this book. However, you should only use those methods after you have confirmed that overfitting is actually occurring (i.e., you have actually observed the validation accuracy getting worse during training). We often see practitioners using over-fitting avoidance techniques even when they have enough data that they didn’t need to do so, ending up with a model that may be less accurate than what they could have achieved.\n\nimportant: Validation Set: When you train a model, you must always have both a training set and a validation set, and must measure the accuracy of your model only on the validation set. 
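To see what valid_pct=0.2 and seed=42 amount to, here is a purely conceptual sketch, not fastai’s actual implementation: shuffle the filenames with a fixed random seed, hold out 20% for validation, and train on the rest. The names valid_files and train_files are just illustrative:

import random

# `path` and get_image_files come from the training cell shown earlier.
files = list(get_image_files(path))
random.seed(42)                        # fixed seed -> the same split every run
random.shuffle(files)
cut = int(0.2 * len(files))            # valid_pct=0.2
valid_files, train_files = files[:cut], files[cut:]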
If you train for too long, with not enough data, you will see the accuracy of your model start to get worse; this is called overfitting. fastai defaults valid_pct to 0.2, so even if you forget, fastai will create a validation set for you!\n\nThe fifth line of the code training our image recognizer tells fastai to create a convolutional neural network (CNN) and specifies what architecture to use (i.e. what kind of model to create), what data we want to train it on, and what metric to use:\nlearn = vision_learner(dls, resnet34, metrics=error_rate)\nWhy a CNN? It’s the current state-of-the-art approach to creating computer vision models. We’ll be learning all about how CNNs work in this book. Their structure is inspired by how the human vision system works.\nThere are many different architectures in fastai, which we will introduce in this book (as well as discussing how to create your own). Most of the time, however, picking an architecture isn’t a very important part of the deep learning process. It’s something that academics love to talk about, but in practice it is unlikely to be something you need to spend much time on. There are some standard architectures that work most of the time, and in this case we’re using one called ResNet that we’ll be talking a lot about during the book; it is both fast and accurate for many datasets and problems. The 34 in resnet34 refers to the number of layers in this variant of the architecture (other options are 18, 50, 101, and 152). Models using architectures with more layers take longer to train, and are more prone to overfitting (i.e. you can’t train them for as many epochs before the accuracy on the validation set starts getting worse). On the other hand, when using more data, they can be quite a bit more accurate.\nWhat is a metric? A metric is a function that measures the quality of the model’s predictions using the validation set, and will be printed at the end of each epoch. In this case, we’re using error_rate, which is a function provided by fastai that does just what it says: tells you what percentage of images in the validation set are being classified incorrectly. Another common metric for classification is accuracy (which is just 1.0 - error_rate). fastai provides many more, which will be discussed throughout this book.\nThe concept of a metric may remind you of loss, but there is an important distinction. The entire purpose of loss is to define a “measure of performance” that the training system can use to update weights automatically. In other words, a good choice for loss is a choice that is easy for stochastic gradient descent to use. But a metric is defined for human consumption, so a good metric is one that is easy for you to understand, and that hews as closely as possible to what you want the model to do. At times, you might decide that the loss function is a suitable metric, but that is not necessarily the case.\nvision_learner also has a parameter pretrained, which defaults to True (so it’s used in this case, even though we haven’t specified it), which sets the weights in your model to values that have already been trained by experts to recognize a thousand different categories across 1.3 million photos (using the famous ImageNet dataset). A model that has weights that have already been trained on some other dataset is called a pretrained model. You should nearly always use a pretrained model, because it means that your model, before you’ve even shown it any of your data, is already very capable. 
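If you are curious how much the pretrained weights help, you can turn them off. The sketch below reuses the same dls from before and is for comparison only; learn_scratch is just an illustrative name, and you should expect a much worse error rate for the same amount of training:

# Same architecture, but starting from random weights instead of the
# ImageNet-pretrained ones. `accuracy` is another fastai metric (1.0 - error_rate).
learn_scratch = vision_learner(dls, resnet34, metrics=[error_rate, accuracy],
                               pretrained=False)
learn_scratch.fit_one_cycle(1)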
And, as you’ll see, in a deep learning model many of these capabilities are things you’ll need, almost regardless of the details of your project. For instance, parts of pretrained models will handle edge, gradient, and color detection, which are needed for many tasks.\nWhen using a pretrained model, vision_learner will remove the last layer, since that is always specifically customized to the original training task (i.e. ImageNet dataset classification), and replace it with one or more new layers with randomized weights, of an appropriate size for the dataset you are working with. This last part of the model is known as the head.\nUsing pretrained models is the most important method we have to allow us to train more accurate models, more quickly, with less data, and less time and money. You might think that would mean that using pretrained models would be the most studied area in academic deep learning… but you’d be very, very wrong! The importance of pretrained models is generally not recognized or discussed in most courses, books, or software library features, and is rarely considered in academic papers. As we write this at the start of 2020, things are just starting to change, but it’s likely to take a while. So be careful: most people you speak to will probably greatly underestimate what you can do in deep learning with few resources, because they probably won’t deeply understand how to use pretrained models.\nUsing a pretrained model for a task different to what it was originally trained for is known as transfer learning. Unfortunately, because transfer learning is so under-studied, few domains have pretrained models available. For instance, there are currently few pretrained models available in medicine, making transfer learning challenging to use in that domain. In addition, it is not yet well understood how to use transfer learning for tasks such as time series analysis.\n\njargon: Transfer learning: Using a pretrained model for a task different to what it was originally trained for.\n\nThe sixth line of our code tells fastai how to fit the model:\nlearn.fine_tune(1)\nAs we’ve discussed, the architecture only describes a template for a mathematical function; it doesn’t actually do anything until we provide values for the millions of parameters it contains.\nThis is the key to deep learning—determining how to fit the parameters of a model to get it to solve your problem. In order to fit a model, we have to provide at least one piece of information: how many times to look at each image (known as number of epochs). The number of epochs you select will largely depend on how much time you have available, and how long you find it takes in practice to fit your model. If you select a number that is too small, you can always train for more epochs later.\nBut why is the method called fine_tune, and not fit? fastai actually does have a method called fit, which does indeed fit a model (i.e. look at images in the training set multiple times, each time updating the parameters to make the predictions closer and closer to the target labels). But in this case, we’ve started with a pretrained model, and we don’t want to throw away all those capabilities that it already has. 
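In rough terms, what fine_tune does can be sketched as follows; this is a simplification, not the exact implementation, which also chooses suitable learning rates for each stage:

# Simplified sketch of the idea behind fine_tune:
learn.freeze()            # only the new, randomly initialized head is trainable
learn.fit_one_cycle(1)    # one epoch to get the head working with this data
learn.unfreeze()          # now allow every layer to be updated
learn.fit_one_cycle(1)    # the number of epochs you passed to fine_tune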
As you’ll learn in this book, there are some important tricks to adapt a pretrained model for a new dataset—a process called fine-tuning.\n\njargon: Fine-tuning: A transfer learning technique where the parameters of a pretrained model are updated by training for additional epochs using a different task to that used for pretraining.\n\nWhen you use the fine_tune method, fastai will use these tricks for you. There are a few parameters you can set (which we’ll discuss later), but in the default form shown here, it does two steps:\n\nUse one epoch to fit just those parts of the model necessary to get the new random head to work correctly with your dataset.\nUse the number of epochs requested when calling the method to fit the entire model, updating the weights of the later layers (especially the head) faster than the earlier layers (which, as we’ll see, generally don’t require many changes from the pretrained weights).\n\nThe head of a model is the part that is newly added to be specific to the new dataset. An epoch is one complete pass through the dataset. After calling fit, the results after each epoch are printed, showing the epoch number, the training and validation set losses (the “measure of performance” used for training the model), and any metrics you’ve requested (error rate, in this case).\nSo, with all this code our model learned to recognize cats and dogs just from labeled examples. But how did it do it?\n\n\nWhat Our Image Recognizer Learned\nAt this stage we have an image recognizer that is working very well, but we have no idea what it is actually doing! Although many people complain that deep learning results in impenetrable “black box” models (that is, something that gives predictions but that no one can understand), this really couldn’t be further from the truth. There is a vast body of research showing how to deeply inspect deep learning models, and get rich insights from them. Having said that, all kinds of machine learning models (including deep learning, and traditional statistical models) can be challenging to fully understand, especially when considering how they will behave when coming across data that is very different to the data used to train them. We’ll be discussing this issue throughout this book.\nIn 2013 a PhD student, Matt Zeiler, and his supervisor, Rob Fergus, published the paper “Visualizing and Understanding Convolutional Networks”, which showed how to visualize the neural network weights learned in each layer of a model. They carefully analyzed the model that won the 2012 ImageNet competition, and used this analysis to greatly improve the model, such that they were able to go on to win the 2013 competition! Figure 1.11 is the picture that they published of the first layer’s weights.\n\n\n\n\n\n\nFigure 1.11: Activations of the first layer of a CNN (courtesy of Matthew D. Zeiler and Rob Fergus)\n\n\n\nThis picture requires some explanation. For each layer, the image part with the light gray background shows the reconstructed weights pictures, and the larger section at the bottom shows the parts of the training images that most strongly matched each set of weights. For layer 1, what we can see is that the model has discovered weights that represent diagonal, horizontal, and vertical edges, as well as various different gradients. (Note that for each layer only a subset of the features are shown; in practice there are thousands across all of the layers.) These are the basic building blocks that the model has learned for computer vision. 
They have been widely analyzed by neuroscientists and computer vision researchers, and it turns out that these learned building blocks are very similar to the basic visual machinery in the human eye, as well as the handcrafted computer vision features that were developed prior to the days of deep learning. The next layer is represented in Figure 1.12.\n\n\n\n\n\n\nFigure 1.12: Activations of the second layer of a CNN (courtesy of Matthew D. Zeiler and Rob Fergus)\n\n\n\nFor layer 2, there are nine examples of weight reconstructions for each of the features found by the model. We can see that the model has learned to create feature detectors that look for corners, repeating lines, circles, and other simple patterns. These are built from the basic building blocks developed in the first layer. For each of these, the right-hand side of the picture shows small patches from actual images which these features most closely match. For instance, the particular pattern in row 2, column 1 matches the gradients and textures associated with sunsets.\nFigure 1.13 shows the image from the paper showing the results of reconstructing the features of layer 3.\n\n\n\n\n\n\nFigure 1.13: Activations of the third layer of a CNN (courtesy of Matthew D. Zeiler and Rob Fergus)\n\n\n\nAs you can see by looking at the righthand side of this picture, the features are now able to identify and match with higher-level semantic components, such as car wheels, text, and flower petals. Using these components, layers four and five can identify even higher-level concepts, as shown in Figure 1.14.\n\n\n\n\n\n\nFigure 1.14: Activations of layers 4 and 5 of a CNN (courtesy of Matthew D. Zeiler and Rob Fergus)\n\n\n\nThis article was studying an older model called AlexNet that only contained five layers. Networks developed since then can have hundreds of layers—so you can imagine how rich the features developed by these models can be!\nWhen we fine-tuned our pretrained model earlier, we adapted what those last layers focus on (flowers, humans, animals) to specialize on the cats versus dogs problem. More generally, we could specialize such a pretrained model on many different tasks. Let’s have a look at some examples.\n\n\nImage Recognizers Can Tackle Non-Image Tasks\nAn image recognizer can, as its name suggests, only recognize images. But a lot of things can be represented as images, which means that an image recogniser can learn to complete many tasks.\nFor instance, a sound can be converted to a spectrogram, which is a chart that shows the amount of each frequency at each time in an audio file. Fast.ai student Ethan Sutin used this approach to easily beat the published accuracy of a state-of-the-art environmental sound detection model using a dataset of 8,732 urban sounds. fastai’s show_batch clearly shows how each different sound has a quite distinctive spectrogram, as you can see in Figure 1.15.\n\n\n\n\n\n\nFigure 1.15: show_batch with spectrograms of sounds\n\n\n\nA time series can easily be converted into an image by simply plotting the time series on a graph. However, it is often a good idea to try to represent your data in a way that makes it as easy as possible to pull out the most important components. In a time series, things like seasonality and anomalies are most likely to be of interest. There are various transformations available for time series data. 
For instance, fast.ai student Ignacio Oguiza created images from a time series dataset for olive oil classification, using a technique called Gramian Angular Difference Field (GADF); you can see the result in Figure 1.16. He then fed those images to an image classification model just like the one you see in this chapter. His results, despite having only 30 training set images, were well over 90% accurate, and close to the state of the art.\n\n\n\n\n\n\nFigure 1.16: Converting a time series into an image\n\n\n\nAnother interesting fast.ai student project example comes from Gleb Esman. He was working on fraud detection at Splunk, using a dataset of users’ mouse movements and mouse clicks. He turned these into pictures by drawing an image where the position, speed, and acceleration of the mouse pointer was displayed using coloured lines, and the clicks were displayed using small colored circles, as shown in Figure 1.17. He then fed this into an image recognition model just like the one we’ve used in this chapter, and it worked so well that it led to a patent for this approach to fraud analytics!\n\n\n\n\n\n\nFigure 1.17: Converting computer mouse behavior to an image\n\n\n\nAnother example comes from the paper “Malware Classification with Deep Convolutional Neural Networks” by Mahmoud Kalash et al., which explains that “the malware binary file is divided into 8-bit sequences which are then converted to equivalent decimal values. This decimal vector is reshaped and a gray-scale image is generated that represents the malware sample,” like in Figure 1.18.\n\n\n\n\n\n\nFigure 1.18: Malware classification process\n\n\n\nThe authors then show “pictures” generated through this process of malware in different categories, as shown in Figure 1.19.\n\n\n\n\n\n\nFigure 1.19: Malware examples\n\n\n\nAs you can see, the different types of malware look very distinctive to the human eye. The model the researchers trained based on this image representation was more accurate at malware classification than any previous approach shown in the academic literature. This suggests a good rule of thumb for converting a dataset into an image representation: if the human eye can recognize categories from the images, then a deep learning model should be able to do so too.\nIn general, you’ll find that a small number of general approaches in deep learning can go a long way, if you’re a bit creative in how you represent your data! You shouldn’t think of approaches like the ones described here as “hacky workarounds,” because actually they often (as here) beat previously state-of-the-art results. 
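To see how little code such a representation can take, here is a rough sketch of the byte-to-image conversion quoted above (the file name and the fixed image width of 256 are assumptions for illustration, not the paper’s exact procedure):\n\nimport numpy as np\nfrom PIL import Image\n\nraw = np.fromfile('sample.bin', dtype=np.uint8)   # the binary file as 8-bit values, 0-255\nwidth = 256                                       # fixed width; the height depends on file size\nraw = raw[: (len(raw) // width) * width]          # trim so the bytes reshape into whole rows\nImage.fromarray(raw.reshape(-1, width), mode='L').save('sample.png')   # 'L' = 8-bit grayscale\n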
These really are the right ways to think about these problem domains.\n\n\nJargon Recap\nWe just covered a lot of information so let’s recap briefly, Table 1.2 provides a handy vocabulary.\n\n\n\nTable 1.2: Deep learning vocabulary\n\n\n\n\n\n\n\n\n\nTerm\nMeaning\n\n\n\n\nLabel\nThe data that we’re trying to predict, such as “dog” or “cat”\n\n\nArchitecture\nThe template of the model that we’re trying to fit; the actual mathematical function that we’re passing the input data and parameters to\n\n\nModel\nThe combination of the architecture with a particular set of parameters\n\n\nParameters\nThe values in the model that change what task it can do, and are updated through model training\n\n\nFit\nUpdate the parameters of the model such that the predictions of the model using the input data match the target labels\n\n\nTrain\nA synonym for fit\n\n\nPretrained model\nA model that has already been trained, generally using a large dataset, and will be fine-tuned\n\n\nFine-tune\nUpdate a pretrained model for a different task\n\n\nEpoch\nOne complete pass through the input data\n\n\nLoss\nA measure of how good the model is, chosen to drive training via SGD\n\n\nMetric\nA measurement of how good the model is, using the validation set, chosen for human consumption\n\n\nValidation set\nA set of data held out from training, used only for measuring how good the model is\n\n\nTraining set\nThe data used for fitting the model; does not include any data from the validation set\n\n\nOverfitting\nTraining a model in such a way that it remembers specific features of the input data, rather than generalizing well to data not seen during training\n\n\nCNN\nConvolutional neural network; a type of neural network that works particularly well for computer vision tasks\n\n\n\n\n\n\nWith this vocabulary in hand, we are now in a position to bring together all the key concepts introduced so far. Take a moment to review those definitions and read the following summary. If you can follow the explanation, then you’re well equipped to understand the discussions to come.\nMachine learning is a discipline where we define a program not by writing it entirely ourselves, but by learning from data. Deep learning is a specialty within machine learning that uses neural networks with multiple layers. Image classification is a representative example (also known as image recognition). We start with labeled data; that is, a set of images where we have assigned a label to each image indicating what it represents. Our goal is to produce a program, called a model, which, given a new image, will make an accurate prediction regarding what that new image represents.\nEvery model starts with a choice of architecture, a general template for how that kind of model works internally. The process of training (or fitting) the model is the process of finding a set of parameter values (or weights) that specialize that general architecture into a model that works well for our particular kind of data. In order to define how well a model does on a single prediction, we need to define a loss function, which determines how we score a prediction as good or bad.\nTo make the training process go faster, we might start with a pretrained model—a model that has already been trained on someone else’s data. 
We can then adapt it to our data by training it a bit more on our data, a process called fine-tuning.\nWhen we train a model, a key concern is to ensure that our model generalizes—that is, that it learns general lessons from our data which also apply to new items it will encounter, so that it can make good predictions on those items. The risk is that if we train our model badly, instead of learning general lessons it effectively memorizes what it has already seen, and then it will make poor predictions about new images. Such a failure is called overfitting. In order to avoid this, we always divide our data into two parts, the training set and the validation set. We train the model by showing it only the training set and then we evaluate how well the model is doing by seeing how well it performs on items from the validation set. In this way, we check if the lessons the model learns from the training set are lessons that generalize to the validation set. In order for a person to assess how well the model is doing on the validation set overall, we define a metric. During the training process, when the model has seen every item in the training set, we call that an epoch.\nAll these concepts apply to machine learning in general. That is, they apply to all sorts of schemes for defining a model by training it with data. What makes deep learning distinctive is a particular class of architectures: the architectures based on neural networks. In particular, tasks like image classification rely heavily on convolutional neural networks, which we will discuss shortly.",
+ "crumbs": [
+ "Getting Started",
+ "1Your Deep Learning Journey"
+ ]
+ },
+ {
+ "objectID": "intro.html#deep-learning-is-not-just-for-image-classification",
+ "href": "intro.html#deep-learning-is-not-just-for-image-classification",
+ "title": "1 Your Deep Learning Journey",
+ "section": "1.7 Deep Learning Is Not Just for Image Classification",
+ "text": "1.7 Deep Learning Is Not Just for Image Classification\nDeep learning’s effectiveness for classifying images has been widely discussed in recent years, even showing superhuman results on complex tasks like recognizing malignant tumors in CT scans. But it can do a lot more than this, as we will show here.\nFor instance, let’s talk about something that is critically important for autonomous vehicles: localizing objects in a picture. If a self-driving car doesn’t know where a pedestrian is, then it doesn’t know how to avoid one! Creating a model that can recognize the content of every individual pixel in an image is called segmentation. Here is how we can train a segmentation model with fastai, using a subset of the Camvid dataset from the paper “Semantic Object Classes in Video: A High-Definition Ground Truth Database” by Gabruel J. Brostow, Julien Fauqueur, and Roberto Cipolla:\n\npath = untar_data(URLs.CAMVID_TINY)\ndls = SegmentationDataLoaders.from_label_func(\n path, bs=8, fnames = get_image_files(path/\"images\"),\n label_func = lambda o: path/'labels'/f'{o.stem}_P{o.suffix}',\n codes = np.loadtxt(path/'codes.txt', dtype=str)\n)\n\nlearn = unet_learner(dls, resnet34)\nlearn.fine_tune(8)\n\n\n\n\nepoch\ntrain_loss\nvalid_loss\ntime\n\n\n\n\n0\n2.641862\n2.140568\n00:02\n\n\n\n\n\n\n\n\nepoch\ntrain_loss\nvalid_loss\ntime\n\n\n\n\n0\n1.624964\n1.464210\n00:02\n\n\n1\n1.454148\n1.284032\n00:02\n\n\n2\n1.342955\n1.048562\n00:02\n\n\n3\n1.199765\n0.852787\n00:02\n\n\n4\n1.078090\n0.838206\n00:02\n\n\n5\n0.975496\n0.746806\n00:02\n\n\n6\n0.892793\n0.725384\n00:02\n\n\n7\n0.827645\n0.726778\n00:02\n\n\n\n\n\nWe are not even going to walk through this code line by line, because it is nearly identical to our previous example! (Although we will be doing a deep dive into segmentation models in Chapter 15, along with all of the other models that we are briefly introducing in this chapter, and many, many more.)\nWe can visualize how well it achieved its task, by asking the model to color-code each pixel of an image. As you can see, it nearly perfectly classifies every pixel in every object. For instance, notice that all of the cars are overlaid with the same color and all of the trees are overlaid with the same color (in each pair of images, the lefthand image is the ground truth label and the right is the prediction from the model):\n\nlearn.show_results(max_n=6, figsize=(7,8))\n\n\n\n\n\n\n\n\n\n\n\nOne other area where deep learning has dramatically improved in the last couple of years is natural language processing (NLP). Computers can now generate text, translate automatically from one language to another, analyze comments, label words in sentences, and much more. 
Here is all of the code necessary to train a model that can classify the sentiment of a movie review better than anything that existed in the world just five years ago:\n\nfrom fastai.text.all import *\n\ndls = TextDataLoaders.from_folder(untar_data(URLs.IMDB), valid='test')\nlearn = text_classifier_learner(dls, AWD_LSTM, drop_mult=0.5, metrics=accuracy)\nlearn.fine_tune(4, 1e-2)\n\n\n\n\nepoch\ntrain_loss\nvalid_loss\naccuracy\ntime\n\n\n\n\n0\n0.878776\n0.748753\n0.500400\n01:27\n\n\n\n\n\n\n\n\nepoch\ntrain_loss\nvalid_loss\naccuracy\ntime\n\n\n\n\n0\n0.679118\n0.674778\n0.584040\n02:45\n\n\n1\n0.653671\n0.670396\n0.618040\n02:55\n\n\n2\n0.598665\n0.551815\n0.718920\n05:28\n\n\n3\n0.556812\n0.507450\n0.752480\n03:11\n\n\n\n\n\nIf you hit a “CUDA out of memory error” after running this cell, click on the Kernel menu, then Restart. Instead of executing the cell above, copy and paste the following code in it:\nfrom fastai.text.all import *\n\ndls = TextDataLoaders.from_folder(untar_data(URLs.IMDB), valid='test', bs=32)\nlearn = text_classifier_learner(dls, AWD_LSTM, drop_mult=0.5, metrics=accuracy)\nlearn.fine_tune(4, 1e-2)\nThis reduces the batch size to 32 (we will explain this later). If you keep hitting the same error, change 32 to 16.\nThis model is using the “IMDb Large Movie Review dataset” from the paper “Learning Word Vectors for Sentiment Analysis” by Andrew Maas et al. It works well with movie reviews of many thousands of words, but let’s test it out on a very short one to see how it does its thing:\n\nlearn.predict(\"I really liked that movie!\")\n\n\n\n\n('neg', tensor(0), tensor([0.8786, 0.1214]))\n\n\nHere we can see the model has considered this review to be negative, which is clearly a mistake for such a positive review; with a validation accuracy of only about 75%, the occasional error is to be expected. The first part of the result is the predicted label, the second part is the index of “neg” in our data vocabulary, and the last part is the probabilities attributed to each class (about 88% for “neg” and 12% for “pos”).\nNow it’s your turn! Write your own mini movie review, or copy one from the internet, and you can see what this model thinks about it.\n\nSidebar: The Order Matters\nIn a Jupyter notebook, the order in which you execute each cell is very important. It’s not like Excel, where everything gets updated as soon as you type something anywhere—it has an inner state that gets updated each time you execute a cell. For instance, when you run the first cell of the notebook (with the “CLICK ME” comment), you create an object called learn that contains a model and data for an image classification problem. If we were to run the cell just shown in the text (the one that predicts if a review is good or not) straight after, we would get an error as this learn object does not contain a text classification model. This cell needs to be run after the one containing:\nfrom fastai.text.all import *\n\ndls = TextDataLoaders.from_folder(untar_data(URLs.IMDB), valid='test')\nlearn = text_classifier_learner(dls, AWD_LSTM, drop_mult=0.5, \n metrics=accuracy)\nlearn.fine_tune(4, 1e-2)\nThe outputs themselves can be deceiving, because they include the results of the last time the cell was executed; if you change the code inside a cell without executing it, the old (misleading) results will remain.\nExcept when we mention it explicitly, the notebooks provided on the book website are meant to be run in order, from top to bottom. 
In general, when experimenting, you will find yourself executing cells in any order to go fast (which is a super neat feature of Jupyter Notebook), but once you have explored and arrived at the final version of your code, make sure you can run the cells of your notebooks in order (your future self won’t necessarily remember the convoluted path you took otherwise!).\nIn command mode, pressing 0 twice will restart the kernel (which is the engine powering your notebook). This will wipe your state clean and make it as if you had just started in the notebook. Choose Run All Above from the Cell menu to run all cells above the point where you are. We have found this to be very useful when developing the fastai library.\n\n\nEnd sidebar\nIf you ever have any questions about a fastai method, you should use the function doc, passing it the method name:\ndoc(learn.predict)\nThis will make a small window pop up with content like this:\n\nA brief one-line explanation is provided by doc. The “Show in docs” link takes you to the full documentation, where you’ll find all the details and lots of examples. Also, most of fastai’s methods are just a handful of lines, so you can click the “source” link to see exactly what’s going on behind the scenes.\nLet’s move on to something much less sexy, but perhaps significantly more widely commercially useful: building models from plain tabular data.\n\njargon: Tabular: Data that is in the form of a table, such as from a spreadsheet, database, or CSV file. A tabular model is a model that tries to predict one column of a table based on information in other columns of the table.\n\nIt turns out that looks very similar too. Here is the code necessary to train a model that will predict whether a person is a high-income earner, based on their socioeconomic background:\n\nfrom fastai.tabular.all import *\npath = untar_data(URLs.ADULT_SAMPLE)\n\ndls = TabularDataLoaders.from_csv(path/'adult.csv', path=path, y_names=\"salary\",\n cat_names = ['workclass', 'education', 'marital-status', 'occupation',\n 'relationship', 'race'],\n cont_names = ['age', 'fnlwgt', 'education-num'],\n procs = [Categorify, FillMissing, Normalize])\n\nlearn = tabular_learner(dls, metrics=accuracy)\n\nAs you see, we had to tell fastai which columns are categorical (that is, contain values that are one of a discrete set of choices, such as occupation) and which are continuous (that is, contain a number that represents a quantity, such as age).\nThere is no pretrained model available for this task (in general, pretrained models are not widely available for any tabular modeling tasks, although some organizations have created them for internal use), so we don’t use fine_tune in this case. Instead we use fit_one_cycle, the most commonly used method for training fastai models from scratch (i.e. without transfer learning):\n\nlearn.fit_one_cycle(3)\n\n\n\n\nepoch\ntrain_loss\nvalid_loss\naccuracy\ntime\n\n\n\n\n0\n0.372397\n0.357177\n0.832463\n00:08\n\n\n1\n0.351544\n0.341505\n0.841523\n00:08\n\n\n2\n0.338763\n0.339184\n0.845670\n00:08\n\n\n\n\n\nThis model is using the Adult dataset, from the paper “Scaling Up the Accuracy of Naive-Bayes Classifiers: a Decision-Tree Hybrid” by Rob Kohavi, which contains some demographic data about individuals (like their education, marital status, race, sex, and whether or not they have an annual income greater than $50k). The model is over 80% accurate, and took around 30 seconds to train.\nLet’s look at one more. 
Recommendation systems are very important, particularly in e-commerce. Companies like Amazon and Netflix try hard to recommend products or movies that users might like. Here’s how to train a model that will predict movies people might like, based on their previous viewing habits, using the MovieLens dataset:\n\nfrom fastai.collab import *\npath = untar_data(URLs.ML_SAMPLE)\ndls = CollabDataLoaders.from_csv(path/'ratings.csv')\nlearn = collab_learner(dls, y_range=(0.5,5.5))\nlearn.fine_tune(10)\n\n\n\n\nepoch\ntrain_loss\nvalid_loss\ntime\n\n\n\n\n0\n1.510897\n1.410028\n00:00\n\n\n\n\n\n\n\n\nepoch\ntrain_loss\nvalid_loss\ntime\n\n\n\n\n0\n1.375435\n1.350930\n00:00\n\n\n1\n1.270062\n1.173962\n00:00\n\n\n2\n1.023159\n0.879298\n00:00\n\n\n3\n0.797398\n0.739787\n00:00\n\n\n4\n0.685500\n0.700903\n00:00\n\n\n5\n0.646508\n0.686387\n00:00\n\n\n6\n0.623985\n0.681087\n00:00\n\n\n7\n0.606319\n0.676885\n00:00\n\n\n8\n0.606975\n0.675833\n00:00\n\n\n9\n0.602670\n0.675682\n00:00\n\n\n\n\n\nThis model is predicting movie ratings on a scale of 0.5 to 5.0 to within around 0.6 average error. Since we’re predicting a continuous number, rather than a category, we have to tell fastai what range our target has, using the y_range parameter.\nAlthough we’re not actually using a pretrained model (for the same reason that we didn’t for the tabular model), this example shows that fastai lets us use fine_tune anyway in this case (you’ll learn how and why this works in Chapter 5). Sometimes it’s best to experiment with fine_tune versus fit_one_cycle to see which works best for your dataset.\nWe can use the same show_results call we saw earlier to view a few examples of user and movie IDs, actual ratings, and predictions:\n\nlearn.show_results()\n\n\n\n\n\n\n\n\nuserId\nmovieId\nrating\nrating_pred\n\n\n\n\n0\n66.0\n79.0\n4.0\n3.978900\n\n\n1\n97.0\n15.0\n4.0\n3.851795\n\n\n2\n55.0\n79.0\n3.5\n3.945623\n\n\n3\n98.0\n91.0\n4.0\n4.458704\n\n\n4\n53.0\n7.0\n5.0\n4.670005\n\n\n5\n26.0\n69.0\n5.0\n4.319870\n\n\n6\n81.0\n16.0\n4.5\n4.426761\n\n\n7\n80.0\n7.0\n4.0\n4.046183\n\n\n8\n51.0\n94.0\n5.0\n3.499996\n\n\n\n\n\n\n\nSidebar: Datasets: Food for Models\nYou’ve already seen quite a few models in this section, each one trained using a different dataset to do a different task. In machine learning and deep learning, we can’t do anything without data. So, the people that create datasets for us to train our models on are the (often underappreciated) heroes. Some of the most useful and important datasets are those that become important academic baselines; that is, datasets that are widely studied by researchers and used to compare algorithmic changes. Some of these become household names (at least, among households that train models!), such as MNIST, CIFAR-10, and ImageNet.\nThe datasets used in this book have been selected because they provide great examples of the kinds of data that you are likely to encounter, and the academic literature has many examples of model results using these datasets to which you can compare your work.\nMost datasets used in this book took the creators a lot of work to build. For instance, later in the book we’ll be showing you how to create a model that can translate between French and English. The key input to this is a French/English parallel text corpus prepared back in 2009 by Professor Chris Callison-Burch of the University of Pennsylvania. This dataset contains over 20 million sentence pairs in French and English. 
He built the dataset in a really clever way: by crawling millions of Canadian web pages (which are often multilingual) and then using a set of simple heuristics to transform URLs of French content onto URLs pointing to the same content in English.\nAs you look at datasets throughout this book, think about where they might have come from, and how they might have been curated. Then think about what kinds of interesting datasets you could create for your own projects. (We’ll even take you step by step through the process of creating your own image dataset soon.)\nfast.ai has spent a lot of time creating cut-down versions of popular datasets that are specially designed to support rapid prototyping and experimentation, and to be easier to learn with. In this book we will often start by using one of the cut-down versions and later scale up to the full-size version (just as we’re doing in this chapter!). In fact, this is how the world’s top practitioners do their modeling in practice; they do most of their experimentation and prototyping with subsets of their data, and only use the full dataset when they have a good understanding of what they have to do.\n\n\nEnd sidebar\nEach of the models we trained showed a training and validation loss. A good validation set is one of the most important pieces of the training process. Let’s see why and learn how to create one.",
+ "crumbs": [
+ "Getting Started",
+ "1Your Deep Learning Journey"
+ ]
+ },
+ {
+ "objectID": "intro.html#validation-sets-and-test-sets",
+ "href": "intro.html#validation-sets-and-test-sets",
+ "title": "1 Your Deep Learning Journey",
+ "section": "1.8 Validation Sets and Test Sets",
+ "text": "1.8 Validation Sets and Test Sets\nAs we’ve discussed, the goal of a model is to make predictions about data. But the model training process is fundamentally dumb. If we trained a model with all our data, and then evaluated the model using that same data, we would not be able to tell how well our model can perform on data it hasn’t seen. Without this very valuable piece of information to guide us in training our model, there is a very good chance it would become good at making predictions about that data but would perform poorly on new data.\nTo avoid this, our first step was to split our dataset into two sets: the training set (which our model sees in training) and the validation set, also known as the development set (which is used only for evaluation). This lets us test that the model learns lessons from the training data that generalize to new data, the validation data.\nOne way to understand this situation is that, in a sense, we don’t want our model to get good results by “cheating.” If it makes an accurate prediction for a data item, that should be because it has learned characteristics of that kind of item, and not because the model has been shaped by actually having seen that particular item.\nSplitting off our validation data means our model never sees it in training and so is completely untainted by it, and is not cheating in any way. Right?\nIn fact, not necessarily. The situation is more subtle. This is because in realistic scenarios we rarely build a model just by training its weight parameters once. Instead, we are likely to explore many versions of a model through various modeling choices regarding network architecture, learning rates, data augmentation strategies, and other factors we will discuss in upcoming chapters. Many of these choices can be described as choices of hyperparameters. The word reflects that they are parameters about parameters, since they are the higher-level choices that govern the meaning of the weight parameters.\nThe problem is that even though the ordinary training process is only looking at predictions on the training data when it learns values for the weight parameters, the same is not true of us. We, as modelers, are evaluating the model by looking at predictions on the validation data when we decide to explore new hyperparameter values! So subsequent versions of the model are, indirectly, shaped by us having seen the validation data. Just as the automatic training process is in danger of overfitting the training data, we are in danger of overfitting the validation data through human trial and error and exploration.\nThe solution to this conundrum is to introduce another level of even more highly reserved data, the test set. Just as we hold back the validation data from the training process, we must hold back the test set data even from ourselves. It cannot be used to improve the model; it can only be used to evaluate the model at the very end of our efforts. In effect, we define a hierarchy of cuts of our data, based on how fully we want to hide it from training and modeling processes: training data is fully exposed, the validation data is less exposed, and test data is totally hidden. 
This hierarchy parallels the different kinds of modeling and evaluation processes themselves—the automatic training process with back propagation, the more manual process of trying different hyper-parameters between training sessions, and the assessment of our final result.\nThe test and validation sets should have enough data to ensure that you get a good estimate of your accuracy. If you’re creating a cat detector, for instance, you generally want at least 30 cats in your validation set. That means that if you have a dataset with thousands of items, using the default 20% validation set size may be more than you need. On the other hand, if you have lots of data, using some of it for validation probably doesn’t have any downsides.\nHaving two levels of “reserved data”—a validation set and a test set, with one level representing data that you are virtually hiding from yourself—may seem a bit extreme. But the reason it is often necessary is because models tend to gravitate toward the simplest way to do good predictions (memorization), and we as fallible humans tend to gravitate toward fooling ourselves about how well our models are performing. The discipline of the test set helps us keep ourselves intellectually honest. That doesn’t mean we always need a separate test set—if you have very little data, you may need to just have a validation set—but generally it’s best to use one if at all possible.\nThis same discipline can be critical if you intend to hire a third party to perform modeling work on your behalf. A third party might not understand your requirements accurately, or their incentives might even encourage them to misunderstand them. A good test set can greatly mitigate these risks and let you evaluate whether their work solves your actual problem.\nTo put it bluntly, if you’re a senior decision maker in your organization (or you’re advising senior decision makers), the most important takeaway is this: if you ensure that you really understand what test and validation sets are and why they’re important, then you’ll avoid the single biggest source of failures we’ve seen when organizations decide to use AI. For instance, if you’re considering bringing in an external vendor or service, make sure that you hold out some test data that the vendor never gets to see. Then you check their model on your test data, using a metric that you choose based on what actually matters to you in practice, and you decide what level of performance is adequate. (It’s also a good idea for you to try out some simple baseline yourself, so you know what a really simple model can achieve. Often it’ll turn out that your simple model performs just as well as one produced by an external “expert”!)\n\nUse Judgment in Defining Test Sets\nTo do a good job of defining a validation set (and possibly a test set), you will sometimes want to do more than just randomly grab a fraction of your original dataset. Remember: a key property of the validation and test sets is that they must be representative of the new data you will see in the future. This may sound like an impossible order! By definition, you haven’t seen this data yet. But you usually still do know some things.\nIt’s instructive to look at a few example cases. Many of these examples come from predictive modeling competitions on the Kaggle platform, which is a good representation of problems and methods you might see in practice.\nOne case might be if you are looking at time series data. 
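For date-ordered data like this, doing more than randomly grabbing a fraction usually means splitting on the date column rather than shuffling; a minimal pandas sketch of such a split (the file name, column name, and two-week window are made up for illustration):\n\nimport pandas as pd\n\ndf = pd.read_csv('sales.csv', parse_dates=['date'])\ncutoff = df['date'].max() - pd.Timedelta(days=14)   # hold out the most recent two weeks\ntrain_df = df[df['date'] <= cutoff]\nvalid_df = df[df['date'] > cutoff]\n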
For a time series, choosing a random subset of the data will be both too easy (you can look at the data both before and after the dates you are trying to predict) and not representative of most business use cases (where you are using historical data to build a model for use in the future). If your data includes the date and you are building a model to use in the future, you will want to choose a continuous section with the latest dates as your validation set (for instance, the last two weeks or last month of available data).\nSuppose you want to split the time series data in Figure 1.20 into training and validation sets.\n\n\n\n\n\n\nFigure 1.20: A time series\n\n\n\nA random subset is a poor choice (too easy to fill in the gaps, and not indicative of what you’ll need in production), as we can see in Figure 1.21.\n\n\n\n\n\n\nFigure 1.21: A poor training subset\n\n\n\nInstead, use the earlier data as your training set (and the later data for the validation set), as shown in Figure 1.22.\n\n\n\n\n\n\nFigure 1.22: A good training subset\n\n\n\nFor example, Kaggle had a competition to predict the sales in a chain of Ecuadorian grocery stores. Kaggle’s training data ran from Jan 1 2013 to Aug 15 2017, and the test data spanned Aug 16 2017 to Aug 31 2017. That way, the competition organizer ensured that entrants were making predictions for a time period that was in the future, from the perspective of their model. This is similar to the way quant hedge fund traders do back-testing to check whether their models are predictive of future periods, based on past data.\nA second common case is when you can easily anticipate ways the data you will be making predictions for in production may be qualitatively different from the data you have to train your model with.\nIn the Kaggle distracted driver competition, the independent variables are pictures of drivers at the wheel of a car, and the dependent variables are categories such as texting, eating, or safely looking ahead. Lots of pictures are of the same drivers in different positions, as we can see in Figure 1.23. If you were an insurance company building a model from this data, note that you would be most interested in how the model performs on drivers it hasn’t seen before (since you would likely have training data only for a small group of people). In recognition of this, the test data for the competition consists of images of people that don’t appear in the training set.\n\n\n\n\n\n\nFigure 1.23: Two pictures from the training data\n\n\n\nIf you put one of the images in Figure 1.23 in your training set and one in the validation set, your model will have an easy time making a prediction for the one in the validation set, so it will seem to be performing better than it would on new people. Another perspective is that if you used all the people in training your model, your model might be overfitting to particularities of those specific people, and not just learning the states (texting, eating, etc.).\nA similar dynamic was at work in the Kaggle fisheries competition to identify the species of fish caught by fishing boats in order to reduce illegal fishing of endangered populations. The test set consisted of boats that didn’t appear in the training data. This means that you’d want your validation set to include boats that are not in the training set.\nSometimes it may not be clear how your validation data will differ. 
For instance, for a problem using satellite imagery, you’d need to gather more information on whether the training set just contained certain geographic locations, or if it came from geographically scattered data.\nNow that you have gotten a taste of how to build a model, you can decide what you want to dig into next.",
+ "crumbs": [
+ "Getting Started",
+ "1Your Deep Learning Journey"
+ ]
+ },
+ {
+ "objectID": "intro.html#a-choose-your-own-adventure-moment",
+ "href": "intro.html#a-choose-your-own-adventure-moment",
+ "title": "1 Your Deep Learning Journey",
+ "section": "1.9 A Choose Your Own Adventure moment",
+ "text": "1.9 A Choose Your Own Adventure moment\nIf you would like to learn more about how to use deep learning models in practice, including how to identify and fix errors, create a real working web application, and avoid your model causing unexpected harm to your organization or society more generally, then keep reading the next two chapters. If you would like to start learning the foundations of how deep learning works under the hood, skip to Chapter 4. (Did you ever read Choose Your Own Adventure books as a kid? Well, this is kind of like that… except with more deep learning than that book series contained.)\nYou will need to read all these chapters to progress further in the book, but it is totally up to you which order you read them in. They don’t depend on each other. If you skip ahead to Chapter 4, we will remind you at the end to come back and read the chapters you skipped over before you go any further.",
+ "crumbs": [
+ "Getting Started",
+ "1Your Deep Learning Journey"
+ ]
+ },
+ {
+ "objectID": "intro.html#questionnaire",
+ "href": "intro.html#questionnaire",
+ "title": "1 Your Deep Learning Journey",
+ "section": "1.10 Questionnaire",
+ "text": "1.10 Questionnaire\nIt can be hard to know in pages and pages of prose what the key things are that you really need to focus on and remember. So, we’ve prepared a list of questions and suggested steps to complete at the end of each chapter. All the answers are in the text of the chapter, so if you’re not sure about anything here, reread that part of the text and make sure you understand it. Answers to all these questions are also available on the book’s website. You can also visit the forums if you get stuck to get help from other folks studying this material.\nFor more questions, including detailed answers and links to the video timeline, have a look at Radek Osmulski’s aiquizzes.\n\nDo you need these for deep learning?\n\nLots of math T / F\nLots of data T / F\nLots of expensive computers T / F\nA PhD T / F\n\nName five areas where deep learning is now the best in the world.\nWhat was the name of the first device that was based on the principle of the artificial neuron?\nBased on the book of the same name, what are the requirements for parallel distributed processing (PDP)?\nWhat were the two theoretical misunderstandings that held back the field of neural networks?\nWhat is a GPU?\nOpen a notebook and execute a cell containing: 1+1. What happens?\nFollow through each cell of the stripped version of the notebook for this chapter. Before executing each cell, guess what will happen.\nComplete the Jupyter Notebook online appendix.\nWhy is it hard to use a traditional computer program to recognize images in a photo?\nWhat did Samuel mean by “weight assignment”?\nWhat term do we normally use in deep learning for what Samuel called “weights”?\nDraw a picture that summarizes Samuel’s view of a machine learning model.\nWhy is it hard to understand why a deep learning model makes a particular prediction?\nWhat is the name of the theorem that shows that a neural network can solve any mathematical problem to any level of accuracy?\nWhat do you need in order to train a model?\nHow could a feedback loop impact the rollout of a predictive policing model?\nDo we always have to use 224×224-pixel images with the cat recognition model?\nWhat is the difference between classification and regression?\nWhat is a validation set? What is a test set? Why do we need them?\nWhat will fastai do if you don’t provide a validation set?\nCan we always use a random sample for a validation set? Why or why not?\nWhat is overfitting? Provide an example.\nWhat is a metric? How does it differ from “loss”?\nHow can pretrained models help?\nWhat is the “head” of a model?\nWhat kinds of features do the early layers of a CNN find? How about the later layers?\nAre image models only useful for photos?\nWhat is an “architecture”?\nWhat is segmentation?\nWhat is y_range used for? When do we need it?\nWhat are “hyperparameters”?\nWhat’s the best way to avoid failures when using AI in an organization?\n\n\nFurther Research\nEach chapter also has a “Further Research” section that poses questions that aren’t fully answered in the text, or gives more advanced assignments. Answers to these questions aren’t on the book’s website; you’ll need to do your own research!\n\nWhy is a GPU useful for deep learning? How is a CPU different, and why is it less effective for deep learning?\nTry to think of three areas where feedback loops might impact the use of machine learning. See if you can find documented examples of that happening in practice.",
+ "crumbs": [
+ "Getting Started",
+ "1Your Deep Learning Journey"
+ ]
+ },
+ {
+ "objectID": "mnist_basics.html",
+ "href": "mnist_basics.html",
+ "title": "4 Under the Hood: Training a Digit Classifier",
+ "section": "",
+ "text": "4.1 Pixels: The Foundations of Computer Vision\nIn order to understand what happens in a computer vision model, we first have to understand how computers handle images. We’ll use one of the most famous datasets in computer vision, MNIST, for our experiments. MNIST contains images of handwritten digits, collected by the National Institute of Standards and Technology and collated into a machine learning dataset by Yann Lecun and his colleagues. Lecun used MNIST in 1998 in Lenet-5, the first computer system to demonstrate practically useful recognition of handwritten digit sequences. This was one of the most important breakthroughs in the history of AI.",
+ "crumbs": [
+ "The Details",
+ "4Under the Hood: Training a Digit Classifier"
+ ]
+ },
+ {
+ "objectID": "mnist_basics.html#sidebar-tenacity-and-deep-learning",
+ "href": "mnist_basics.html#sidebar-tenacity-and-deep-learning",
+ "title": "4 Under the Hood: Training a Digit Classifier",
+ "section": "4.2 Sidebar: Tenacity and Deep Learning",
+ "text": "4.2 Sidebar: Tenacity and Deep Learning\nThe story of deep learning is one of tenacity and grit by a handful of dedicated researchers. After early hopes (and hype!) neural networks went out of favor in the 1990’s and 2000’s, and just a handful of researchers kept trying to make them work well. Three of them, Yann Lecun, Yoshua Bengio, and Geoffrey Hinton, were awarded the highest honor in computer science, the Turing Award (generally considered the “Nobel Prize of computer science”), in 2018 after triumphing despite the deep skepticism and disinterest of the wider machine learning and statistics community.\nGeoff Hinton has told of how even academic papers showing dramatically better results than anything previously published would be rejected by top journals and conferences, just because they used a neural network. Yann Lecun’s work on convolutional neural networks, which we will study in the next section, showed that these models could read handwritten text—something that had never been achieved before. However, his breakthrough was ignored by most researchers, even as it was used commercially to read 10% of the checks in the US!\nIn addition to these three Turing Award winners, there are many other researchers who have battled to get us to where we are today. For instance, Jurgen Schmidhuber (who many believe should have shared in the Turing Award) pioneered many important ideas, including working with his student Sepp Hochreiter on the long short-term memory (LSTM) architecture (widely used for speech recognition and other text modeling tasks, and used in the IMDb example in Chapter 1). Perhaps most important of all, Paul Werbos in 1974 invented back-propagation for neural networks, the technique shown in this chapter and used universally for training neural networks (Werbos 1994). His development was almost entirely ignored for decades, but today it is considered the most important foundation of modern AI.\nThere is a lesson here for all of us! On your deep learning journey you will face many obstacles, both technical, and (even more difficult) posed by people around you who don’t believe you’ll be successful. There’s one guaranteed way to fail, and that’s to stop trying. We’ve seen that the only consistent trait amongst every fast.ai student that’s gone on to be a world-class practitioner is that they are all very tenacious.",
+ "crumbs": [
+ "The Details",
+ "4Under the Hood: Training a Digit Classifier"
+ ]
+ },
+ {
+ "objectID": "mnist_basics.html#end-sidebar",
+ "href": "mnist_basics.html#end-sidebar",
+ "title": "4 Under the Hood: Training a Digit Classifier",
+ "section": "4.3 End sidebar",
+ "text": "4.3 End sidebar\nFor this initial tutorial we are just going to try to create a model that can classify any image as a 3 or a 7. So let’s download a sample of MNIST that contains images of just these digits:\n\npath = untar_data(URLs.MNIST_SAMPLE)\n\nWe can see what’s in this directory by using ls, a method added by fastai. This method returns an object of a special fastai class called L, which has all the same functionality of Python’s built-in list, plus a lot more. One of its handy features is that, when printed, it displays the count of items, before listing the items themselves (if there are more than 10 items, it just shows the first few):\n\npath.ls()\n\n(#3) [Path('valid'),Path('labels.csv'),Path('train')]\n\n\nThe MNIST dataset follows a common layout for machine learning datasets: separate folders for the training set and the validation set (and/or test set). Let’s see what’s inside the training set:\n\n(path/'train').ls()\n\n(#2) [Path('train/7'),Path('train/3')]\n\n\nThere’s a folder of 3s, and a folder of 7s. In machine learning parlance, we say that “3” and “7” are the labels (or targets) in this dataset. Let’s take a look in one of these folders (using sorted to ensure we all get the same order of files):\n\nthrees = (path/'train'/'3').ls().sorted()\nsevens = (path/'train'/'7').ls().sorted()\nthrees\n\n(#6131) [Path('train/3/10.png'),Path('train/3/10000.png'),Path('train/3/10011.png'),Path('train/3/10031.png'),Path('train/3/10034.png'),Path('train/3/10042.png'),Path('train/3/10052.png'),Path('train/3/1007.png'),Path('train/3/10074.png'),Path('train/3/10091.png')...]\n\n\nAs we might expect, it’s full of image files. Let’s take a look at one now. Here’s an image of a handwritten number 3, taken from the famous MNIST dataset of handwritten numbers:\n\nim3_path = threes[1]\nim3 = Image.open(im3_path)\nim3\n\n\n\n\n\n\n\n\nHere we are using the Image class from the Python Imaging Library (PIL), which is the most widely used Python package for opening, manipulating, and viewing images. Jupyter knows about PIL images, so it displays the image for us automatically.\nIn a computer, everything is represented as a number. To view the numbers that make up this image, we have to convert it to a NumPy array or a PyTorch tensor. For instance, here’s what a section of the image looks like, converted to a NumPy array:\n\narray(im3)[4:10,4:10]\n\narray([[ 0, 0, 0, 0, 0, 0],\n [ 0, 0, 0, 0, 0, 29],\n [ 0, 0, 0, 48, 166, 224],\n [ 0, 93, 244, 249, 253, 187],\n [ 0, 107, 253, 253, 230, 48],\n [ 0, 3, 20, 20, 15, 0]], dtype=uint8)\n\n\nThe 4:10 indicates we requested the rows from index 4 (included) to 10 (not included) and the same for the columns. NumPy indexes from top to bottom and left to right, so this section is located in the top-left corner of the image. 
Here’s the same thing as a PyTorch tensor:\n\ntensor(im3)[4:10,4:10]\n\ntensor([[ 0, 0, 0, 0, 0, 0],\n [ 0, 0, 0, 0, 0, 29],\n [ 0, 0, 0, 48, 166, 224],\n [ 0, 93, 244, 249, 253, 187],\n [ 0, 107, 253, 253, 230, 48],\n [ 0, 3, 20, 20, 15, 0]], dtype=torch.uint8)\n\n\nWe can slice the array to pick just the part with the top of the digit in it, and then use a Pandas DataFrame to color-code the values using a gradient, which shows us clearly how the image is created from the pixel values:\n\nim3_t = tensor(im3)\ndf = pd.DataFrame(im3_t[4:15,4:22])\ndf.style.set_properties(**{'font-size':'6pt'}).background_gradient('Greys')\n\n\n\n\n\n\n\n\n\n\nYou can see that the background white pixels are stored as the number 0, black is the number 255, and shades of gray are between the two. The entire image contains 28 pixels across and 28 pixels down, for a total of 784 pixels. (This is much smaller than an image that you would get from a phone camera, which has millions of pixels, but is a convenient size for our initial learning and experiments. We will build up to bigger, full-color images soon.)\nSo, now you’ve seen what an image looks like to a computer, let’s recall our goal: create a model that can recognize 3s and 7s. How might you go about getting a computer to do that?\n\nWarning: Stop and Think!: Before you read on, take a moment to think about how a computer might be able to recognize these two different digits. What kinds of features might it be able to look at? How might it be able to identify these features? How could it combine them together? Learning works best when you try to solve problems yourself, rather than just reading somebody else’s answers; so step away from this book for a few minutes, grab a piece of paper and pen, and jot some ideas down…",
+ "crumbs": [
+ "The Details",
+ "4Under the Hood: Training a Digit Classifier"
+ ]
+ },
+ {
+ "objectID": "mnist_basics.html#first-try-pixel-similarity",
+ "href": "mnist_basics.html#first-try-pixel-similarity",
+ "title": "4 Under the Hood: Training a Digit Classifier",
+ "section": "4.4 First Try: Pixel Similarity",
+ "text": "4.4 First Try: Pixel Similarity\nSo, here is a first idea: how about we find the average pixel value for every pixel of the 3s, then do the same for the 7s. This will give us two group averages, defining what we might call the “ideal” 3 and 7. Then, to classify an image as one digit or the other, we see which of these two ideal digits the image is most similar to. This certainly seems like it should be better than nothing, so it will make a good baseline.\n\njargon: Baseline: A simple model which you are confident should perform reasonably well. It should be very simple to implement, and very easy to test, so that you can then test each of your improved ideas, and make sure they are always better than your baseline. Without starting with a sensible baseline, it is very difficult to know whether your super-fancy models are actually any good. One good approach to creating a baseline is doing what we have done here: think of a simple, easy-to-implement model. Another good approach is to search around to find other people that have solved similar problems to yours, and download and run their code on your dataset. Ideally, try both of these!\n\nStep one for our simple model is to get the average of pixel values for each of our two groups. In the process of doing this, we will learn a lot of neat Python numeric programming tricks!\nLet’s create a tensor containing all of our 3s stacked together. We already know how to create a tensor containing a single image. To create a tensor containing all the images in a directory, we will first use a Python list comprehension to create a plain list of the single image tensors.\nWe will use Jupyter to do some little checks of our work along the way—in this case, making sure that the number of returned items seems reasonable:\n\nseven_tensors = [tensor(Image.open(o)) for o in sevens]\nthree_tensors = [tensor(Image.open(o)) for o in threes]\nlen(three_tensors),len(seven_tensors)\n\n(6131, 6265)\n\n\n\nnote: List Comprehensions: List and dictionary comprehensions are a wonderful feature of Python. Many Python programmers use them every day, including the authors of this book—they are part of “idiomatic Python.” But programmers coming from other languages may have never seen them before. There are a lot of great tutorials just a web search away, so we won’t spend a long time discussing them now. Here is a quick explanation and example to get you started. A list comprehension looks like this: new_list = [f(o) for o in a_list if o>0]. This will return every element of a_list that is greater than 0, after passing it to the function f. There are three parts here: the collection you are iterating over (a_list), an optional filter (if o>0), and something to do to each element (f(o)). It’s not only shorter to write but way faster than the alternative ways of creating the same list with a loop.\n\nWe’ll also check that one of the images looks okay. Since we now have tensors (which Jupyter by default will print as values), rather than PIL images (which Jupyter by default will display as images), we need to use fastai’s show_image function to display it:\n\nshow_image(three_tensors[1]);\n\n\n\n\n\n\n\n\nFor every pixel position, we want to compute the average over all the images of the intensity of that pixel. To do this we first combine all the images in this list into a single three-dimensional tensor. The most common way to describe such a tensor is to call it a rank-3 tensor. 
We often need to stack up individual tensors in a collection into a single tensor. Unsurprisingly, PyTorch comes with a function called stack that we can use for this purpose.\nSome operations in PyTorch, such as taking a mean, require us to cast our integer types to float types. Since we’ll be needing this later, we’ll also cast our stacked tensor to float now. Casting in PyTorch is as simple as typing the name of the type you wish to cast to, and treating it as a method.\nGenerally when images are floats, the pixel values are expected to be between 0 and 1, so we will also divide by 255 here:\n\nstacked_sevens = torch.stack(seven_tensors).float()/255\nstacked_threes = torch.stack(three_tensors).float()/255\nstacked_threes.shape\n\ntorch.Size([6131, 28, 28])\n\n\nPerhaps the most important attribute of a tensor is its shape. This tells you the length of each axis. In this case, we can see that we have 6,131 images, each of size 28×28 pixels. There is nothing specifically about this tensor that says that the first axis is the number of images, the second is the height, and the third is the width—the semantics of a tensor are entirely up to us, and how we construct it. As far as PyTorch is concerned, it is just a bunch of numbers in memory.\nThe length of a tensor’s shape is its rank:\n\nlen(stacked_threes.shape)\n\n3\n\n\nIt is really important for you to commit to memory and practice these bits of tensor jargon: rank is the number of axes or dimensions in a tensor; shape is the size of each axis of a tensor.\n\nA: Watch out because the term “dimension” is sometimes used in two ways. Consider that we live in “three-dimensional space” where a physical position can be described by a 3-vector v. But according to PyTorch, the attribute v.ndim (which sure looks like the “number of dimensions” of v) equals one, not three! Why? Because v is a vector, which is a tensor of rank one, meaning that it has only one axis (even if that axis has a length of three). In other words, sometimes dimension is used for the size of an axis (“space is three-dimensional”); other times, it is used for the rank, or the number of axes (“a matrix has two dimensions”). When confused, I find it helpful to translate all statements into terms of rank, axis, and length, which are unambiguous terms.\n\nWe can also get a tensor’s rank directly with ndim:\n\nstacked_threes.ndim\n\n3\n\n\nFinally, we can compute what the ideal 3 looks like. We calculate the mean of all the image tensors by taking the mean along dimension 0 of our stacked, rank-3 tensor. This is the dimension that indexes over all the images.\nIn other words, for every pixel position, this will compute the average of that pixel over all images. The result will be one value for every pixel position, or a single image. Here it is:\n\nmean3 = stacked_threes.mean(0)\nshow_image(mean3);\n\n\n\n\n\n\n\n\nAccording to this dataset, this is the ideal number 3! (You may not like it, but this is what peak number 3 performance looks like.) You can see how it’s very dark where all the images agree it should be dark, but it becomes wispy and blurry where the images disagree.\nLet’s do the same thing for the 7s, but put all the steps together at once to save some time:\n\nmean7 = stacked_sevens.mean(0)\nshow_image(mean7);\n\n\n\n\n\n\n\n\nLet’s now pick an arbitrary 3 and measure its distance from our “ideal digits.”\n\nstop: Stop and Think!: How would you calculate how similar a particular image is to each of our ideal digits? 
Remember to step away from this book and jot down some ideas before you move on! Research shows that recall and understanding improves dramatically when you are engaged with the learning process by solving problems, experimenting, and trying new ideas yourself\n\nHere’s a sample 3:\n\na_3 = stacked_threes[1]\nshow_image(a_3);\n\n\n\n\n\n\n\n\nHow can we determine its distance from our ideal 3? We can’t just add up the differences between the pixels of this image and the ideal digit. Some differences will be positive while others will be negative, and these differences will cancel out, resulting in a situation where an image that is too dark in some places and too light in others might be shown as having zero total differences from the ideal. That would be misleading!\nTo avoid this, there are two main ways data scientists measure distance in this context:\n\nTake the mean of the absolute value of differences (absolute value is the function that replaces negative values with positive values). This is called the mean absolute difference or L1 norm\nTake the mean of the square of differences (which makes everything positive) and then take the square root (which undoes the squaring). This is called the root mean squared error (RMSE) or L2 norm.\n\n\nimportant: It’s Okay to Have Forgotten Your Math: In this book we generally assume that you have completed high school math, and remember at least some of it… But everybody forgets some things! It all depends on what you happen to have had reason to practice in the meantime. Perhaps you have forgotten what a square root is, or exactly how they work. No problem! Any time you come across a maths concept that is not explained fully in this book, don’t just keep moving on; instead, stop and look it up. Make sure you understand the basic idea, how it works, and why we might be using it. One of the best places to refresh your understanding is Khan Academy. For instance, Khan Academy has a great introduction to square roots.\n\nLet’s try both of these now:\n\ndist_3_abs = (a_3 - mean3).abs().mean()\ndist_3_sqr = ((a_3 - mean3)**2).mean().sqrt()\ndist_3_abs,dist_3_sqr\n\n(tensor(0.1114), tensor(0.2021))\n\n\n\ndist_7_abs = (a_3 - mean7).abs().mean()\ndist_7_sqr = ((a_3 - mean7)**2).mean().sqrt()\ndist_7_abs,dist_7_sqr\n\n(tensor(0.1586), tensor(0.3021))\n\n\nIn both cases, the distance between our 3 and the “ideal” 3 is less than the distance to the ideal 7. So our simple model will give the right prediction in this case.\nPyTorch already provides both of these as loss functions. You’ll find these inside torch.nn.functional, which the PyTorch team recommends importing as F (and is available by default under that name in fastai):\n\nF.l1_loss(a_3.float(),mean7), F.mse_loss(a_3,mean7).sqrt()\n\n(tensor(0.1586), tensor(0.3021))\n\n\nHere mse stands for mean squared error, and l1 refers to the standard mathematical jargon for mean absolute value (in math it’s called the L1 norm).\n\nS: Intuitively, the difference between L1 norm and mean squared error (MSE) is that the latter will penalize bigger mistakes more heavily than the former (and be more lenient with small mistakes).\n\n\nJ: When I first came across this “L1” thingie, I looked it up to see what on earth it meant. 
I found on Google that it is a vector norm using absolute value, so looked up vector norm and started reading: Given a vector space V over a field F of the real or complex numbers, a norm on V is a nonnegative-valued any function p: V → [0,+∞) with the following properties: For all a ∈ F and all u, v ∈ V, p(u + v) ≤ p(u) + p(v)… Then I stopped reading. “Ugh, I’ll never understand math!” I thought, for the thousandth time. Since then I’ve learned that every time these complex mathy bits of jargon come up in practice, it turns out I can replace them with a tiny bit of code! Like, the L1 loss is just equal to (a-b).abs().mean(), where a and b are tensors. I guess mathy folks just think differently than me… I’ll make sure in this book that every time some mathy jargon comes up, I’ll give you the little bit of code it’s equal to as well, and explain in common-sense terms what’s going on.\n\nWe just completed various mathematical operations on PyTorch tensors. If you’ve done some numeric programming in NumPy before, you may recognize these as being similar to NumPy arrays. Let’s have a look at those two very important data structures.\n\nNumPy Arrays and PyTorch Tensors\nNumPy is the most widely used library for scientific and numeric programming in Python. It provides very similar functionality and a very similar API to that provided by PyTorch; however, it does not support using the GPU or calculating gradients, which are both critical for deep learning. Therefore, in this book we will generally use PyTorch tensors instead of NumPy arrays, where possible.\n(Note that fastai adds some features to NumPy and PyTorch to make them a bit more similar to each other. If any code in this book doesn’t work on your computer, it’s possible that you forgot to include a line like this at the start of your notebook: from fastai.vision.all import *.)\nBut what are arrays and tensors, and why should you care?\nPython is slow compared to many languages. Anything fast in Python, NumPy, or PyTorch is likely to be a wrapper for a compiled object written (and optimized) in another language—specifically C. In fact, NumPy arrays and PyTorch tensors can finish computations many thousands of times faster than using pure Python.\nA NumPy array is a multidimensional table of data, with all items of the same type. Since that can be any type at all, they can even be arrays of arrays, with the innermost arrays potentially being different sizes—this is called a “jagged array.” By “multidimensional table” we mean, for instance, a list (dimension of one), a table or matrix (dimension of two), a “table of tables” or “cube” (dimension of three), and so forth. If the items are all of some simple type such as integer or float, then NumPy will store them as a compact C data structure in memory. This is where NumPy shines. NumPy has a wide variety of operators and methods that can run computations on these compact structures at the same speed as optimized C, because they are written in optimized C.\nA PyTorch tensor is nearly the same thing as a NumPy array, but with an additional restriction that unlocks some additional capabilities. It’s the same in that it, too, is a multidimensional table of data, with all items of the same type. However, the restriction is that a tensor cannot use just any old type—it has to use a single basic numeric type for all components. For example, a PyTorch tensor cannot be jagged. 
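If you want to see this restriction for yourself, here is a quick sketch you could try (the ragged list is made up purely for illustration); NumPy will hold the ragged data as an array of Python objects, while PyTorch refuses it:\n\nimport numpy as np, torch\nragged = [[1,2,3],[4,5]]\nnp.array(ragged, dtype=object) # an array holding two list objects of different lengths\ntry: torch.tensor(ragged)\nexcept Exception as e: print('jagged data is rejected:', e)\n\n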
It is always a regularly shaped multidimensional rectangular structure.\nThe vast majority of methods and operators supported by NumPy on these structures are also supported by PyTorch, but PyTorch tensors have additional capabilities. One major capability is that these structures can live on the GPU, in which case their computation will be optimized for the GPU and can run much faster (given lots of values to work on). In addition, PyTorch can automatically calculate derivatives of these operations, including combinations of operations. As you’ll see, it would be impossible to do deep learning in practice without this capability.\n\nS: If you don’t know what C is, don’t worry as you won’t need it at all. In a nutshell, it’s a low-level (low-level means more similar to the language that computers use internally) language that is very fast compared to Python. To take advantage of its speed while programming in Python, try to avoid as much as possible writing loops, and replace them by commands that work directly on arrays or tensors.\n\nPerhaps the most important new coding skill for a Python programmer to learn is how to effectively use the array/tensor APIs. We will be showing lots more tricks later in this book, but here’s a summary of the key things you need to know for now.\nTo create an array or tensor, pass a list (or list of lists, or list of lists of lists, etc.) to array() or tensor():\n\ndata = [[1,2,3],[4,5,6]]\narr = array (data)\ntns = tensor(data)\n\n\narr # numpy\n\narray([[1, 2, 3],\n [4, 5, 6]])\n\n\n\ntns # pytorch\n\ntensor([[1, 2, 3],\n [4, 5, 6]])\n\n\nAll the operations that follow are shown on tensors, but the syntax and results for NumPy arrays is identical.\nYou can select a row (note that, like lists in Python, tensors are 0-indexed so 1 refers to the second row/column):\n\ntns[1]\n\ntensor([4, 5, 6])\n\n\nor a column, by using : to indicate all of the first axis (we sometimes refer to the dimensions of tensors/arrays as axes):\n\ntns[:,1]\n\ntensor([2, 5])\n\n\nYou can combine these with Python slice syntax ([start:end] with end being excluded) to select part of a row or column:\n\ntns[1,1:3]\n\ntensor([5, 6])\n\n\nAnd you can use the standard operators such as +, -, *, /:\n\ntns+1\n\ntensor([[2, 3, 4],\n [5, 6, 7]])\n\n\nTensors have a type:\n\ntns.type()\n\n'torch.LongTensor'\n\n\nAnd will automatically change type as needed, for example from int to float:\n\ntns*1.5\n\ntensor([[1.5000, 3.0000, 4.5000],\n [6.0000, 7.5000, 9.0000]])\n\n\nSo, is our baseline model any good? To quantify this, we must define a metric.",
+ "crumbs": [
+ "The Details",
+ "4Under the Hood: Training a Digit Classifier"
+ ]
+ },
+ {
+ "objectID": "mnist_basics.html#computing-metrics-using-broadcasting",
+ "href": "mnist_basics.html#computing-metrics-using-broadcasting",
+ "title": "4 Under the Hood: Training a Digit Classifier",
+ "section": "4.5 Computing Metrics Using Broadcasting",
+ "text": "4.5 Computing Metrics Using Broadcasting\nRecall that a metric is a number that is calculated based on the predictions of our model, and the correct labels in our dataset, in order to tell us how good our model is. For instance, we could use either of the functions we saw in the previous section, mean squared error, or mean absolute error, and take the average of them over the whole dataset. However, neither of these are numbers that are very understandable to most people; in practice, we normally use accuracy as the metric for classification models.\nAs we’ve discussed, we want to calculate our metric over a validation set. This is so that we don’t inadvertently overfit—that is, train a model to work well only on our training data. This is not really a risk with the pixel similarity model we’re using here as a first try, since it has no trained components, but we’ll use a validation set anyway to follow normal practices and to be ready for our second try later.\nTo get a validation set we need to remove some of the data from training entirely, so it is not seen by the model at all. As it turns out, the creators of the MNIST dataset have already done this for us. Do you remember how there was a whole separate directory called valid? That’s what this directory is for!\nSo to start with, let’s create tensors for our 3s and 7s from that directory. These are the tensors we will use to calculate a metric measuring the quality of our first-try model, which measures distance from an ideal image:\n\nvalid_3_tens = torch.stack([tensor(Image.open(o)) \n for o in (path/'valid'/'3').ls()])\nvalid_3_tens = valid_3_tens.float()/255\nvalid_7_tens = torch.stack([tensor(Image.open(o)) \n for o in (path/'valid'/'7').ls()])\nvalid_7_tens = valid_7_tens.float()/255\nvalid_3_tens.shape,valid_7_tens.shape\n\n(torch.Size([1010, 28, 28]), torch.Size([1028, 28, 28]))\n\n\nIt’s good to get in the habit of checking shapes as you go. Here we see two tensors, one representing the 3s validation set of 1,010 images of size 28×28, and one representing the 7s validation set of 1,028 images of size 28×28.\nWe ultimately want to write a function, is_3, that will decide if an arbitrary image is a 3 or a 7. It will do this by deciding which of our two “ideal digits” this arbitrary image is closer to. For that we need to define a notion of distance—that is, a function that calculates the distance between two images.\nWe can write a simple function that calculates the mean absolute error using an expression very similar to the one we wrote in the last section:\n\ndef mnist_distance(a,b): return (a-b).abs().mean((-1,-2))\nmnist_distance(a_3, mean3)\n\ntensor(0.1114)\n\n\nThis is the same value we previously calculated for the distance between these two images, the ideal 3 mean3 and the arbitrary sample 3 a_3, which are both single-image tensors with a shape of [28,28].\nBut in order to calculate a metric for overall accuracy, we will need to calculate the distance to the ideal 3 for every image in the validation set. How do we do that calculation? We could write a loop over all of the single-image tensors that are stacked within our validation set tensor, valid_3_tens, which has a shape of [1010,28,28] representing 1,010 images. 
But there is a better way.\nSomething very interesting happens when we take this exact same distance function, designed for comparing two single images, but pass in as an argument valid_3_tens, the tensor that represents the 3s validation set:\n\nvalid_3_dist = mnist_distance(valid_3_tens, mean3)\nvalid_3_dist, valid_3_dist.shape\n\n(tensor([0.1634, 0.1145, 0.1363, ..., 0.1105, 0.1111, 0.1640]),\n torch.Size([1010]))\n\n\nInstead of complaining about shapes not matching, it returned the distance for every single image as a vector (i.e., a rank-1 tensor) of length 1,010 (the number of 3s in our validation set). How did that happen?\nTake another look at our function mnist_distance, and you’ll see we have there the subtraction (a-b). The magic trick is that PyTorch, when it tries to perform a simple subtraction operation between two tensors of different ranks, will use broadcasting. That is, it will automatically expand the tensor with the smaller rank to have the same size as the one with the larger rank. Broadcasting is an important capability that makes tensor code much easier to write.\nAfter broadcasting so the two argument tensors have the same rank, PyTorch applies its usual logic for two tensors of the same rank: it performs the operation on each corresponding element of the two tensors, and returns the tensor result. For instance:\n\ntensor([1,2,3]) + tensor(1)\n\ntensor([2, 3, 4])\n\n\nSo in this case, PyTorch treats mean3, a rank-2 tensor representing a single image, as if it were 1,010 copies of the same image, and then subtracts each of those copies from each 3 in our validation set. What shape would you expect this tensor to have? Try to figure it out yourself before you look at the answer below:\n\n(valid_3_tens-mean3).shape\n\ntorch.Size([1010, 28, 28])\n\n\nWe are calculating the difference between our “ideal 3” and each of the 1,010 3s in the validation set, for each of 28×28 images, resulting in the shape [1010,28,28].\nThere are a couple of important points about how broadcasting is implemented, which make it valuable not just for expressivity but also for performance:\n\nPyTorch doesn’t actually copy mean3 1,010 times. It pretends it were a tensor of that shape, but doesn’t actually allocate any additional memory\nIt does the whole calculation in C (or, if you’re using a GPU, in CUDA, the equivalent of C on the GPU), tens of thousands of times faster than pure Python (up to millions of times faster on a GPU!).\n\nThis is true of all broadcasting and elementwise operations and functions done in PyTorch. It’s the most important technique for you to know to create efficient PyTorch code.\nNext in mnist_distance we see abs. You might be able to guess now what this does when applied to a tensor. It applies the method to each individual element in the tensor, and returns a tensor of the results (that is, it applies the method “elementwise”). So in this case, we’ll get back 1,010 matrices of absolute values.\nFinally, our function calls mean((-1,-2)). The tuple (-1,-2) represents a range of axes. In Python, -1 refers to the last element, and -2 refers to the second-to-last. So in this case, this tells PyTorch that we want to take the mean ranging over the values indexed by the last two axes of the tensor. The last two axes are the horizontal and vertical dimensions of an image. After taking the mean over the last two axes, we are left with just the first tensor axis, which indexes over our images, which is why our final size was (1010). 
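To see the same pattern on something tiny, here is a made-up example with two 2×2 “images”; taking the mean over the last two axes leaves one number per image:\n\nt = tensor([[[1.,1.],[1.,1.]],[[2.,2.],[2.,2.]]]) # shape [2,2,2]: two tiny images\nt.mean((-1,-2)) # tensor([1., 2.]): one mean per image\n\n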
In other words, for every image, we averaged the intensity of all the pixels in that image.\nWe’ll be learning lots more about broadcasting throughout this book, especially in Chapter 17, and will be practicing it regularly too.\nWe can use mnist_distance to figure out whether an image is a 3 or not by using the following logic: if the distance between the digit in question and the ideal 3 is less than the distance to the ideal 7, then it’s a 3. This function will automatically do broadcasting and be applied elementwise, just like all PyTorch functions and operators:\n\ndef is_3(x): return mnist_distance(x,mean3) < mnist_distance(x,mean7)\n\nLet’s test it on our example case:\n\nis_3(a_3), is_3(a_3).float()\n\n(tensor(True), tensor(1.))\n\n\nNote that when we convert the Boolean response to a float, we get 1.0 for True and 0.0 for False. Thanks to broadcasting, we can also test it on the full validation set of 3s:\n\nis_3(valid_3_tens)\n\ntensor([True, True, True, ..., True, True, True])\n\n\nNow we can calculate the accuracy for each of the 3s and 7s by taking the average of that function for all 3s and its inverse for all 7s:\n\naccuracy_3s = is_3(valid_3_tens).float() .mean()\naccuracy_7s = (1 - is_3(valid_7_tens).float()).mean()\n\naccuracy_3s,accuracy_7s,(accuracy_3s+accuracy_7s)/2\n\n(tensor(0.9168), tensor(0.9854), tensor(0.9511))\n\n\nThis looks like a pretty good start! We’re getting over 90% accuracy on both 3s and 7s, and we’ve seen how to define a metric conveniently using broadcasting.\nBut let’s be honest: 3s and 7s are very different-looking digits. And we’re only classifying 2 out of the 10 possible digits so far. So we’re going to need to do better!\nTo do better, perhaps it is time to try a system that does some real learning—that is, that can automatically modify itself to improve its performance. In other words, it’s time to talk about the training process, and SGD.",
+ "crumbs": [
+ "The Details",
+ "4Under the Hood: Training a Digit Classifier"
+ ]
+ },
+ {
+ "objectID": "mnist_basics.html#stochastic-gradient-descent-sgd",
+ "href": "mnist_basics.html#stochastic-gradient-descent-sgd",
+ "title": "4 Under the Hood: Training a Digit Classifier",
+ "section": "4.6 Stochastic Gradient Descent (SGD)",
+ "text": "4.6 Stochastic Gradient Descent (SGD)\nDo you remember the way that Arthur Samuel described machine learning, which we quoted in Chapter 1?\n\nSuppose we arrange for some automatic means of testing the effectiveness of any current weight assignment in terms of actual performance and provide a mechanism for altering the weight assignment so as to maximize the performance. We need not go into the details of such a procedure to see that it could be made entirely automatic and to see that a machine so programmed would “learn” from its experience.\n\nAs we discussed, this is the key to allowing us to have a model that can get better and better—that can learn. But our pixel similarity approach does not really do this. We do not have any kind of weight assignment, or any way of improving based on testing the effectiveness of a weight assignment. In other words, we can’t really improve our pixel similarity approach by modifying a set of parameters. In order to take advantage of the power of deep learning, we will first have to represent our task in the way that Arthur Samuel described it.\nInstead of trying to find the similarity between an image and an “ideal image,” we could instead look at each individual pixel and come up with a set of weights for each one, such that the highest weights are associated with those pixels most likely to be black for a particular category. For instance, pixels toward the bottom right are not very likely to be activated for a 7, so they should have a low weight for a 7, but they are likely to be activated for an 8, so they should have a high weight for an 8. This can be represented as a function and set of weight values for each possible category—for instance the probability of being the number 8:\ndef pr_eight(x,w): return (x*w).sum()\nHere we are assuming that x is the image, represented as a vector—in other words, with all of the rows stacked up end to end into a single long line. And we are assuming that the weights are a vector w. If we have this function, then we just need some way to update the weights to make them a little bit better. With such an approach, we can repeat that step a number of times, making the weights better and better, until they are as good as we can make them.\nWe want to find the specific values for the vector w that causes the result of our function to be high for those images that are actually 8s, and low for those images that are not. Searching for the best vector w is a way to search for the best function for recognising 8s. (Because we are not yet using a deep neural network, we are limited by what our function can actually do—we are going to fix that constraint later in this chapter.)\nTo be more specific, here are the steps that we are going to require, to turn this function into a machine learning classifier:\n\nInitialize the weights.\nFor each image, use these weights to predict whether it appears to be a 3 or a 7.\nBased on these predictions, calculate how good the model is (its loss).\nCalculate the gradient, which measures for each weight, how changing that weight would change the loss\nStep (that is, change) all the weights based on that calculation.\nGo back to the step 2, and repeat the process.\nIterate until you decide to stop the training process (for instance, because the model is good enough or you don’t want to wait any longer).\n\nThese seven steps, illustrated in Figure 4.2, are the key to the training of all deep learning models. 
That deep learning turns out to rely entirely on these steps is extremely surprising and counterintuitive. It’s amazing that this process can solve such complex problems. But, as you’ll see, it really does!\n\n\n\n\n\n\n\n\nFigure 4.2: The gradient descent process\n\n\n\n\n\nThere are many different ways to do each of these seven steps, and we will be learning about them throughout the rest of this book. These are the details that make a big difference for deep learning practitioners, but it turns out that the general approach to each one follows some basic principles. Here are a few guidelines:\n\nInitialize:: We initialize the parameters to random values. This may sound surprising. There are certainly other choices we could make, such as initializing them to the percentage of times that pixel is activated for that category—but since we already know that we have a routine to improve these weights, it turns out that just starting with random weights works perfectly well.\nLoss:: This is what Samuel referred to when he spoke of testing the effectiveness of any current weight assignment in terms of actual performance. We need some function that will return a number that is small if the performance of the model is good (the standard approach is to treat a small loss as good, and a large loss as bad, although this is just a convention).\nStep:: A simple way to figure out whether a weight should be increased a bit, or decreased a bit, would be just to try it: increase the weight by a small amount, and see if the loss goes up or down. Once you find the correct direction, you could then change that amount by a bit more, and a bit less, until you find an amount that works well. However, this is slow! As we will see, the magic of calculus allows us to directly figure out in which direction, and by roughly how much, to change each weight, without having to try all these small changes. The way to do this is by calculating gradients. This is just a performance optimization; we would get exactly the same results by using the slower manual process as well.\nStop:: Once we’ve decided how many epochs to train the model for (a few suggestions for this were given in the earlier list), we apply that decision. For our digit classifier, we would keep training until the accuracy of the model started getting worse, or we ran out of time.\n\nBefore applying these steps to our image classification problem, let’s illustrate what they look like in a simpler case. First we will define a very simple function, the quadratic—let’s pretend that this is our loss function, and x is a weight parameter of the function:\n\ndef f(x): return x**2\n\nHere is a graph of that function:\n\nplot_function(f, 'x', 'x**2')\n\n\n\n\n\n\n\n\nThe sequence of steps we described earlier starts by picking some random value for a parameter, and calculating the value of the loss:\n\nplot_function(f, 'x', 'x**2')\nplt.scatter(-1.5, f(-1.5), color='red');\n\n\n\n\n\n\n\n\nNow we look to see what would happen if we increased or decreased our parameter by a little bit—the adjustment. This is simply the slope at a particular point:\n\nWe can change our weight by a little in the direction of the slope, calculate our loss and adjustment again, and repeat this a few times. Eventually, we will get to the lowest point on our curve:\n\nThis basic idea goes all the way back to Isaac Newton, who pointed out that we can optimize arbitrary functions in this way. 
Regardless of how complicated our functions become, this basic approach of gradient descent will not significantly change. The only minor changes we will see later in this book are some handy ways we can make it faster, by finding better steps.\n\nCalculating Gradients\nThe one magic step is the bit where we calculate the gradients. As we mentioned, we use calculus as a performance optimization; it allows us to more quickly calculate whether our loss will go up or down when we adjust our parameters up or down. In other words, the gradients will tell us how much we have to change each weight to make our model better.\nYou may remember from your high school calculus class that the derivative of a function tells you how much a change in its parameters will change its result. If not, don’t worry; lots of us forget calculus once high school is behind us! But you will have to have some intuitive understanding of what a derivative is before you continue, so if this is all very fuzzy in your head, head over to Khan Academy and complete the lessons on basic derivatives. You won’t have to know how to calculate them yourself; you just have to know what a derivative is.\nThe key point about a derivative is this: for any function, such as the quadratic function we saw in the previous section, we can calculate its derivative. The derivative is another function. It calculates the change, rather than the value. For instance, the derivative of the quadratic function at the value 3 tells us how rapidly the function changes at the value 3. More specifically, you may recall that the gradient is defined as rise/run, that is, the change in the value of the function, divided by the change in the value of the parameter. When we know how our function will change, we know what we need to do to make it smaller. This is the key to machine learning: having a way to change the parameters of a function to make it smaller. Calculus provides us with a computational shortcut, the derivative, which lets us directly calculate the gradients of our functions.\nOne important thing to be aware of is that our function has lots of weights that we need to adjust, so when we calculate the derivative we won’t get back one number, but lots of them—a gradient for every weight. But there is nothing mathematically tricky here; you can calculate the derivative with respect to one weight, and treat all the other ones as constant, then repeat that for each other weight. This is how all of the gradients are calculated, for every weight.\nWe mentioned just now that you won’t have to calculate any gradients yourself. How can that be? Amazingly enough, PyTorch is able to automatically compute the derivative of nearly any function! What’s more, it does it very fast. Most of the time, it will be at least as fast as any derivative function that you can create by hand. Let’s see an example.\nFirst, let’s pick a tensor value at which we want gradients:\n\nxt = tensor(3.).requires_grad_()\n\nNotice the special method requires_grad_? That’s the magical incantation we use to tell PyTorch that we want to calculate gradients with respect to that variable at that value. It is essentially tagging the variable, so PyTorch will remember to keep track of how to compute gradients of the other direct calculations on it that you will ask for.\n\na: This API might throw you off if you’re coming from math or physics. 
In those contexts the “gradient” of a function is just another function (i.e., its derivative), so you might expect gradient-related APIs to give you a new function. But in deep learning, “gradients” usually means the value of a function’s derivative at a particular argument value. The PyTorch API also puts the focus on the argument, not the function you’re actually computing the gradients of. It may feel backwards at first, but it’s just a different perspective.\n\nNow we calculate our function with that value. Notice how PyTorch prints not just the value calculated, but also a note that it has a gradient function it’ll be using to calculate our gradients when needed:\n\nyt = f(xt)\nyt\n\ntensor(9., grad_fn=<PowBackward0>)\n\n\nFinally, we tell PyTorch to calculate the gradients for us:\n\nyt.backward()\n\nThe “backward” here refers to backpropagation, which is the name given to the process of calculating the derivative of each layer. We’ll see exactly how this is done in Chapter 17, when we calculate the gradients of a deep neural net from scratch. This is called the “backward pass” of the network, as opposed to the “forward pass,” which is where the activations are calculated. Life would probably be easier if backward was just called calculate_grad, but deep learning folks really do like to add jargon everywhere they can!\nWe can now view the gradients by checking the grad attribute of our tensor:\n\nxt.grad\n\ntensor(6.)\n\n\nIf you remember your high school calculus rules, the derivative of x**2 is 2*x, and we have x=3, so the gradients should be 2*3=6, which is what PyTorch calculated for us!\nNow we’ll repeat the preceding steps, but with a vector argument for our function:\n\nxt = tensor([3.,4.,10.]).requires_grad_()\nxt\n\ntensor([ 3., 4., 10.], requires_grad=True)\n\n\nAnd we’ll add sum to our function so it can take a vector (i.e., a rank-1 tensor), and return a scalar (i.e., a rank-0 tensor):\n\ndef f(x): return (x**2).sum()\n\nyt = f(xt)\nyt\n\ntensor(125., grad_fn=<SumBackward0>)\n\n\nyt.backward()\nxt.grad\n\ntensor([ 6., 8., 20.])\n\n\nOur gradients are 2*xt, as we’d expect!\nThe gradients only tell us the slope of our function; they don’t actually tell us exactly how far to adjust the parameters. But they give us some idea of how far: if the slope is very large, that may suggest that we have more adjustments to do, whereas if the slope is very small, that may suggest that we are close to the optimal value.\n\n\nStepping With a Learning Rate\nDeciding how to change our parameters based on the values of the gradients is an important part of the deep learning process. Nearly all approaches start with the basic idea of multiplying the gradient by some small number, called the learning rate (LR). The learning rate is often a number between 0.001 and 0.1, although it could be anything. Often, people select a learning rate just by trying a few, and finding which results in the best model after training (we’ll show you a better approach later in this book, called the learning rate finder). Once you’ve picked a learning rate, you can adjust your parameters using this simple function:\nw -= gradient(w) * lr\nThis is known as stepping your parameters, using an optimizer step. Notice how we subtract the gradient * lr from the parameter to update it. This allows us to adjust the parameter in the direction of the slope by increasing the parameter when the slope is negative and decreasing the parameter when the slope is positive. 
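Putting the gradient and the learning rate together, here is a minimal sketch of a single such step on the quadratic from earlier (the learning rate of 0.1 is made up, and the .data update trick, which avoids tracking the step itself, is one we will meet again later in this chapter):\n\nw = tensor(3.).requires_grad_()\nloss = w**2 # pretend this is our loss\nloss.backward()\nlr = 0.1 # a made-up learning rate\nw.data -= lr * w.grad # the gradient is 2*w = 6, so w moves from 3.0 to 2.4: a step downhill toward 0\n\n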
We want to adjust our parameters in the direction of the slope because our goal in deep learning is to minimize the loss.\nIf you pick a learning rate that’s too low, it can mean having to do a lot of steps. Figure 4.3 illustrates that.\n\n\n\n\n\n\nFigure 4.3: Gradient descent with low LR\n\n\n\nBut picking a learning rate that’s too high is even worse—it can actually result in the loss getting worse, as we see in Figure 4.4!\n\n\n\n\n\n\nFigure 4.4: Gradient descent with high LR\n\n\n\nIf the learning rate is too high, it may also “bounce” around, rather than actually diverging; Figure 4.5 shows how this has the result of taking many steps to train successfully.\n\n\n\n\n\n\nFigure 4.5: Gradient descent with bouncy LR\n\n\n\nNow let’s apply all of this in an end-to-end example.\n\n\nAn End-to-End SGD Example\nWe’ve seen how to use gradients to find a minimum. Now it’s time to look at an SGD example and see how finding a minimum can be used to train a model to fit data better.\nLet’s start with a simple, synthetic, example model. Imagine you were measuring the speed of a roller coaster as it went over the top of a hump. It would start fast, and then get slower as it went up the hill; it would be slowest at the top, and it would then speed up again as it went downhill. You want to build a model of how the speed changes over time. If you were measuring the speed manually every second for 20 seconds, it might look something like this:\n\ntime = torch.arange(0,20).float(); time\n\ntensor([ 0., 1., 2., 3., 4., 5., 6., 7., 8., 9., 10., 11., 12., 13., 14., 15., 16., 17., 18., 19.])\n\n\n\nspeed = torch.randn(20)*3 + 0.75*(time-9.5)**2 + 1\nplt.scatter(time,speed);\n\n\n\n\n\n\n\n\nWe’ve added a bit of random noise, since measuring things manually isn’t precise. This means it’s not that easy to answer the question: what was the roller coaster’s speed? Using SGD we can try to find a function that matches our observations. We can’t consider every possible function, so let’s use a guess that it will be quadratic; i.e., a function of the form a*(time**2)+(b*time)+c.\nWe want to distinguish clearly between the function’s input (the time when we are measuring the coaster’s speed) and its parameters (the values that define which quadratic we’re trying). So, let’s collect the parameters in one argument and thus separate the input, t, and the parameters, params, in the function’s signature:\n\ndef f(t, params):\n a,b,c = params\n return a*(t**2) + (b*t) + c\n\nIn other words, we’ve restricted the problem of finding the best imaginable function that fits the data, to finding the best quadratic function. This greatly simplifies the problem, since every quadratic function is fully defined by the three parameters a, b, and c. Thus, to find the best quadratic function, we only need to find the best values for a, b, and c.\nIf we can solve this problem for the three parameters of a quadratic function, we’ll be able to apply the same approach for other, more complex functions with more parameters—such as a neural net. Let’s find the parameters for f first, and then we’ll come back and do the same thing for the MNIST dataset with a neural net.\nWe need to define first what we mean by “best.” We define this precisely by choosing a loss function, which will return a value based on a prediction and a target, where lower values of the function correspond to “better” predictions. 
It is important for loss functions to return lower values when predictions are more accurate, as the SGD procedure we defined earlier will try to minimize this loss. For continuous data, it’s common to use mean squared error:\n\ndef mse(preds, targets): return ((preds-targets)**2).mean()\n\nNow, let’s work through our 7 step process.\n\nStep 1: Initialize the parameters\nFirst, we initialize the parameters to random values, and tell PyTorch that we want to track their gradients, using requires_grad_:\n\nparams = torch.randn(3).requires_grad_()\n\n\n\nStep 2: Calculate the predictions\nNext, we calculate the predictions:\n\npreds = f(time, params)\n\nLet’s create a little function to see how close our predictions are to our targets, and take a look:\n\ndef show_preds(preds, ax=None):\n if ax is None: ax=plt.subplots()[1]\n ax.scatter(time, speed)\n ax.scatter(time, to_np(preds), color='red')\n ax.set_ylim(-300,100)\n\n\nshow_preds(preds)\n\n\n\n\n\n\n\n\nThis doesn’t look very close—our random parameters suggest that the roller coaster will end up going backwards, since we have negative speeds!\n\n\nStep 3: Calculate the loss\nWe calculate the loss as follows:\n\nloss = mse(preds, speed)\nloss\n\ntensor(25823.8086, grad_fn=<MeanBackward0>)\n\n\nOur goal is now to improve this. To do that, we’ll need to know the gradients.\n\n\nStep 4: Calculate the gradients\nThe next step is to calculate the gradients. In other words, calculate an approximation of how the parameters need to change:\n\nloss.backward()\nparams.grad\n\ntensor([-53195.8633, -3419.7148, -253.8908])\n\n\n\nparams.grad * 1e-5\n\ntensor([-0.5320, -0.0342, -0.0025])\n\n\nWe can use these gradients to improve our parameters. We’ll need to pick a learning rate (we’ll discuss how to do that in practice in the next chapter; for now we’ll just use 1e-5, or 0.00001):\n\nparams\n\ntensor([-0.7658, -0.7506, 1.3525], requires_grad=True)\n\n\n\n\nStep 5: Step the weights.\nNow we need to update the parameters based on the gradients we just calculated:\n\nlr = 1e-5\nparams.data -= lr * params.grad.data\nparams.grad = None\n\n\na: Understanding this bit depends on remembering recent history. To calculate the gradients we call backward on the loss. But this loss was itself calculated by mse, which in turn took preds as an input, which was calculated using f taking as an input params, which was the object on which we originally called requires_grad_—which is the original call that now allows us to call backward on loss. This chain of function calls represents the mathematical composition of functions, which enables PyTorch to use calculus’s chain rule under the hood to calculate these gradients.\n\nLet’s see if the loss has improved:\n\npreds = f(time,params)\nmse(preds, speed)\n\ntensor(5435.5356, grad_fn=<MeanBackward0>)\n\n\nAnd take a look at the plot:\n\nshow_preds(preds)\n\n\n\n\n\n\n\n\nWe need to repeat this a few times, so we’ll create a function to apply one step:\n\ndef apply_step(params, prn=True):\n preds = f(time, params)\n loss = mse(preds, speed)\n loss.backward()\n params.data -= lr * params.grad.data\n params.grad = None\n if prn: print(loss.item())\n return preds\n\n\n\nStep 6: Repeat the process\nNow we iterate. 
By looping and performing many improvements, we hope to reach a good result:\n\nfor i in range(10): apply_step(params)\n\n5435.53564453125\n1577.44921875\n847.3778076171875\n709.2225341796875\n683.0758056640625\n678.1243896484375\n677.1838989257812\n677.0023193359375\n676.9645385742188\n676.9537353515625\n\n\nThe loss is going down, just as we hoped! But looking only at these loss numbers disguises the fact that each iteration represents an entirely different quadratic function being tried, on the way to finding the best possible quadratic function. We can see this process visually if, instead of printing out the loss function, we plot the function at every step. Then we can see how the shape is approaching the best possible quadratic function for our data:\n\n_,axs = plt.subplots(1,4,figsize=(12,3))\nfor ax in axs: show_preds(apply_step(params, False), ax)\nplt.tight_layout()\n\n\n\n\n\n\n\n\n\n\nStep 7: stop\nWe just decided to stop after 10 epochs arbitrarily. In practice, we would watch the training and validation losses and our metrics to decide when to stop, as we’ve discussed.\n\n\n\nSummarizing Gradient Descent\n\n\n\n\n\n\n\n\nFigure 4.6: The gradient descent process\n\n\n\n\n\nTo summarize, at the beginning, the weights of our model can be random (training from scratch) or come from a pretrained model (transfer learning). In the first case, the output we will get from our inputs won’t have anything to do with what we want, and even in the second case, it’s very likely the pretrained model won’t be very good at the specific task we are targeting. So the model will need to learn better weights.\nWe begin by comparing the outputs the model gives us with our targets (we have labeled data, so we know what result the model should give) using a loss function, which returns a number that we want to make as low as possible by improving our weights. To do this, we take a few data items (such as images) from the training set and feed them to our model. We compare the corresponding targets using our loss function, and the score we get tells us how wrong our predictions were. We then change the weights a little bit to make it slightly better.\nTo find how to change the weights to make the loss a bit better, we use calculus to calculate the gradients. (Actually, we let PyTorch do it for us!) Let’s consider an analogy. Imagine you are lost in the mountains with your car parked at the lowest point. To find your way back to it, you might wander in a random direction, but that probably wouldn’t help much. Since you know your vehicle is at the lowest point, you would be better off going downhill. By always taking a step in the direction of the steepest downward slope, you should eventually arrive at your destination. We use the magnitude of the gradient (i.e., the steepness of the slope) to tell us how big a step to take; specifically, we multiply the gradient by a number we choose called the learning rate to decide on the step size. We then iterate until we have reached the lowest point, which will be our parking lot, then we can stop.\nAll of that we just saw can be transposed directly to the MNIST dataset, except for the loss function. Let’s now see how we can define a good training objective.",
+ "crumbs": [
+ "The Details",
+ "4Under the Hood: Training a Digit Classifier"
+ ]
+ },
+ {
+ "objectID": "mnist_basics.html#the-mnist-loss-function",
+ "href": "mnist_basics.html#the-mnist-loss-function",
+ "title": "4 Under the Hood: Training a Digit Classifier",
+ "section": "4.7 The MNIST Loss Function",
+ "text": "4.7 The MNIST Loss Function\nWe already have our independent variables x—these are the images themselves. We’ll concatenate them all into a single tensor, and also change them from a list of matrices (a rank-3 tensor) to a list of vectors (a rank-2 tensor). We can do this using view, which is a PyTorch method that changes the shape of a tensor without changing its contents. -1 is a special parameter to view that means “make this axis as big as necessary to fit all the data”:\n\ntrain_x = torch.cat([stacked_threes, stacked_sevens]).view(-1, 28*28)\n\nWe need a label for each image. We’ll use 1 for 3s and 0 for 7s:\n\ntrain_y = tensor([1]*len(threes) + [0]*len(sevens)).unsqueeze(1)\ntrain_x.shape,train_y.shape\n\n(torch.Size([12396, 784]), torch.Size([12396, 1]))\n\n\nA Dataset in PyTorch is required to return a tuple of (x,y) when indexed. Python provides a zip function which, when combined with list, provides a simple way to get this functionality:\n\ndset = list(zip(train_x,train_y))\nx,y = dset[0]\nx.shape,y\n\n(torch.Size([784]), tensor([1]))\n\n\n\nvalid_x = torch.cat([valid_3_tens, valid_7_tens]).view(-1, 28*28)\nvalid_y = tensor([1]*len(valid_3_tens) + [0]*len(valid_7_tens)).unsqueeze(1)\nvalid_dset = list(zip(valid_x,valid_y))\n\nNow we need an (initially random) weight for every pixel (this is the initialize step in our seven-step process):\n\ndef init_params(size, std=1.0): return (torch.randn(size)*std).requires_grad_()\n\n\nweights = init_params((28*28,1))\n\nThe function weights*pixels won’t be flexible enough—it is always equal to 0 when the pixels are equal to 0 (i.e., its intercept is 0). You might remember from high school math that the formula for a line is y=w*x+b; we still need the b. We’ll initialize it to a random number too:\n\nbias = init_params(1)\n\nIn neural networks, the w in the equation y=w*x+b is called the weights, and the b is called the bias. Together, the weights and bias make up the parameters.\n\njargon: Parameters: The weights and biases of a model. The weights are the w in the equation w*x+b, and the biases are the b in that equation.\n\nWe can now calculate a prediction for one image:\n\n(train_x[0]*weights.T).sum() + bias\n\ntensor([20.2336], grad_fn=<AddBackward0>)\n\n\nWhile we could use a Python for loop to calculate the prediction for each image, that would be very slow. Because Python loops don’t run on the GPU, and because Python is a slow language for loops in general, we need to represent as much of the computation in a model as possible using higher-level functions.\nIn this case, there’s an extremely convenient mathematical operation that calculates w*x for every row of a matrix—it’s called matrix multiplication. Figure 4.7 shows what matrix multiplication looks like.\n\n\n\n\n\n\nFigure 4.7: Matrix multiplication\n\n\n\nThis image shows two matrices, A and B, being multiplied together. Each item of the result, which we’ll call AB, contains each item of its corresponding row of A multiplied by each item of its corresponding column of B, added together. For instance, row 1, column 2 (the yellow dot with a red border) is calculated as \\(a_{1,1} * b_{1,2} + a_{1,2} * b_{2,2}\\). If you need a refresher on matrix multiplication, we suggest you take a look at the Intro to Matrix Multiplication on Khan Academy, since this is the most important mathematical operation in deep learning.\nIn Python, matrix multiplication is represented with the @ operator. 
Let’s try it:\n\ndef linear1(xb): return xb@weights + bias\npreds = linear1(train_x)\npreds\n\ntensor([[20.2336],\n [17.0644],\n [15.2384],\n ...,\n [18.3804],\n [23.8567],\n [28.6816]], grad_fn=<AddBackward0>)\n\n\nThe first element is the same as we calculated before, as we’d expect. This equation, batch@weights + bias, is one of the two fundamental equations of any neural network (the other one is the activation function, which we’ll see in a moment).\nLet’s check our accuracy. To decide if an output represents a 3 or a 7, we can just check whether it’s greater than 0.0, so our accuracy for each item can be calculated (using broadcasting, so no loops!) with:\n\ncorrects = (preds>0.0).float() == train_y\ncorrects\n\ntensor([[ True],\n [ True],\n [ True],\n ...,\n [False],\n [False],\n [False]])\n\n\n\ncorrects.float().mean().item()\n\n0.4912068545818329\n\n\nNow let’s see what the change in accuracy is for a small change in one of the weights (note that we have to ask PyTorch not to calculate gradients as we do this, which is what with torch.no_grad() is doing here):\n\nwith torch.no_grad(): weights[0] *= 1.0001\n\n\npreds = linear1(train_x)\n((preds>0.0).float() == train_y).float().mean().item()\n\n0.4912068545818329\n\n\nAs we’ve seen, we need gradients in order to improve our model using SGD, and in order to calculate gradients we need some loss function that represents how good our model is. That is because the gradients are a measure of how that loss function changes with small tweaks to the weights.\nSo, we need to choose a loss function. The obvious approach would be to use accuracy, which is our metric, as our loss function as well. In this case, we would calculate our prediction for each image, collect these values to calculate an overall accuracy, and then calculate the gradient of that overall accuracy with respect to each weight.\nUnfortunately, we have a significant technical problem here. The gradient of a function is its slope, or its steepness, which can be defined as rise over run—that is, how much the value of the function goes up or down, divided by how much we changed the input. We can write this mathematically as: (y_new - y_old) / (x_new - x_old). This gives us a good approximation of the gradient when x_new is very similar to x_old, meaning that their difference is very small. But accuracy only changes at all when a prediction changes from a 3 to a 7, or vice versa. The problem is that a small change in weights from x_old to x_new isn’t likely to cause any prediction to change, so (y_new - y_old) will almost always be 0. In other words, the gradient is 0 almost everywhere.\nA very small change in the value of a weight will often not actually change the accuracy at all. This means it is not useful to use accuracy as a loss function—if we do, most of the time our gradients will actually be 0, and the model will not be able to learn from that number.\n\nS: In mathematical terms, accuracy is a function that is constant almost everywhere (except at the threshold, 0.5), so its derivative is nil almost everywhere (and infinity at the threshold). This then gives gradients that are 0 or infinite, which are useless for updating the model.\n\nInstead, we need a loss function which, when our weights result in slightly better predictions, gives us a slightly better loss. So what does a “slightly better prediction” look like, exactly? 
Well, in this case, it means that if the correct answer is a 3 the score is a little higher, or if the correct answer is a 7 the score is a little lower.\nLet’s write such a function now. What form does it take?\nThe loss function receives not the images themselves, but the predictions from the model. Let’s make one argument, prds, of values between 0 and 1, where each value is the prediction that an image is a 3. It is a vector (i.e., a rank-1 tensor), indexed over the images.\nThe purpose of the loss function is to measure the difference between predicted values and the true values — that is, the targets (aka labels). Let’s make another argument, trgts, with values of 0 or 1 which tells whether an image actually is a 3 or not. It is also a vector (i.e., another rank-1 tensor), indexed over the images.\nSo, for instance, suppose we had three images which we knew were a 3, a 7, and a 3. And suppose our model predicted with high confidence (0.9) that the first was a 3, with slight confidence (0.4) that the second was a 7, and with fair confidence (0.2), but incorrectly, that the last was a 7. This would mean our loss function would receive these values as its inputs:\n\ntrgts = tensor([1,0,1])\nprds = tensor([0.9, 0.4, 0.2])\n\nHere’s a first try at a loss function that measures the distance between predictions and targets:\n\ndef mnist_loss(predictions, targets):\n return torch.where(targets==1, 1-predictions, predictions).mean()\n\nWe’re using a new function, torch.where(a,b,c). This is the same as running the list comprehension [b[i] if a[i] else c[i] for i in range(len(a))], except it works on tensors, at C/CUDA speed. In plain English, this function will measure how distant each prediction is from 1 if it should be 1, and how distant it is from 0 if it should be 0, and then it will take the mean of all those distances.\n\nnote: Read the Docs: It’s important to learn about PyTorch functions like this, because looping over tensors in Python performs at Python speed, not C/CUDA speed! Try running help(torch.where) now to read the docs for this function, or, better still, look it up on the PyTorch documentation site.\n\nLet’s try it on our prds and trgts:\n\ntorch.where(trgts==1, 1-prds, prds)\n\ntensor([0.1000, 0.4000, 0.8000])\n\n\nYou can see that this function returns a lower number when predictions are more accurate, when accurate predictions are more confident (higher absolute values), and when inaccurate predictions are less confident. In PyTorch, we always assume that a lower value of a loss function is better. Since we need a scalar for the final loss, mnist_loss takes the mean of the previous tensor:\n\nmnist_loss(prds,trgts)\n\ntensor(0.4333)\n\n\nFor instance, if we change our prediction for the one “false” target from 0.2 to 0.8 the loss will go down, indicating that this is a better prediction:\n\nmnist_loss(tensor([0.9, 0.4, 0.8]),trgts)\n\ntensor(0.2333)\n\n\nOne problem with mnist_loss as currently defined is that it assumes that predictions are always between 0 and 1. We need to ensure, then, that this is actually the case! As it happens, there is a function that does exactly that—let’s take a look.\n\nSigmoid\nThe sigmoid function always outputs a number between 0 and 1. It’s defined as follows:\n\ndef sigmoid(x): return 1/(1+torch.exp(-x))\n\nPytorch defines an accelerated version for us, so we don’t really need our own. This is an important function in deep learning, since we often want to ensure values are between 0 and 1. 
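For instance, feeding in a few made-up values, even very large or very negative ones, always gives back something strictly between 0 and 1:\n\ntorch.sigmoid(tensor([-100., -1., 0., 1., 100.])) # roughly 0.0000, 0.2689, 0.5000, 0.7311, 1.0000\n\n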
This is what it looks like:\n\nplot_function(torch.sigmoid, title='Sigmoid', min=-4, max=4)\n\n\n\n\n\n\n\n\nAs you can see, it takes any input value, positive or negative, and smooshes it onto an output value between 0 and 1. It’s also a smooth curve that only goes up, which makes it easier for SGD to find meaningful gradients.\nLet’s update mnist_loss to first apply sigmoid to the inputs:\n\ndef mnist_loss(predictions, targets):\n predictions = predictions.sigmoid()\n return torch.where(targets==1, 1-predictions, predictions).mean()\n\nNow we can be confident our loss function will work, even if the predictions are not between 0 and 1. All that is required is that a higher prediction corresponds to higher confidence an image is a 3.\nHaving defined a loss function, now is a good moment to recapitulate why we did this. After all, we already had a metric, which was overall accuracy. So why did we define a loss?\nThe key difference is that the metric is to drive human understanding and the loss is to drive automated learning. To drive automated learning, the loss must be a function that has a meaningful derivative. It can’t have big flat sections and large jumps, but instead must be reasonably smooth. This is why we designed a loss function that would respond to small changes in confidence level. This requirement means that sometimes it does not really reflect exactly what we are trying to achieve, but is rather a compromise between our real goal and a function that can be optimized using its gradient. The loss function is calculated for each item in our dataset, and then at the end of an epoch the loss values are all averaged and the overall mean is reported for the epoch.\nMetrics, on the other hand, are the numbers that we really care about. These are the values that are printed at the end of each epoch that tell us how our model is really doing. It is important that we learn to focus on these metrics, rather than the loss, when judging the performance of a model.\n\n\nSGD and Mini-Batches\nNow that we have a loss function that is suitable for driving SGD, we can consider some of the details involved in the next phase of the learning process, which is to change or update the weights based on the gradients. This is called an optimization step.\nIn order to take an optimization step we need to calculate the loss over one or more data items. How many should we use? We could calculate it for the whole dataset, and take the average, or we could calculate it for a single data item. But neither of these is ideal. Calculating it for the whole dataset would take a very long time. Calculating it for a single item would not use much information, so it would result in a very imprecise and unstable gradient. That is, you’d be going to the trouble of updating the weights, but taking into account only how that would improve the model’s performance on that single item.\nSo instead we take a compromise between the two: we calculate the average loss for a few data items at a time. This is called a mini-batch. The number of data items in the mini-batch is called the batch size. A larger batch size means that you will get a more accurate and stable estimate of your dataset’s gradients from the loss function, but it will take longer, and you will process fewer mini-batches per epoch. Choosing a good batch size is one of the decisions you need to make as a deep learning practitioner to train your model quickly and accurately. 
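To get a feel for the trade-off, here is a rough back-of-the-envelope sketch using the size of our 3s-and-7s training set from earlier (the candidate batch sizes are arbitrary):\n\nimport math\nn = 12396 # the number of training images we stacked up above\nfor bs in (1, 64, 256, n): print(bs, math.ceil(n/bs)) # weight updates per epoch: 12396, 194, 49, 1\n\n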
We will talk about how to make this choice throughout this book.\nAnother good reason for using mini-batches rather than calculating the gradient on individual data items is that, in practice, we nearly always do our training on an accelerator such as a GPU. These accelerators only perform well if they have lots of work to do at a time, so it’s helpful if we can give them lots of data items to work on. Using mini-batches is one of the best ways to do this. However, if you give them too much data to work on at once, they run out of memory—making GPUs happy is also tricky!\nAs we saw in our discussion of data augmentation in Chapter 2, we get better generalization if we can vary things during training. One simple and effective thing we can vary is what data items we put in each mini-batch. Rather than simply enumerating our dataset in order for every epoch, instead what we normally do is randomly shuffle it on every epoch, before we create mini-batches. PyTorch and fastai provide a class that will do the shuffling and mini-batch collation for you, called DataLoader.\nA DataLoader can take any Python collection and turn it into an iterator over mini-batches, like so:\n\ncoll = range(15)\ndl = DataLoader(coll, batch_size=5, shuffle=True)\nlist(dl)\n\n[tensor([ 3, 12, 8, 10, 2]),\n tensor([ 9, 4, 7, 14, 5]),\n tensor([ 1, 13, 0, 6, 11])]\n\n\nFor training a model, we don’t just want any Python collection, but a collection containing independent and dependent variables (that is, the inputs and targets of the model). A collection that contains tuples of independent and dependent variables is known in PyTorch as a Dataset. Here’s an example of an extremely simple Dataset:\n\nds = L(enumerate(string.ascii_lowercase))\nds\n\n(#26) [(0, 'a'),(1, 'b'),(2, 'c'),(3, 'd'),(4, 'e'),(5, 'f'),(6, 'g'),(7, 'h'),(8, 'i'),(9, 'j')...]\n\n\nWhen we pass a Dataset to a DataLoader we will get back mini-batches which are themselves tuples of tensors representing batches of independent and dependent variables:\n\ndl = DataLoader(ds, batch_size=6, shuffle=True)\nlist(dl)\n\n[(tensor([17, 18, 10, 22, 8, 14]), ('r', 's', 'k', 'w', 'i', 'o')),\n (tensor([20, 15, 9, 13, 21, 12]), ('u', 'p', 'j', 'n', 'v', 'm')),\n (tensor([ 7, 25, 6, 5, 11, 23]), ('h', 'z', 'g', 'f', 'l', 'x')),\n (tensor([ 1, 3, 0, 24, 19, 16]), ('b', 'd', 'a', 'y', 't', 'q')),\n (tensor([2, 4]), ('c', 'e'))]\n\n\nWe are now ready to write our first training loop for a model using SGD!",
+ "crumbs": [
+ "The Details",
+ "4Under the Hood: Training a Digit Classifier"
+ ]
+ },
+ {
+ "objectID": "mnist_basics.html#putting-it-all-together",
+ "href": "mnist_basics.html#putting-it-all-together",
+ "title": "4 Under the Hood: Training a Digit Classifier",
+ "section": "4.8 Putting It All Together",
+ "text": "4.8 Putting It All Together\nIt’s time to implement the process we saw in Figure 4.2. In code, our process will be implemented something like this for each epoch:\nfor x,y in dl:\n pred = model(x)\n loss = loss_func(pred, y)\n loss.backward()\n parameters -= parameters.grad * lr\nFirst, let’s re-initialize our parameters:\n\nweights = init_params((28*28,1))\nbias = init_params(1)\n\nA DataLoader can be created from a Dataset:\n\ndl = DataLoader(dset, batch_size=256)\nxb,yb = first(dl)\nxb.shape,yb.shape\n\n(torch.Size([256, 784]), torch.Size([256, 1]))\n\n\nWe’ll do the same for the validation set:\n\nvalid_dl = DataLoader(valid_dset, batch_size=256)\n\nLet’s create a mini-batch of size 4 for testing:\n\nbatch = train_x[:4]\nbatch.shape\n\ntorch.Size([4, 784])\n\n\n\npreds = linear1(batch)\npreds\n\ntensor([[-11.1002],\n [ 5.9263],\n [ 9.9627],\n [ -8.1484]], grad_fn=<AddBackward0>)\n\n\n\nloss = mnist_loss(preds, train_y[:4])\nloss\n\ntensor(0.5006, grad_fn=<MeanBackward0>)\n\n\nNow we can calculate the gradients:\n\nloss.backward()\nweights.grad.shape,weights.grad.mean(),bias.grad\n\n(torch.Size([784, 1]), tensor(-0.0001), tensor([-0.0008]))\n\n\nLet’s put that all in a function:\n\ndef calc_grad(xb, yb, model):\n preds = model(xb)\n loss = mnist_loss(preds, yb)\n loss.backward()\n\nand test it:\n\ncalc_grad(batch, train_y[:4], linear1)\nweights.grad.mean(),bias.grad\n\n(tensor(-0.0002), tensor([-0.0015]))\n\n\nBut look what happens if we call it twice:\n\ncalc_grad(batch, train_y[:4], linear1)\nweights.grad.mean(),bias.grad\n\n(tensor(-0.0003), tensor([-0.0023]))\n\n\nThe gradients have changed! The reason for this is that loss.backward actually adds the gradients of loss to any gradients that are currently stored. So, we have to set the current gradients to 0 first:\n\nweights.grad.zero_()\nbias.grad.zero_();\n\n\nnote: Inplace Operations: Methods in PyTorch whose names end in an underscore modify their objects in place. For instance, bias.zero_() sets all elements of the tensor bias to 0.\n\nOur only remaining step is to update the weights and biases based on the gradient and learning rate. When we do so, we have to tell PyTorch not to take the gradient of this step too—otherwise things will get very confusing when we try to compute the derivative at the next batch! If we assign to the data attribute of a tensor then PyTorch will not take the gradient of that step. Here’s our basic training loop for an epoch:\n\ndef train_epoch(model, lr, params):\n for xb,yb in dl:\n calc_grad(xb, yb, model)\n for p in params:\n p.data -= p.grad*lr\n p.grad.zero_()\n\nWe also want to check how we’re doing, by looking at the accuracy of the validation set. To decide if an output represents a 3 or a 7, we can just check whether it’s greater than 0. So our accuracy for each item can be calculated (using broadcasting, so no loops!) with:\n\n(preds>0.0).float() == train_y[:4]\n\ntensor([[False],\n [ True],\n [ True],\n [False]])\n\n\nThat gives us this function to calculate our validation accuracy:\n\ndef batch_accuracy(xb, yb):\n preds = xb.sigmoid()\n correct = (preds>0.5) == yb\n return correct.float().mean()\n\nWe can check it works:\n\nbatch_accuracy(linear1(batch), train_y[:4])\n\ntensor(0.5000)\n\n\nand then put the batches together:\n\ndef validate_epoch(model):\n accs = [batch_accuracy(model(xb), yb) for xb,yb in valid_dl]\n return round(torch.stack(accs).mean().item(), 4)\n\n\nvalidate_epoch(linear1)\n\n0.5219\n\n\nThat’s our starting point. 
Let’s train for one epoch, and see if the accuracy improves:\n\nlr = 1.\nparams = weights,bias\ntrain_epoch(linear1, lr, params)\nvalidate_epoch(linear1)\n\n0.6883\n\n\nThen do a few more:\n\nfor i in range(20):\n train_epoch(linear1, lr, params)\n print(validate_epoch(linear1), end=' ')\n\n0.8314 0.9017 0.9227 0.9349 0.9438 0.9501 0.9535 0.9564 0.9594 0.9618 0.9613 0.9638 0.9643 0.9652 0.9662 0.9677 0.9687 0.9691 0.9691 0.9696 \n\n\nLooking good! We’re already about at the same accuracy as our “pixel similarity” approach, and we’ve created a general-purpose foundation we can build on. Our next step will be to create an object that will handle the SGD step for us. In PyTorch, it’s called an optimizer.\n\nCreating an Optimizer\nBecause this is such a general foundation, PyTorch provides some useful classes to make it easier to implement. The first thing we can do is replace our linear1 function with PyTorch’s nn.Linear module. A module is an object of a class that inherits from the PyTorch nn.Module class. Objects of this class behave identically to standard Python functions, in that you can call them using parentheses and they will return the activations of a model.\nnn.Linear does the same thing as our init_params and linear together. It contains both the weights and biases in a single class. Here’s how we replicate our model from the previous section:\n\nlinear_model = nn.Linear(28*28,1)\n\nEvery PyTorch module knows what parameters it has that can be trained; they are available through the parameters method:\n\nw,b = linear_model.parameters()\nw.shape,b.shape\n\n(torch.Size([1, 784]), torch.Size([1]))\n\n\nWe can use this information to create an optimizer:\n\nclass BasicOptim:\n def __init__(self,params,lr): self.params,self.lr = list(params),lr\n\n def step(self, *args, **kwargs):\n for p in self.params: p.data -= p.grad.data * self.lr\n\n def zero_grad(self, *args, **kwargs):\n for p in self.params: p.grad = None\n\nWe can create our optimizer by passing in the model’s parameters:\n\nopt = BasicOptim(linear_model.parameters(), lr)\n\nOur training loop can now be simplified to:\n\ndef train_epoch(model):\n for xb,yb in dl:\n calc_grad(xb, yb, model)\n opt.step()\n opt.zero_grad()\n\nOur validation function doesn’t need to change at all:\n\nvalidate_epoch(linear_model)\n\n0.4157\n\n\nLet’s put our little training loop in a function, to make things simpler:\n\ndef train_model(model, epochs):\n for i in range(epochs):\n train_epoch(model)\n print(validate_epoch(model), end=' ')\n\nThe results are the same as in the previous section:\n\ntrain_model(linear_model, 20)\n\n0.4932 0.8618 0.8203 0.9102 0.9331 0.9468 0.9555 0.9629 0.9658 0.9673 0.9687 0.9707 0.9726 0.9751 0.9761 0.9761 0.9775 0.978 0.9785 0.9785 \n\n\nfastai provides the SGD class which, by default, does the same thing as our BasicOptim:\n\nlinear_model = nn.Linear(28*28,1)\nopt = SGD(linear_model.parameters(), lr)\ntrain_model(linear_model, 20)\n\n0.4932 0.852 0.8335 0.9116 0.9326 0.9473 0.9555 0.9624 0.9648 0.9668 0.9692 0.9712 0.9731 0.9746 0.9761 0.9765 0.9775 0.978 0.9785 0.9785 \n\n\nfastai also provides Learner.fit, which we can use instead of train_model. 
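If you prefer to stay in plain PyTorch, torch.optim.SGD plays the same role as our BasicOptim and fastai's SGD. Here is a rough sketch of one training step with it, using random stand-in data rather than the MNIST tensors above (the toy loss is ours, chosen only so the example runs on its own):

import torch
from torch import nn

model = nn.Linear(28*28, 1)
opt = torch.optim.SGD(model.parameters(), lr=1.0)

xb = torch.randn(256, 28*28)                  # stand-in mini-batch
yb = torch.randint(0, 2, (256, 1)).float()    # stand-in 0/1 targets

preds = model(xb)
loss = ((preds.sigmoid() - yb)**2).mean()     # any differentiable loss works here
loss.backward()
opt.step()        # p.data -= p.grad * lr for every parameter
opt.zero_grad()   # reset the gradients for the next batch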
To create a Learner we first need to create a DataLoaders, by passing in our training and validation DataLoaders:\n\ndls = DataLoaders(dl, valid_dl)\n\nTo create a Learner without using an application (such as vision_learner) we need to pass in all the elements that we’ve created in this chapter: the DataLoaders, the model, the optimization function (which will be passed the parameters), the loss function, and optionally any metrics to print:\n\nlearn = Learner(dls, nn.Linear(28*28,1), opt_func=SGD,\n loss_func=mnist_loss, metrics=batch_accuracy)\n\nNow we can call fit:\n\nlearn.fit(10, lr=lr)\n\n\n\n\nepoch\ntrain_loss\nvalid_loss\nbatch_accuracy\ntime\n\n\n\n\n0\n0.636857\n0.503549\n0.495584\n00:00\n\n\n1\n0.545725\n0.170281\n0.866045\n00:00\n\n\n2\n0.199223\n0.184893\n0.831207\n00:00\n\n\n3\n0.086580\n0.107836\n0.911187\n00:00\n\n\n4\n0.045185\n0.078481\n0.932777\n00:00\n\n\n5\n0.029108\n0.062792\n0.946516\n00:00\n\n\n6\n0.022560\n0.053017\n0.955348\n00:00\n\n\n7\n0.019687\n0.046500\n0.962218\n00:00\n\n\n8\n0.018252\n0.041929\n0.965162\n00:00\n\n\n9\n0.017402\n0.038573\n0.967615\n00:00\n\n\n\n\n\nAs you can see, there’s nothing magic about the PyTorch and fastai classes. They are just convenient pre-packaged pieces that make your life a bit easier! (They also provide a lot of extra functionality we’ll be using in future chapters.)\nWith these classes, we can now replace our linear model with a neural network.",
+ "crumbs": [
+ "The Details",
+ "4Under the Hood: Training a Digit Classifier"
+ ]
+ },
+ {
+ "objectID": "mnist_basics.html#adding-a-nonlinearity",
+ "href": "mnist_basics.html#adding-a-nonlinearity",
+ "title": "4 Under the Hood: Training a Digit Classifier",
+ "section": "4.9 Adding a Nonlinearity",
+ "text": "4.9 Adding a Nonlinearity\nSo far we have a general procedure for optimizing the parameters of a function, and we have tried it out on a very boring function: a simple linear classifier. A linear classifier is very constrained in terms of what it can do. To make it a bit more complex (and able to handle more tasks), we need to add something nonlinear between two linear classifiers—this is what gives us a neural network.\nHere is the entire definition of a basic neural network:\n\ndef simple_net(xb): \n res = xb@w1 + b1\n res = res.max(tensor(0.0))\n res = res@w2 + b2\n return res\n\nThat’s it! All we have in simple_net is two linear classifiers with a max function between them.\nHere, w1 and w2 are weight tensors, and b1 and b2 are bias tensors; that is, parameters that are initially randomly initialized, just like we did in the previous section:\n\nw1 = init_params((28*28,30))\nb1 = init_params(30)\nw2 = init_params((30,1))\nb2 = init_params(1)\n\nThe key point about this is that w1 has 30 output activations (which means that w2 must have 30 input activations, so they match). That means that the first layer can construct 30 different features, each representing some different mix of pixels. You can change that 30 to anything you like, to make the model more or less complex.\nThat little function res.max(tensor(0.0)) is called a rectified linear unit, also known as ReLU. We think we can all agree that rectified linear unit sounds pretty fancy and complicated… But actually, there’s nothing more to it than res.max(tensor(0.0))—in other words, replace every negative number with a zero. This tiny function is also available in PyTorch as F.relu:\n\nplot_function(F.relu)\n\n\n\n\n\n\n\n\n\nJ: There is an enormous amount of jargon in deep learning, including terms like rectified linear unit. The vast vast majority of this jargon is no more complicated than can be implemented in a short line of code, as we saw in this example. The reality is that for academics to get their papers published they need to make them sound as impressive and sophisticated as possible. One of the ways that they do that is to introduce jargon. Unfortunately, this has the result that the field ends up becoming far more intimidating and difficult to get into than it should be. You do have to learn the jargon, because otherwise papers and tutorials are not going to mean much to you. But that doesn’t mean you have to find the jargon intimidating. Just remember, when you come across a word or phrase that you haven’t seen before, it will almost certainly turn out to be referring to a very simple concept.\n\nThe basic idea is that by using more linear layers, we can have our model do more computation, and therefore model more complex functions. But there’s no point just putting one linear layer directly after another one, because when we multiply things together and then add them up multiple times, that could be replaced by multiplying different things together and adding them up just once! That is to say, a series of any number of linear layers in a row can be replaced with a single linear layer with a different set of parameters.\nBut if we put a nonlinear function between them, such as max, then this is no longer true. Now each linear layer is actually somewhat decoupled from the other ones, and can do its own useful work. The max function is particularly interesting, because it operates as a simple if statement.\n\nS: Mathematically, we say the composition of two linear functions is another linear function. 
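To see this concretely, here is a quick numerical check (our own sketch; the layer sizes are arbitrary) that two stacked linear layers collapse into a single equivalent one, while putting a ReLU between them breaks that equivalence:

import torch
from torch import nn

torch.manual_seed(0)
lin1, lin2 = nn.Linear(5, 4), nn.Linear(4, 3)
x = torch.randn(10, 5)

two_layers = lin2(lin1(x))

# Build the single equivalent layer by hand: W = W2 @ W1, b = W2 @ b1 + b2
W = lin2.weight @ lin1.weight
b = lin2.weight @ lin1.bias + lin2.bias
one_layer = x @ W.t() + b

print(torch.allclose(two_layers, one_layer, atol=1e-6))   # True

# With a ReLU in between, no single linear layer reproduces the output:
with_relu = lin2(torch.relu(lin1(x)))
print(torch.allclose(with_relu, one_layer, atol=1e-6))    # False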
So, we can stack as many linear classifiers as we want on top of each other, and without nonlinear functions between them, it will just be the same as one linear classifier.\n\nAmazingly enough, it can be mathematically proven that this little function can solve any computable problem to an arbitrarily high level of accuracy, if you can find the right parameters for w1 and w2 and if you make these matrices big enough. For any arbitrarily wiggly function, we can approximate it as a bunch of lines joined together; to make it closer to the wiggly function, we just have to use shorter lines. This is known as the universal approximation theorem. The three lines of code that we have here are known as layers. The first and third are known as linear layers, and the second line of code is known variously as a nonlinearity, or activation function.\nJust like in the previous section, we can replace this code with something a bit simpler, by taking advantage of PyTorch:\n\nsimple_net = nn.Sequential(\n nn.Linear(28*28,30),\n nn.ReLU(),\n nn.Linear(30,1)\n)\n\nnn.Sequential creates a module that will call each of the listed layers or functions in turn.\nnn.ReLU is a PyTorch module that does exactly the same thing as the F.relu function. Most functions that can appear in a model also have identical forms that are modules. Generally, it’s just a case of replacing F with nn and changing the capitalization. When using nn.Sequential, PyTorch requires us to use the module version. Since modules are classes, we have to instantiate them, which is why you see nn.ReLU() in this example.\nBecause nn.Sequential is a module, we can get its parameters, which will return a list of all the parameters of all the modules it contains. Let’s try it out! As this is a deeper model, we’ll use a lower learning rate and a few more epochs.\n\nlearn = Learner(dls, simple_net, opt_func=SGD,\n loss_func=mnist_loss, metrics=batch_accuracy)\n\n\nlearn.fit(40, 0.1)\n\nWe’re not showing the 40 lines of output here to save room; the training process is recorded in learn.recorder, with the table of output stored in the values attribute, so we can plot the accuracy over training as:\n\nplt.plot(L(learn.recorder.values).itemgot(2));\n\n\n\n\n\n\n\n\nAnd we can view the final accuracy:\n\nlearn.recorder.values[-1][2]\n\n0.982826292514801\n\n\nAt this point we have something that is rather magical:\n\nA function that can solve any problem to any level of accuracy (the neural network) given the correct set of parameters\nA way to find the best set of parameters for any function (stochastic gradient descent)\n\nThis is why deep learning can do things which seem rather magical, such fantastic things. Believing that this combination of simple techniques can really solve any problem is one of the biggest steps that we find many students have to take. It seems too good to be true—surely things should be more difficult and complicated than this? Our recommendation: try it out! We just tried it on the MNIST dataset and you have seen the results. And since we are doing everything from scratch ourselves (except for calculating the gradients) you know that there is no special magic hiding behind the scenes.\n\nGoing Deeper\nThere is no need to stop at just two linear layers. We can add as many as we want, as long as we add a nonlinearity between each pair of linear layers. As you will learn, however, the deeper the model gets, the harder it is to optimize the parameters in practice. 
Later in this book you will learn about some simple but brilliantly effective techniques for training deeper models.\nWe already know that a single nonlinearity with two linear layers is enough to approximate any function. So why would we use deeper models? The reason is performance. With a deeper model (that is, one with more layers) we do not need to use as many parameters; it turns out that we can use smaller matrices with more layers, and get better results than we would get with larger matrices, and few layers.\nThat means that we can train the model more quickly, and it will take up less memory. In the 1990s researchers were so focused on the universal approximation theorem that very few were experimenting with more than one nonlinearity. This theoretical but not practical foundation held back the field for years. Some researchers, however, did experiment with deep models, and eventually were able to show that these models could perform much better in practice. Eventually, theoretical results were developed which showed why this happens. Today, it is extremely unusual to find anybody using a neural network with just one nonlinearity.\nHere is what happens when we train an 18-layer model using the same approach we saw in Chapter 1:\n\ndls = ImageDataLoaders.from_folder(path)\nlearn = vision_learner(dls, resnet18, pretrained=False,\n loss_func=F.cross_entropy, metrics=accuracy)\nlearn.fit_one_cycle(1, 0.1)\n\n\n\n\nepoch\ntrain_loss\nvalid_loss\naccuracy\ntime\n\n\n\n\n0\n0.082089\n0.009578\n0.997056\n00:11\n\n\n\n\n\nNearly 100% accuracy! That’s a big difference compared to our simple neural net. But as you’ll learn in the remainder of this book, there are just a few little tricks you need to use to get such great results from scratch yourself. You already know the key foundational pieces. (Of course, even once you know all the tricks, you’ll nearly always want to work with the pre-built classes provided by PyTorch and fastai, because they save you having to think about all the little details yourself.)",
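As an aside on the parameters method mentioned above: because nn.Sequential is itself a module, a single call gathers every weight and bias of every layer it contains. A quick sketch (the counts are easy to verify by hand):

from torch import nn

simple_net = nn.Sequential(
    nn.Linear(28*28, 30),
    nn.ReLU(),
    nn.Linear(30, 1)
)

for p in simple_net.parameters():
    print(p.shape)
# torch.Size([30, 784]), torch.Size([30]), torch.Size([1, 30]), torch.Size([1])

n_params = sum(p.numel() for p in simple_net.parameters())
print(n_params)   # 784*30 + 30 + 30*1 + 1 = 23581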
+ "crumbs": [
+ "The Details",
+ "4Under the Hood: Training a Digit Classifier"
+ ]
+ },
+ {
+ "objectID": "mnist_basics.html#jargon-recap",
+ "href": "mnist_basics.html#jargon-recap",
+ "title": "4 Under the Hood: Training a Digit Classifier",
+ "section": "4.10 Jargon Recap",
+ "text": "4.10 Jargon Recap\nCongratulations: you now know how to create and train a deep neural network from scratch! We’ve gone through quite a few steps to get to this point, but you might be surprised at how simple it really is.\nNow that we are at this point, it is a good opportunity to define, and review, some jargon and key concepts.\nA neural network contains a lot of numbers, but they are only of two types: numbers that are calculated, and the parameters that these numbers are calculated from. This gives us the two most important pieces of jargon to learn:\n\nActivations:: Numbers that are calculated (both by linear and nonlinear layers)\nParameters:: Numbers that are randomly initialized, and optimized (that is, the numbers that define the model)\n\nWe will often talk in this book about activations and parameters. Remember that they have very specific meanings. They are numbers. They are not abstract concepts, but they are actual specific numbers that are in your model. Part of becoming a good deep learning practitioner is getting used to the idea of actually looking at your activations and parameters, and plotting them and testing whether they are behaving correctly.\nOur activations and parameters are all contained in tensors. These are simply regularly shaped arrays—for example, a matrix. Matrices have rows and columns; we call these the axes or dimensions. The number of dimensions of a tensor is its rank. There are some special tensors:\n\nRank zero: scalar\nRank one: vector\nRank two: matrix\n\nA neural network contains a number of layers. Each layer is either linear or nonlinear. We generally alternate between these two kinds of layers in a neural network. Sometimes people refer to both a linear layer and its subsequent nonlinearity together as a single layer. Yes, this is confusing. Sometimes a nonlinearity is referred to as an activation function.\nTable 4.1 summarizes the key concepts related to SGD.\n\n\n\nTable 4.1: Deep learning vocabulary\n\n\n\n\n\n\n\n\n\nTerm\nMeaning\n\n\n\n\nReLU\nFunction that returns 0 for negative numbers and doesn’t change positive numbers.\n\n\nMini-batch\nA small group of inputs and labels gathered together in two arrays. A gradient descent step is updated on this batch (rather than a whole epoch).\n\n\nForward pass\nApplying the model to some input and computing the predictions.\n\n\nLoss\nA value that represents how well (or badly) our model is doing.\n\n\nGradient\nThe derivative of the loss with respect to some parameter of the model.\n\n\nBackward pass\nComputing the gradients of the loss with respect to all model parameters.\n\n\nGradient descent\nTaking a step in the directions opposite to the gradients to make the model parameters a little bit better.\n\n\nLearning rate\nThe size of the step we take when applying SGD to update the parameters of the model.\n\n\n\n\n\n\n\nnote: Choose Your Own Adventure Reminder: Did you choose to skip over chapters 2 & 3, in your excitement to peek under the hood? Well, here’s your reminder to head back to chapter 2 now, because you’ll be needing to know that stuff very soon!",
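The rank definitions above are easy to check in code: rank is just the number of axes, which PyTorch reports as .ndim (a small illustrative sketch):

import torch

scalar = torch.tensor(3.14)                    # rank 0
vector = torch.tensor([1., 2., 3.])            # rank 1
matrix = torch.tensor([[1., 2.], [3., 4.]])    # rank 2

print(scalar.ndim, vector.ndim, matrix.ndim)   # 0 1 2
print(len(matrix.shape))                       # 2 -- the rank is the length of the shape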
+ "crumbs": [
+ "The Details",
+ "4Under the Hood: Training a Digit Classifier"
+ ]
+ },
+ {
+ "objectID": "mnist_basics.html#questionnaire",
+ "href": "mnist_basics.html#questionnaire",
+ "title": "4 Under the Hood: Training a Digit Classifier",
+ "section": "4.11 Questionnaire",
+ "text": "4.11 Questionnaire\n\nHow is a grayscale image represented on a computer? How about a color image?\nHow are the files and folders in the MNIST_SAMPLE dataset structured? Why?\nExplain how the “pixel similarity” approach to classifying digits works.\nWhat is a list comprehension? Create one now that selects odd numbers from a list and doubles them.\nWhat is a “rank-3 tensor”?\nWhat is the difference between tensor rank and shape? How do you get the rank from the shape?\nWhat are RMSE and L1 norm?\nHow can you apply a calculation on thousands of numbers at once, many thousands of times faster than a Python loop?\nCreate a 3×3 tensor or array containing the numbers from 1 to 9. Double it. Select the bottom-right four numbers.\nWhat is broadcasting?\nAre metrics generally calculated using the training set, or the validation set? Why?\nWhat is SGD?\nWhy does SGD use mini-batches?\nWhat are the seven steps in SGD for machine learning?\nHow do we initialize the weights in a model?\nWhat is “loss”?\nWhy can’t we always use a high learning rate?\nWhat is a “gradient”?\nDo you need to know how to calculate gradients yourself?\nWhy can’t we use accuracy as a loss function?\nDraw the sigmoid function. What is special about its shape?\nWhat is the difference between a loss function and a metric?\nWhat is the function to calculate new weights using a learning rate?\nWhat does the DataLoader class do?\nWrite pseudocode showing the basic steps taken in each epoch for SGD.\nCreate a function that, if passed two arguments [1,2,3,4] and 'abcd', returns [(1, 'a'), (2, 'b'), (3, 'c'), (4, 'd')]. What is special about that output data structure?\nWhat does view do in PyTorch?\nWhat are the “bias” parameters in a neural network? Why do we need them?\nWhat does the @ operator do in Python?\nWhat does the backward method do?\nWhy do we have to zero the gradients?\nWhat information do we have to pass to Learner?\nShow Python or pseudocode for the basic steps of a training loop.\nWhat is “ReLU”? Draw a plot of it for values from -2 to +2.\nWhat is an “activation function”?\nWhat’s the difference between F.relu and nn.ReLU?\nThe universal approximation theorem shows that any function can be approximated as closely as needed using just one nonlinearity. So why do we normally use more?\n\n\nFurther Research\n\nCreate your own implementation of Learner from scratch, based on the training loop shown in this chapter.\nComplete all the steps in this chapter using the full MNIST datasets (that is, for all digits, not just 3s and 7s). This is a significant project and will take you quite a bit of time to complete! You’ll need to do some of your own research to figure out how to overcome some obstacles you’ll meet on the way.",
+ "crumbs": [
+ "The Details",
+ "4Under the Hood: Training a Digit Classifier"
+ ]
+ },
+ {
+ "objectID": "convolutions.html",
+ "href": "convolutions.html",
+ "title": "13 Convolutional Neural Networks",
+ "section": "",
+ "text": "13.1 The Magic of Convolutions\nOne of the most powerful tools that machine learning practitioners have at their disposal is feature engineering. A feature is a transformation of the data which is designed to make it easier to model. For instance, the add_datepart function that we used for our tabular dataset preprocessing in Chapter 9 added date features to the Bulldozers dataset. What kinds of features might we be able to create from images?\nIn the context of an image, a feature is a visually distinctive attribute. For example, the number 7 is characterized by a horizontal edge near the top of the digit, and a top-right to bottom-left diagonal edge underneath that. On the other hand, the number 3 is characterized by a diagonal edge in one direction at the top left and bottom right of the digit, the opposite diagonal at the bottom left and top right, horizontal edges at the middle, top, and bottom, and so forth. So what if we could extract information about where the edges occur in each image, and then use that information as our features, instead of raw pixels?\nIt turns out that finding the edges in an image is a very common task in computer vision, and is surprisingly straightforward. To do it, we use something called a convolution. A convolution requires nothing more than multiplication, and addition—two operations that are responsible for the vast majority of work that we will see in every single deep learning model in this book!\nA convolution applies a kernel across an image. A kernel is a little matrix, such as the 3×3 matrix in the top right of Figure 13.1.\nThe 7×7 grid to the left is the image we’re going to apply the kernel to. The convolution operation multiplies each element of the kernel by each element of a 3×3 block of the image. The results of these multiplications are then added together. The diagram in Figure 13.1 shows an example of applying a kernel to a single location in the image, the 3×3 block around cell 18.\nLet’s do this with code. First, we create a little 3×3 matrix like so:\ntop_edge = tensor([[-1,-1,-1],\n [ 0, 0, 0],\n [ 1, 1, 1]]).float()\nWe’re going to call this our kernel (because that’s what fancy computer vision researchers call these). And we’ll need an image, of course:\npath = untar_data(URLs.MNIST_SAMPLE)\nim3 = Image.open(path/'train'/'3'/'12.png')\nshow_image(im3);\nNow we’re going to take the top 3×3-pixel square of our image, and multiply each of those values by each item in our kernel. Then we’ll add them up, like so:\nim3_t = tensor(im3)\nim3_t[0:3,0:3] * top_edge\n\ntensor([[-0., -0., -0.],\n [0., 0., 0.],\n [0., 0., 0.]])\n(im3_t[0:3,0:3] * top_edge).sum()\n\ntensor(0.)\nNot very interesting so far—all the pixels in the top-left corner are white. But let’s pick a couple of more interesting spots:\ndf = pd.DataFrame(im3_t[:10,:20])\ndf.style.set_properties(**{'font-size':'6pt'}).background_gradient('Greys')\nThere’s a top edge at cell 5,8. Let’s repeat our calculation there:\n(im3_t[4:7,6:9] * top_edge).sum()\n\ntensor(762.)\nThere’s a right edge at cell 8,18. What does that give us?:\n(im3_t[7:10,17:20] * top_edge).sum()\n\ntensor(-29.)\nAs you can see, this little calculation is returning a high number where the 3×3-pixel square represents a top edge (i.e., where there are low values at the top of the square, and high values immediately underneath). That’s because the -1 values in our kernel have little impact in that case, but the 1 values have a lot.\nLet’s look a tiny bit at the math. 
The filter will take any window of size 3×3 in our images, and if we name the pixel values like this:\n\\[\\begin{matrix} a1 & a2 & a3 \\\\ a4 & a5 & a6 \\\\ a7 & a8 & a9 \\end{matrix}\\]\nit will return \\(-a1-a2-a3+a7+a8+a9\\). If we are in a part of the image where \\(a1\\), \\(a2\\), and \\(a3\\) add up to the same as \\(a7\\), \\(a8\\), and \\(a9\\), then the terms will cancel each other out and we will get 0. However, if \\(a7\\) is greater than \\(a1\\), \\(a8\\) is greater than \\(a2\\), and \\(a9\\) is greater than \\(a3\\), we will get a bigger number as a result. So this filter detects horizontal edges—more precisely, edges where we go from bright parts of the image at the top to darker parts at the bottom.\nChanging our filter to have the row of 1s at the top and the -1s at the bottom would detect horizontal edges that go from dark to light. Putting the 1s and -1s in columns versus rows would give us filters that detect vertical edges. Each set of weights will produce a different kind of outcome.\nLet’s create a function to do this for one location, and check it matches our result from before:\ndef apply_kernel(row, col, kernel):\n return (im3_t[row-1:row+2,col-1:col+2] * kernel).sum()\napply_kernel(5,7,top_edge)\n\ntensor(762.)\nBut note that we can’t apply it to the corner (e.g., location 0,0), since there isn’t a complete 3×3 square there.",
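The algebra above is easy to check numerically. Here is a standalone sketch (with a random 3×3 window rather than the MNIST image) confirming that the kernel multiplication-and-sum matches the formula -a1-a2-a3+a7+a8+a9:

import torch

top_edge = torch.tensor([[-1,-1,-1],
                         [ 0, 0, 0],
                         [ 1, 1, 1]]).float()

window = torch.rand(3, 3)        # stands in for the pixels a1..a9
a1, a2, a3 = window[0]
a7, a8, a9 = window[2]

by_kernel  = (window * top_edge).sum()
by_formula = -a1 - a2 - a3 + a7 + a8 + a9
print(torch.isclose(by_kernel, by_formula))   # tensor(True)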
+ "crumbs": [
+ "From the Foundations",
+ "13Convolutional Neural Networks"
+ ]
+ },
+ {
+ "objectID": "convolutions.html#the-magic-of-convolutions",
+ "href": "convolutions.html#the-magic-of-convolutions",
+ "title": "13 Convolutional Neural Networks",
+ "section": "",
+ "text": "jargon: Feature engineering: Creating new transformations of the input data in order to make it easier to model.\n\n\n\n\n\n\n\n\n\n\nFigure 13.1: Applying a kernel to one location\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nMapping a Convolution Kernel\nWe can map apply_kernel() across the coordinate grid. That is, we’ll be taking our 3×3 kernel, and applying it to each 3×3 section of our image. For instance, Figure 13.2 shows the positions a 3×3 kernel can be applied to in the first row of a 5×5 image.\n\n\n\n\n\n\nFigure 13.2: Applying a kernel across a grid\n\n\n\nTo get a grid of coordinates we can use a nested list comprehension, like so:\n\n[[(i,j) for j in range(1,5)] for i in range(1,5)]\n\n[[(1, 1), (1, 2), (1, 3), (1, 4)],\n [(2, 1), (2, 2), (2, 3), (2, 4)],\n [(3, 1), (3, 2), (3, 3), (3, 4)],\n [(4, 1), (4, 2), (4, 3), (4, 4)]]\n\n\n\nnote: Nested List Comprehensions: Nested list comprehensions are used a lot in Python, so if you haven’t seen them before, take a few minutes to make sure you understand what’s happening here, and experiment with writing your own nested list comprehensions.\n\nHere’s the result of applying our kernel over a coordinate grid:\n\nrng = range(1,27)\ntop_edge3 = tensor([[apply_kernel(i,j,top_edge) for j in rng] for i in rng])\n\nshow_image(top_edge3);\n\n\n\n\n\n\n\n\nLooking good! Our top edges are black, and bottom edges are white (since they are the opposite of top edges). Now that our image contains negative numbers too, matplotlib has automatically changed our colors so that white is the smallest number in the image, black the highest, and zeros appear as gray.\nWe can try the same thing for left edges:\n\nleft_edge = tensor([[-1,1,0],\n [-1,1,0],\n [-1,1,0]]).float()\n\nleft_edge3 = tensor([[apply_kernel(i,j,left_edge) for j in rng] for i in rng])\n\nshow_image(left_edge3);\n\n\n\n\n\n\n\n\nAs we mentioned before, a convolution is the operation of applying such a kernel over a grid in this way. In the paper “A Guide to Convolution Arithmetic for Deep Learning” there are many great diagrams showing how image kernels can be applied. Here’s an example from the paper showing (at the bottom) a light blue 4×4 image, with a dark blue 3×3 kernel being applied, creating a 2×2 green output activation map at the top.\n\n\n\n\n\n\nFigure 13.3: Result of applying a 3×3 kernel to a 4×4 image (courtesy of Vincent Dumoulin and Francesco Visin)\n\n\n\nLook at the shape of the result. If the original image has a height of h and a width of w, how many 3×3 windows can we find? As you can see from the example, there are h-2 by w-2 windows, so the image we get has a result as a height of h-2 and a width of w-2.\nWe won’t implement this convolution function from scratch, but use PyTorch’s implementation instead (it is way faster than anything we could do in Python).\n\n\nConvolutions in PyTorch\nConvolution is such an important and widely used operation that PyTorch has it built in. It’s called F.conv2d (recall that F is a fastai import from torch.nn.functional, as recommended by PyTorch). The PyTorch docs tell us that it includes these parameters:\n\ninput:: input tensor of shape (minibatch, in_channels, iH, iW)\nweight:: filters of shape (out_channels, in_channels, kH, kW)\n\nHere iH,iW is the height and width of the image (i.e., 28,28), and kH,kW is the height and width of our kernel (3,3). 
But apparently PyTorch is expecting rank-4 tensors for both these arguments, whereas currently we only have rank-2 tensors (i.e., matrices, or arrays with two axes).\nThe reason for these extra axes is that PyTorch has a few tricks up its sleeve. The first trick is that PyTorch can apply a convolution to multiple images at the same time. That means we can call it on every item in a batch at once!\nThe second trick is that PyTorch can apply multiple kernels at the same time. So let’s create the diagonal-edge kernels too, and then stack all four of our edge kernels into a single tensor:\n\ndiag1_edge = tensor([[ 0,-1, 1],\n [-1, 1, 0],\n [ 1, 0, 0]]).float()\ndiag2_edge = tensor([[ 1,-1, 0],\n [ 0, 1,-1],\n [ 0, 0, 1]]).float()\n\nedge_kernels = torch.stack([left_edge, top_edge, diag1_edge, diag2_edge])\nedge_kernels.shape\n\ntorch.Size([4, 3, 3])\n\n\nTo test this, we’ll need a DataLoader and a sample mini-batch. Let’s use the data block API:\n\nmnist = DataBlock((ImageBlock(cls=PILImageBW), CategoryBlock), \n get_items=get_image_files, \n splitter=GrandparentSplitter(),\n get_y=parent_label)\n\ndls = mnist.dataloaders(path)\nxb,yb = first(dls.valid)\nxb.shape\n\ntorch.Size([64, 1, 28, 28])\n\n\nBy default, fastai puts data on the GPU when using data blocks. Let’s move it to the CPU for our examples:\n\nxb,yb = to_cpu(xb),to_cpu(yb)\n\nOne batch contains 64 images, each of 1 channel, with 28×28 pixels. F.conv2d can handle multichannel (i.e., color) images too. A channel is a single basic color in an image—for regular full-color images there are three channels, red, green, and blue. PyTorch represents an image as a rank-3 tensor, with dimensions [channels, rows, columns].\nWe’ll see how to handle more than one channel later in this chapter. Kernels passed to F.conv2d need to be rank-4 tensors: [features_out, channels_in, rows, columns]. edge_kernels is currently missing one of these. We need to tell PyTorch that the number of input channels in the kernel is one, which we can do by inserting an axis of size one (this is known as a unit axis) in the second position, where the PyTorch docs show in_channels is expected. To insert a unit axis into a tensor, we use the unsqueeze method:\n\nedge_kernels.shape,edge_kernels.unsqueeze(1).shape\n\n(torch.Size([4, 3, 3]), torch.Size([4, 1, 3, 3]))\n\n\nThis is now the correct shape for edge_kernels. Let’s pass this all to conv2d:\n\nedge_kernels = edge_kernels.unsqueeze(1)\n\n\nbatch_features = F.conv2d(xb, edge_kernels)\nbatch_features.shape\n\ntorch.Size([64, 4, 26, 26])\n\n\nThe output shape shows we gave 64 images in the mini-batch, 4 kernels, and 26×26 edge maps (we started with 28×28 images, but lost one pixel from each side as discussed earlier). We can see we get the same results as when we did this manually:\n\nshow_image(batch_features[0,0]);\n\n\n\n\n\n\n\n\nThe most important trick that PyTorch has up its sleeve is that it can use the GPU to do all this work in parallel—that is, applying multiple kernels, to multiple images, across multiple channels. Doing lots of work in parallel is critical to getting GPUs to work efficiently; if we did each of these operations one at a time, we’d often run hundreds of times slower (and if we used our manual convolution loop from the previous section, we’d be millions of times slower!). Therefore, to become a strong deep learning practitioner, one skill to practice is giving your GPU plenty of work to do at a time.\nIt would be nice to not lose those two pixels on each axis. 
The way we do that is to add padding, which is simply additional pixels added around the outside of our image. Most commonly, pixels of zeros are added.\n\n\nStrides and Padding\nWith appropriate padding, we can ensure that the output activation map is the same size as the original image, which can make things a lot simpler when we construct our architectures. Figure 13.4 shows how adding padding allows us to apply the kernels in the image corners.\n\n\n\n\n\n\nFigure 13.4: A convolution with padding\n\n\n\nWith a 5×5 input, 4×4 kernel, and 2 pixels of padding, we end up with a 6×6 activation map, as we can see in Figure 13.5.\n\n\n\n\n\n\nFigure 13.5: A 4×4 kernel with 5×5 input and 2 pixels of padding (courtesy of Vincent Dumoulin and Francesco Visin)\n\n\n\nIf we add a kernel of size ks by ks (with ks an odd number), the necessary padding on each side to keep the same shape is ks//2. An even number for ks would require a different amount of padding on the top/bottom and left/right, but in practice we almost never use an even filter size.\nSo far, when we have applied the kernel to the grid, we have moved it one pixel over at a time. But we can jump further; for instance, we could move over two pixels after each kernel application, as in Figure 13.6. This is known as a stride-2 convolution. The most common kernel size in practice is 3×3, and the most common padding is 1. As you’ll see, stride-2 convolutions are useful for decreasing the size of our outputs, and stride-1 convolutions are useful for adding layers without changing the output size.\n\n\n\n\n\n\nFigure 13.6: A 3×3 kernel with 5×5 input, stride-2 convolution, and 1 pixel of padding (courtesy of Vincent Dumoulin and Francesco Visin)\n\n\n\nIn an image of size h by w, using a padding of 1 and a stride of 2 will give us a result of size (h+1)//2 by (w+1)//2. The general formula for each dimension is (n + 2*pad - ks)//stride + 1, where pad is the padding, ks, the size of our kernel, and stride is the stride.\nLet’s now take a look at how the pixel values of the result of our convolutions are computed.\n\n\nUnderstanding the Convolution Equations\nTo explain the math behind convolutions, fast.ai student Matt Kleinsmith came up with the very clever idea of showing CNNs from different viewpoints. In fact, it’s so clever, and so helpful, we’re going to show it here too!\nHere’s our 3×3 pixel image, with each pixel labeled with a letter:\n\nAnd here’s our kernel, with each weight labeled with a Greek letter:\n\nSince the filter fits in the image four times, we have four results:\n\nFigure 13.7 shows how we applied the kernel to each section of the image to yield each result.\n\n\n\n\n\n\nFigure 13.7: Applying the kernel\n\n\n\nThe equation view is in Figure 13.8.\n\n\n\n\n\n\nFigure 13.8: The equation\n\n\n\nNotice that the bias term, b, is the same for each section of the image. You can consider the bias as part of the filter, just like the weights (α, β, γ, δ) are part of the filter.\nHere’s an interesting insight—a convolution can be represented as a special kind of matrix multiplication, as illustrated in Figure 13.9. The weight matrix is just like the ones from traditional neural networks. However, this weight matrix has two special properties:\n\nThe zeros shown in gray are untrainable. This means that they’ll stay zero throughout the optimization process.\nSome of the weights are equal, and while they are trainable (i.e., changeable), they must remain equal. 
These are called shared weights.\n\nThe zeros correspond to the pixels that the filter can’t touch. Each row of the weight matrix corresponds to one application of the filter.\n\n\n\n\n\n\nFigure 13.9: Convolution as matrix multiplication\n\n\n\nNow that we understand what a convolution is, let’s use them to build a neural net.",
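One more quick check before we do: the output-size formula (n + 2*pad - ks)//stride + 1 from the strides and padding discussion agrees with what F.conv2d actually produces. This is our own sketch, run on a random single-channel image:

import torch
import torch.nn.functional as F

def conv_out_size(n, ks, stride, pad):
    return (n + 2*pad - ks)//stride + 1

x = torch.randn(1, 1, 28, 28)    # one 28x28 single-channel image
k = torch.randn(1, 1, 3, 3)      # one 3x3 kernel

for stride, pad in [(1, 0), (1, 1), (2, 1)]:
    out = F.conv2d(x, k, stride=stride, padding=pad)
    print(out.shape[-1], conv_out_size(28, 3, stride, pad))
# 26 26, then 28 28, then 14 14: the formula and PyTorch agree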
+ "crumbs": [
+ "From the Foundations",
+ "13Convolutional Neural Networks"
+ ]
+ },
+ {
+ "objectID": "convolutions.html#our-first-convolutional-neural-network",
+ "href": "convolutions.html#our-first-convolutional-neural-network",
+ "title": "13 Convolutional Neural Networks",
+ "section": "13.2 Our First Convolutional Neural Network",
+ "text": "13.2 Our First Convolutional Neural Network\nThere is no reason to believe that some particular edge filters are the most useful kernels for image recognition. Furthermore, we’ve seen that in later layers convolutional kernels become complex transformations of features from lower levels, but we don’t have a good idea of how to manually construct these.\nInstead, it would be best to learn the values of the kernels. We already know how to do this—SGD! In effect, the model will learn the features that are useful for classification.\nWhen we use convolutions instead of (or in addition to) regular linear layers we create a convolutional neural network (CNN).\n\nCreating the CNN\nLet’s go back to the basic neural network we had in Chapter 4. It was defined like this:\n\nsimple_net = nn.Sequential(\n nn.Linear(28*28,30),\n nn.ReLU(),\n nn.Linear(30,1)\n)\n\nWe can view a model’s definition:\n\nsimple_net\n\nSequential(\n (0): Linear(in_features=784, out_features=30, bias=True)\n (1): ReLU()\n (2): Linear(in_features=30, out_features=1, bias=True)\n)\n\n\nWe now want to create a similar architecture to this linear model, but using convolutional layers instead of linear. nn.Conv2d is the module equivalent of F.conv2d. It’s more convenient than F.conv2d when creating an architecture, because it creates the weight matrix for us automatically when we instantiate it.\nHere’s a possible architecture:\n\nbroken_cnn = sequential(\n nn.Conv2d(1,30, kernel_size=3, padding=1),\n nn.ReLU(),\n nn.Conv2d(30,1, kernel_size=3, padding=1)\n)\n\nOne thing to note here is that we didn’t need to specify 28×28 as the input size. That’s because a linear layer needs a weight in the weight matrix for every pixel, so it needs to know how many pixels there are, but a convolution is applied over each pixel automatically. The weights only depend on the number of input and output channels and the kernel size, as we saw in the previous section.\nThink about what the output shape is going to be, then let’s try it and see:\n\nbroken_cnn(xb).shape\n\ntorch.Size([64, 1, 28, 28])\n\n\nThis is not something we can use to do classification, since we need a single output activation per image, not a 28×28 map of activations. One way to deal with this is to use enough stride-2 convolutions such that the final layer is size 1. That is, after one stride-2 convolution the size will be 14×14, after two it will be 7×7, then 4×4, 2×2, and finally size 1.\nLet’s try that now. First, we’ll define a function with the basic parameters we’ll use in each convolution:\n\ndef conv(ni, nf, ks=3, act=True):\n res = nn.Conv2d(ni, nf, stride=2, kernel_size=ks, padding=ks//2)\n if act: res = nn.Sequential(res, nn.ReLU())\n return res\n\n\nimportant: Refactoring: Refactoring parts of your neural networks like this makes it much less likely you’ll get errors due to inconsistencies in your architectures, and makes it more obvious to the reader which parts of your layers are actually changing.\n\nWhen we use a stride-2 convolution, we often increase the number of features at the same time. This is because we’re decreasing the number of activations in the activation map by a factor of 4; we don’t want to decrease the capacity of a layer by too much at a time.\n\njargon: channels and features: These two terms are largely used interchangeably, and refer to the size of the second axis of a weight matrix, which is, the number of activations per grid cell after a convolution. 
Features is never used to refer to the input data, but channels can refer to either the input data (generally channels are colors) or activations inside the network.\n\nHere is how we can build a simple CNN:\n\nsimple_cnn = sequential(\n conv(1 ,4), #14x14\n conv(4 ,8), #7x7\n conv(8 ,16), #4x4\n conv(16,32), #2x2\n conv(32,2, act=False), #1x1\n Flatten(),\n)\n\n\nj: I like to add comments like the ones here after each convolution to show how large the activation map will be after each layer. These comments assume that the input size is 28*28\n\nNow the network outputs two activations, which map to the two possible levels in our labels:\n\nsimple_cnn(xb).shape\n\ntorch.Size([64, 2])\n\n\nWe can now create our Learner:\n\nlearn = Learner(dls, simple_cnn, loss_func=F.cross_entropy, metrics=accuracy)\n\nTo see exactly what’s going on in the model, we can use summary:\n\nlearn.summary()\n\nSequential (Input shape: ['64 x 1 x 28 x 28'])\n================================================================\nLayer (type) Output Shape Param # Trainable \n================================================================\nConv2d 64 x 4 x 14 x 14 40 True \n________________________________________________________________\nReLU 64 x 4 x 14 x 14 0 False \n________________________________________________________________\nConv2d 64 x 8 x 7 x 7 296 True \n________________________________________________________________\nReLU 64 x 8 x 7 x 7 0 False \n________________________________________________________________\nConv2d 64 x 16 x 4 x 4 1,168 True \n________________________________________________________________\nReLU 64 x 16 x 4 x 4 0 False \n________________________________________________________________\nConv2d 64 x 32 x 2 x 2 4,640 True \n________________________________________________________________\nReLU 64 x 32 x 2 x 2 0 False \n________________________________________________________________\nConv2d 64 x 2 x 1 x 1 578 True \n________________________________________________________________\nFlatten 64 x 2 0 False \n________________________________________________________________\n\nTotal params: 6,722\nTotal trainable params: 6,722\nTotal non-trainable params: 0\n\nOptimizer used: <function Adam>\nLoss function: <function cross_entropy>\n\nCallbacks:\n - TrainEvalCallback\n - Recorder\n - ProgressCallback\n\n\nNote that the output of the final Conv2d layer is 64x2x1x1. We need to remove those extra 1x1 axes; that’s what Flatten does. It’s basically the same as PyTorch’s squeeze method, but as a module.\nLet’s see if this trains! Since this is a deeper network than we’ve built from scratch before, we’ll use a lower learning rate and more epochs:\n\nlearn.fit_one_cycle(2, 0.01)\n\n\n\n\nepoch\ntrain_loss\nvalid_loss\naccuracy\ntime\n\n\n\n\n0\n0.072684\n0.045110\n0.990186\n00:05\n\n\n1\n0.022580\n0.030775\n0.990186\n00:05\n\n\n\n\n\nSuccess! It’s getting closer to the resnet18 result we had, although it’s not quite there yet, and it’s taking more epochs, and we’re needing to use a lower learning rate. We still have a few more tricks to learn, but we’re getting closer and closer to being able to create a modern CNN from scratch.\n\n\nUnderstanding Convolution Arithmetic\nWe can see from the summary that we have an input of size 64x1x28x28. The axes are batch,channel,height,width. This is often represented as NCHW (where N refers to batch size). Tensorflow, on the other hand, uses NHWC axis order. 
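Converting between the two layouts is just a permutation of axes; here is a quick sketch with a random batch (not the MNIST batch above):

import torch

xb = torch.randn(64, 1, 28, 28)      # NCHW: batch, channels, height, width
xb_nhwc = xb.permute(0, 2, 3, 1)     # NHWC: batch, height, width, channels
print(xb.shape, xb_nhwc.shape)
# torch.Size([64, 1, 28, 28]) torch.Size([64, 28, 28, 1])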
The first layer is:\n\nm = learn.model[0]\nm\n\nSequential(\n (0): Conv2d(1, 4, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))\n (1): ReLU()\n)\n\n\nSo we have 1 input channel, 4 output channels, and a 3×3 kernel. Let’s check the weights of the first convolution:\n\nm[0].weight.shape\n\ntorch.Size([4, 1, 3, 3])\n\n\nThe summary shows we have 40 parameters, and 4*1*3*3 is 36. What are the other four parameters? Let’s see what the bias contains:\n\nm[0].bias.shape\n\ntorch.Size([4])\n\n\nWe can now use this information to clarify our statement in the previous section: “When we use a stride-2 convolution, we often increase the number of features because we’re decreasing the number of activations in the activation map by a factor of 4; we don’t want to decrease the capacity of a layer by too much at a time.”\nThere is one bias for each channel. (Sometimes channels are called features or filters when they are not input channels.) The output shape is 64x4x14x14, and this will therefore become the input shape to the next layer. The next layer, according to summary, has 296 parameters. Let’s ignore the batch axis to keep things simple. So for each of 14*14=196 locations we are multiplying 296-8=288 weights (ignoring the bias for simplicity), so that’s 196*288=56_448 multiplications at this layer. The next layer will have 7*7*(1168-16)=56_448 multiplications.\nWhat happened here is that our stride-2 convolution halved the grid size from 14x14 to 7x7, and we doubled the number of filters from 8 to 16, resulting in no overall change in the amount of computation. If we left the number of channels the same in each stride-2 layer, the amount of computation being done in the net would get less and less as it gets deeper. But we know that the deeper layers have to compute semantically rich features (such as eyes or fur), so we wouldn’t expect that doing less computation would make sense.\nAnother way to think of this is based on receptive fields.\n\n\nReceptive Fields\nThe receptive field is the area of an image that is involved in the calculation of a layer. On the book’s website, you’ll find an Excel spreadsheet called conv-example.xlsx that shows the calculation of two stride-2 convolutional layers using an MNIST digit. Each layer has a single kernel. Figure 13.10 shows what we see if we click on one of the cells in the conv2 section, which shows the output of the second convolutional layer, and click trace precedents.\n\n\n\n\n\n\nFigure 13.10: Immediate precedents of Conv2 layer\n\n\n\nHere, the cell with the green border is the cell we clicked on, and the blue highlighted cells are its precedents—that is, the cells used to calculate its value. These cells are the corresponding 3×3 area of cells from the input layer (on the left), and the cells from the filter (on the right). Let’s now click trace precedents again, to see what cells are used to calculate these inputs. Figure 13.11 shows what happens.\n\n\n\n\n\n\nFigure 13.11: Secondary precedents of Conv2 layer\n\n\n\nIn this example, we have just two convolutional layers, each of stride 2, so this is now tracing right back to the input image. We can see that a 7×7 area of cells in the input layer is used to calculate the single green cell in the Conv2 layer. This 7×7 area is the receptive field in the input of the green activation in Conv2. 
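To make that 7×7 number less mysterious, here is a small helper (our own sketch, not from the book or fastai) that computes the receptive field for any stack of convolutions, ignoring clipping at the image borders:

def receptive_field(layers):
    "layers: list of (kernel_size, stride) pairs, ordered from first layer to last"
    rf, jump = 1, 1
    for ks, stride in layers:
        rf += (ks - 1) * jump   # each layer widens the field by ks-1 steps in the input
        jump *= stride          # stride compounds how far apart those steps are
    return rf

# Two 3x3 stride-2 convolutions, as in the spreadsheet example:
print(receptive_field([(3, 2), (3, 2)]))   # 7, i.e. the 7x7 area traced above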
We can also see that a second filter kernel is needed now, since we have two layers.\nAs you see from this example, the deeper we are in the network (specifically, the more stride-2 convs we have before a layer), the larger the receptive field for an activation in that layer. A large receptive field means that a large amount of the input image is used to calculate each activation in that layer. We now know that in the deeper layers of the network we have semantically rich features, corresponding to larger receptive fields. Therefore, we’d expect that we’d need more weights for each of our features to handle this increasing complexity. This is another way of saying the same thing we mentioned in the previous section: when we introduce a stride-2 conv in our network, we should also increase the number of channels.\nWhen writing this particular chapter, we had a lot of questions we needed answers for, to be able to explain CNNs to you as best we could. Believe it or not, we found most of the answers on Twitter. We’re going to take a quick break to talk to you about that now, before we move on to color images.\n\n\nA Note About Twitter\nWe are not, to say the least, big users of social networks in general. But our goal in writing this book is to help you become the best deep learning practitioner you can, and we would be remiss not to mention how important Twitter has been in our own deep learning journeys.\nYou see, there’s another part of Twitter, far away from Donald Trump and the Kardashians, which is the part of Twitter where deep learning researchers and practitioners talk shop every day. As we were writing this section, Jeremy wanted to double-check that what we were saying about stride-2 convolutions was accurate, so he asked on Twitter:\n\nA few minutes later, this answer popped up:\n\nChristian Szegedy is the first author of Inception, the 2014 ImageNet winner and source of many key insights used in modern neural networks. Two hours later, this appeared:\n\nDo you recognize that name? You saw it in Chapter 2, when we were talking about the Turing Award winners who established the foundations of deep learning today!\nJeremy also asked on Twitter for help checking that our description of label smoothing in Chapter 7 was accurate, and again got a response directly from Christian Szegedy (label smoothing was originally introduced in the Inception paper):\n\nMany of the top people in deep learning today are Twitter regulars, and are very open about interacting with the wider community. One good way to get started is to look at a list of Jeremy’s recent Twitter likes, or Sylvain’s. That way, you can see a list of Twitter users that we think have interesting and useful things to say.\nTwitter is the main way we both stay up to date with interesting papers, software releases, and other deep learning news. For making connections with the deep learning community, we recommend getting involved both in the fast.ai forums and on Twitter.\nThat said, let’s get back to the meat of this chapter. Up until now, we have only shown you examples of pictures in black and white, with one value per pixel. In practice, most colored images have three values per pixel to define their color. We’ll look at working with color images next.",
+ "crumbs": [
+ "From the Foundations",
+ "13Convolutional Neural Networks"
+ ]
+ },
+ {
+ "objectID": "convolutions.html#color-images",
+ "href": "convolutions.html#color-images",
+ "title": "13 Convolutional Neural Networks",
+ "section": "13.3 Color Images",
+ "text": "13.3 Color Images\nA colour picture is a rank-3 tensor:\n\nim = image2tensor(Image.open(image_bear()))\nim.shape\n\ntorch.Size([3, 1000, 846])\n\n\n\nshow_image(im);\n\n\n\n\n\n\n\n\nThe first axis contains the channels, red, green, and blue:\n\n_,axs = subplots(1,3)\nfor bear,ax,color in zip(im,axs,('Reds','Greens','Blues')):\n show_image(255-bear, ax=ax, cmap=color)\n\n\n\n\n\n\n\n\nWe saw what the convolution operation was for one filter on one channel of the image (our examples were done on a square). A convolutional layer will take an image with a certain number of channels (three for the first layer for regular RGB color images) and output an image with a different number of channels. Like our hidden size that represented the number of neurons in a linear layer, we can decide to have as many filters as we want, and each of them will be able to specialize, some to detect horizontal edges, others to detect vertical edges and so forth, to give something like what we studied in Chapter 2.\nIn one sliding window, we have a certain number of channels and we need as many filters (we don’t use the same kernel for all the channels). So our kernel doesn’t have a size of 3 by 3, but ch_in (for channels in) by 3 by 3. On each channel, we multiply the elements of our window by the elements of the corresponding filter, then sum the results (as we saw before) and sum over all the filters. In the example given in Figure 13.12, the result of our conv layer on that window is red + green + blue.\n\n\n\n\n\n\nFigure 13.12: Convolution over an RGB image\n\n\n\nSo, in order to apply a convolution to a color picture we require a kernel tensor with a size that matches the first axis. At each location, the corresponding parts of the kernel and the image patch are multiplied together.\nThese are then all added together, to produce a single number, for each grid location, for each output feature, as shown in Figure 13.13.\n\n\n\n\n\n\nFigure 13.13: Adding the RGB filters\n\n\n\nThen we have ch_out filters like this, so in the end, the result of our convolutional layer will be a batch of images with ch_out channels and a height and width given by the formula outlined earlier. This gives us ch_out tensors of size ch_in x ks x ks that we represent in one big tensor of four dimensions. In PyTorch, the order of the dimensions for those weights is ch_out x ch_in x ks x ks.\nAdditionally, we may want to have a bias for each filter. In the preceding example, the final result for our convolutional layer would be \\(y_{R} + y_{G} + y_{B} + b\\) in that case. Like in a linear layer, there are as many biases as we have kernels, so the bias is a vector of size ch_out.\nThere are no special mechanisms required when setting up a CNN for training with color images. Just make sure your first layer has three inputs.\nThere are lots of ways of processing color images. For instance, you can change them to black and white, change from RGB to HSV (hue, saturation, and value) color space, and so forth. In general, it turns out experimentally that changing the encoding of colors won’t make any difference to your model results, as long as you don’t lose information in the transformation. 
So, transforming to black and white is a bad idea, since it removes the color information entirely (and this can be critical; for instance, a pet breed may have a distinctive color); but converting to HSV generally won’t make any difference.\nNow you know what those pictures in Chapter 1 of “what a neural net learns” from the Zeiler and Fergus paper mean! This is their picture of some of the layer 1 weights which we showed:\n\nThis is taking the three slices of the convolutional kernel, for each output feature, and displaying them as images. We can see that even though the creators of the neural net never explicitly created kernels to find edges, for instance, the neural net automatically discovered these features using SGD.\nNow let’s see how we can train these CNNs, and show you all the techniques fastai uses under the hood for efficient training.",
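Before we do, here are the shapes described above checked in code (a quick sketch; the layer and the "image" are random stand-ins):

import torch
from torch import nn

conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1)
print(conv.weight.shape)   # torch.Size([16, 3, 3, 3]) -- ch_out x ch_in x ks x ks
print(conv.bias.shape)     # torch.Size([16])          -- one bias per filter

img = torch.randn(1, 3, 256, 256)   # a batch containing one random "color image"
print(conv(img).shape)              # torch.Size([1, 16, 256, 256])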
+ "crumbs": [
+ "From the Foundations",
+ "13Convolutional Neural Networks"
+ ]
+ },
+ {
+ "objectID": "convolutions.html#improving-training-stability",
+ "href": "convolutions.html#improving-training-stability",
+ "title": "13 Convolutional Neural Networks",
+ "section": "13.4 Improving Training Stability",
+ "text": "13.4 Improving Training Stability\nSince we are so good at recognizing 3s from 7s, let’s move on to something harder—recognizing all 10 digits. That means we’ll need to use MNIST instead of MNIST_SAMPLE:\n\npath = untar_data(URLs.MNIST)\n\n\npath.ls()\n\n(#2) [Path('testing'),Path('training')]\n\n\nThe data is in two folders named training and testing, so we have to tell GrandparentSplitter about that (it defaults to train and valid). We did do that in the get_dls function, which we create to make it easy to change our batch size later:\n\ndef get_dls(bs=64):\n return DataBlock(\n blocks=(ImageBlock(cls=PILImageBW), CategoryBlock), \n get_items=get_image_files, \n splitter=GrandparentSplitter('training','testing'),\n get_y=parent_label,\n batch_tfms=Normalize()\n ).dataloaders(path, bs=bs)\n\ndls = get_dls()\n\nRemember, it’s always a good idea to look at your data before you use it:\n\ndls.show_batch(max_n=9, figsize=(4,4))\n\n\n\n\n\n\n\n\nNow that we have our data ready, we can train a simple model on it.\n\nA Simple Baseline\nEarlier in this chapter, we built a model based on a conv function like this:\n\ndef conv(ni, nf, ks=3, act=True):\n res = nn.Conv2d(ni, nf, stride=2, kernel_size=ks, padding=ks//2)\n if act: res = nn.Sequential(res, nn.ReLU())\n return res\n\nLet’s start with a basic CNN as a baseline. We’ll use the same one as earlier, but with one tweak: we’ll use more activations. Since we have more numbers to differentiate, it’s likely we will need to learn more filters.\nAs we discussed, we generally want to double the number of filters each time we have a stride-2 layer. One way to increase the number of filters throughout our network is to double the number of activations in the first layer–then every layer after that will end up twice as big as in the previous version as well.\nBut there is a subtle problem with this. Consider the kernel that is being applied to each pixel. By default, we use a 3×3-pixel kernel. That means that there are a total of 3×3 = 9 pixels that the kernel is being applied to at each location. Previously, our first layer had four output filters. That meant that there were four values being computed from nine pixels at each location. Think about what happens if we double this output to eight filters. Then when we apply our kernel we will be using nine pixels to calculate eight numbers. That means it isn’t really learning much at all: the output size is almost the same as the input size. Neural networks will only create useful features if they’re forced to do so—that is, if the number of outputs from an operation is significantly smaller than the number of inputs.\nTo fix this, we can use a larger kernel in the first layer. If we use a kernel of 5×5 pixels then there are 25 pixels being used at each kernel application. Creating eight filters from this will mean the neural net will have to find some useful features:\n\ndef simple_cnn():\n return sequential(\n conv(1 ,8, ks=5), #14x14\n conv(8 ,16), #7x7\n conv(16,32), #4x4\n conv(32,64), #2x2\n conv(64,10, act=False), #1x1\n Flatten(),\n )\n\nAs you’ll see in a moment, we can look inside our models while they’re training in order to try to find ways to make them train better. 
To do this we use the ActivationStats callback, which records the mean, standard deviation, and histogram of activations of every trainable layer (as we’ve seen, callbacks are used to add behavior to the training loop; we’ll explore how they work in Chapter 16):\n\nfrom fastai.callback.hook import *\n\nWe want to train quickly, so that means training at a high learning rate. Let’s see how we go at 0.06:\n\ndef fit(epochs=1):\n learn = Learner(dls, simple_cnn(), loss_func=F.cross_entropy,\n metrics=accuracy, cbs=ActivationStats(with_hist=True))\n learn.fit(epochs, 0.06)\n return learn\n\n\nlearn = fit()\n\n\n\n\nepoch\ntrain_loss\nvalid_loss\naccuracy\ntime\n\n\n\n\n0\n2.307071\n2.305865\n0.113500\n00:16\n\n\n\n\n\nThis didn’t train at all well! Let’s find out why.\nOne handy feature of the callbacks passed to Learner is that they are made available automatically, with the same name as the callback class, except in snake_case. So, our ActivationStats callback can be accessed through activation_stats. I’m sure you remember learn.recorder… can you guess how that is implemented? That’s right, it’s a callback called Recorder!\nActivationStats includes some handy utilities for plotting the activations during training. plot_layer_stats(idx) plots the mean and standard deviation of the activations of layer number idx, along with the percentage of activations near zero. Here’s the first layer’s plot:\n\nlearn.activation_stats.plot_layer_stats(0)\n\n\n\n\n\n\n\n\nGenerally our model should have a consistent, or at least smooth, mean and standard deviation of layer activations during training. Activations near zero are particularly problematic, because it means we have computation in the model that’s doing nothing at all (since multiplying by zero gives zero). When you have some zeros in one layer, they will therefore generally carry over to the next layer… which will then create more zeros. Here’s the penultimate layer of our network:\n\nlearn.activation_stats.plot_layer_stats(-2)\n\n\n\n\n\n\n\n\nAs expected, the problems get worse towards the end of the network, as the instability and zero activations compound over layers. Let’s look at what we can do to make training more stable.\n\n\nIncrease Batch Size\nOne way to make training more stable is to increase the batch size. Larger batches have gradients that are more accurate, since they’re calculated from more data. On the downside, though, a larger batch size means fewer batches per epoch, which means less opportunities for your model to update weights. Let’s see if a batch size of 512 helps:\n\ndls = get_dls(512)\n\n\nlearn = fit()\n\n\n\n\nepoch\ntrain_loss\nvalid_loss\naccuracy\ntime\n\n\n\n\n0\n2.309385\n2.302744\n0.113500\n00:08\n\n\n\n\n\nLet’s see what the penultimate layer looks like:\n\nlearn.activation_stats.plot_layer_stats(-2)\n\n\n\n\n\n\n\n\nAgain, we’ve got most of our activations near zero. Let’s see what else we can do to improve training stability.\n\n\n1cycle Training\nOur initial weights are not well suited to the task we’re trying to solve. Therefore, it is dangerous to begin training with a high learning rate: we may very well make the training diverge instantly, as we’ve seen. We probably don’t want to end training with a high learning rate either, so that we don’t skip over a minimum. But we want to train at a high learning rate for the rest of the training period, because we’ll be able to train more quickly that way. 
Therefore, we should change the learning rate during training, from low, to high, and then back to low again.\nLeslie Smith (yes, the same guy that invented the learning rate finder!) developed this idea in his article “Super-Convergence: Very Fast Training of Neural Networks Using Large Learning Rates”. He designed a schedule for learning rate separated into two phases: one where the learning rate grows from the minimum value to the maximum value (warmup), and one where it decreases back to the minimum value (annealing). Smith called this combination of approaches 1cycle training.\n1cycle training allows us to use a much higher maximum learning rate than other types of training, which gives two benefits:\n\nBy training with higher learning rates, we train faster—a phenomenon Smith named super-convergence.\nBy training with higher learning rates, we overfit less because we skip over the sharp local minima to end up in a smoother (and therefore more generalizable) part of the loss.\n\nThe second point is an interesting and subtle one; it is based on the observation that a model that generalizes well is one whose loss would not change very much if you changed the input by a small amount. If a model trains at a large learning rate for quite a while, and can find a good loss when doing so, it must have found an area that also generalizes well, because it is jumping around a lot from batch to batch (that is basically the definition of a high learning rate). The problem is that, as we have discussed, just jumping to a high learning rate is more likely to result in diverging losses, rather than seeing your losses improve. So we don’t jump straight to a high learning rate. Instead, we start at a low learning rate, where our losses do not diverge, and we allow the optimizer to gradually find smoother and smoother areas of our parameters by gradually going to higher and higher learning rates.\nThen, once we have found a nice smooth area for our parameters, we want to find the very best part of that area, which means we have to bring our learning rates down again. This is why 1cycle training has a gradual learning rate warmup, and a gradual learning rate cooldown. Many researchers have found that in practice this approach leads to more accurate models and trains more quickly. That is why it is the approach that is used by default for fine_tune in fastai.\nIn Chapter 16 we’ll learn all about momentum in SGD. Briefly, momentum is a technique where the optimizer takes a step not only in the direction of the gradients, but also that continues in the direction of previous steps. Leslie Smith introduced the idea of cyclical momentums in “A Disciplined Approach to Neural Network Hyper-Parameters: Part 1”. It suggests that the momentum varies in the opposite direction of the learning rate: when we are at high learning rates, we use less momentum, and we use more again in the annealing phase.\nWe can use 1cycle training in fastai by calling fit_one_cycle:\n\ndef fit(epochs=1, lr=0.06):\n learn = Learner(dls, simple_cnn(), loss_func=F.cross_entropy,\n metrics=accuracy, cbs=ActivationStats(with_hist=True))\n learn.fit_one_cycle(epochs, lr)\n return learn\n\n\nlearn = fit()\n\n\n\n\nepoch\ntrain_loss\nvalid_loss\naccuracy\ntime\n\n\n\n\n0\n0.210838\n0.084827\n0.974300\n00:08\n\n\n\n\n\nWe’re finally making some progress! It’s giving us a reasonable accuracy now.\nWe can view the learning rate and momentum throughout training by calling plot_sched on learn.recorder. 
learn.recorder (as the name suggests) records everything that happens during training, including losses, metrics, and hyperparameters such as learning rate and momentum:\n\nlearn.recorder.plot_sched()\n\nSmith’s original 1cycle paper used a linear warmup and linear annealing. As you can see, we adapted the approach in fastai by combining it with another popular approach: cosine annealing. fit_one_cycle provides the following parameters you can adjust:\n\nlr_max:: The highest learning rate that will be used (this can also be a list of learning rates for each layer group, or a Python slice object containing the first and last layer group learning rates)\ndiv:: How much to divide lr_max by to get the starting learning rate\ndiv_final:: How much to divide lr_max by to get the ending learning rate\npct_start:: What percentage of the batches to use for the warmup\nmoms:: A tuple (mom1,mom2,mom3) where mom1 is the initial momentum, mom2 is the minimum momentum, and mom3 is the final momentum\n\nLet’s take a look at our layer stats again:\n\nlearn.activation_stats.plot_layer_stats(-2)\n\nThe percentage of near-zero weights is getting much better, although it’s still quite high.\nWe can see even more about what’s going on in our training using color_dim, passing it a layer index:\n\nlearn.activation_stats.color_dim(-2)\n\ncolor_dim was developed by fast.ai in conjunction with a student, Stefano Giomo. Stefano, who refers to the idea as the colorful dimension, provides an in-depth explanation of the history and details behind the method. The basic idea is to create a histogram of the activations of a layer, which we would hope would follow a smooth pattern such as the normal distribution (see Figure 13.14).\n\nFigure 13.14: Histogram in ‘colorful dimension’\n\nTo create color_dim, we take the histogram shown on the left here, and convert it into just the colored representation shown at the bottom. Then we flip it on its side, as shown on the right. We found that the distribution is clearer if we take the log of the histogram values. Then, Stefano describes:\n\nThe final plot for each layer is made by stacking the histogram of the activations from each batch along the horizontal axis. So each vertical slice in the visualisation represents the histogram of activations for a single batch. The color intensity corresponds to the height of the histogram, in other words the number of activations in each histogram bin.\n\nFigure 13.15 shows how this all fits together.\n\nFigure 13.15: Summary of the colorful dimension (courtesy of Stefano Giomo)\n\nThis illustrates why log(f) is more colorful than f when f follows a normal distribution: taking the log turns the Gaussian curve into a quadratic, which isn’t as narrow.\nSo with that in mind, let’s take another look at the result for the penultimate layer:\n\nlearn.activation_stats.color_dim(-2)\n\nThis shows a classic picture of “bad training.” We start with nearly all activations at zero—that’s what we see at the far left, with all the dark blue. The bright yellow at the bottom represents the near-zero activations. Then, over the first few batches we see the number of nonzero activations exponentially increasing. But it goes too far, and collapses! We see the dark blue return, and the bottom becomes bright yellow again. It almost looks like training restarts from scratch. Then we see the activations increase again, and collapse again. 
After repeating this a few times, eventually we see a spread of activations throughout the range.\nIt’s much better if training can be smooth from the start. The cycles of exponential increase and then collapse tend to result in a lot of near-zero activations, resulting in slow training and poor final results. One way to solve this problem is to use batch normalization.\n\n\nBatch Normalization\nTo fix the slow training and poor final results we ended up with in the previous section, we need to fix the initial large percentage of near-zero activations, and then try to maintain a good distribution of activations throughout training.\nSergey Ioffe and Christian Szegedy presented a solution to this problem in the 2015 paper “Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift”. In the abstract, they describe just the problem that we’ve seen:\n\nTraining Deep Neural Networks is complicated by the fact that the distribution of each layer’s inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization… We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs.\n\nTheir solution, they say is:\n\nMaking normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization.\n\nThe paper caused great excitement as soon as it was released, because it included the chart in Figure 13.16, which clearly demonstrated that batch normalization could train a model that was even more accurate than the current state of the art (the Inception architecture) and around 5x faster.\n\n\n\n\n\n\nFigure 13.16: Impact of batch normalization (courtesy of Sergey Ioffe and Christian Szegedy)\n\n\n\nBatch normalization (often just called batchnorm) works by taking an average of the mean and standard deviations of the activations of a layer and using those to normalize the activations. However, this can cause problems because the network might want some activations to be really high in order to make accurate predictions. So they also added two learnable parameters (meaning they will be updated in the SGD step), usually called gamma and beta. After normalizing the activations to get some new activation vector y, a batchnorm layer returns gamma*y + beta.\nThat’s why our activations can have any mean or variance, independent from the mean and standard deviation of the results of the previous layer. Those statistics are learned separately, making training easier on our model. The behavior is different during training and validation: during training, we use the mean and standard deviation of the batch to normalize the data, while during validation we instead use a running mean of the statistics calculated during training.\nLet’s add a batchnorm layer to conv:\n\ndef conv(ni, nf, ks=3, act=True):\n layers = [nn.Conv2d(ni, nf, stride=2, kernel_size=ks, padding=ks//2)]\n if act: layers.append(nn.ReLU())\n layers.append(nn.BatchNorm2d(nf))\n return nn.Sequential(*layers)\n\nand fit our model:\n\nlearn = fit()\n\n\n\n\nepoch\ntrain_loss\nvalid_loss\naccuracy\ntime\n\n\n\n\n0\n0.130036\n0.055021\n0.986400\n00:10\n\n\n\n\n\nThat’s a great result! 
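\nBefore we look at the activations again, it may help to make the batchnorm arithmetic above concrete. Here is a minimal, illustrative sketch in plain PyTorch (the tensor sizes and variable names are made up for the example; the real nn.BatchNorm2d also keeps running statistics for use at validation time):\n\nx = torch.randn(64, 8, 14, 14) # a batch of activations: bs x ch x h x w\nmean = x.mean((0,2,3), keepdim=True) # per-channel mean over the mini-batch\nvar = x.var((0,2,3), keepdim=True) # per-channel variance over the mini-batch\ny = (x - mean) / (var + 1e-5).sqrt() # normalized activations\ngamma,beta = torch.ones(1,8,1,1), torch.zeros(1,8,1,1) # learnable scale and shift\nout = gamma*y + beta # what the batchnorm layer returns during training\n\n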
Let’s take a look at color_dim:\n\nlearn.activation_stats.color_dim(-4)\n\n\n\n\n\n\n\n\nThis is just what we hope to see: a smooth development of activations, with no “crashes.” Batchnorm has really delivered on its promise here! In fact, batchnorm has been so successful that we see it (or something very similar) in nearly all modern neural networks.\nAn interesting observation about models containing batch normalization layers is that they tend to generalize better than models that don’t contain them. Although we haven’t as yet seen a rigorous analysis of what’s going on here, most researchers believe that the reason for this is that batch normalization adds some extra randomness to the training process. Each mini-batch will have a somewhat different mean and standard deviation than other mini-batches. Therefore, the activations will be normalized by different values each time. In order for the model to make accurate predictions, it will have to learn to become robust to these variations. In general, adding additional randomization to the training process often helps.\nSince things are going so well, let’s train for a few more epochs and see how it goes. In fact, let’s increase the learning rate, since the abstract of the batchnorm paper claimed we should be able to “train at much higher learning rates”:\n\nlearn = fit(5, lr=0.1)\n\n\n\n\nepoch\ntrain_loss\nvalid_loss\naccuracy\ntime\n\n\n\n\n0\n0.191731\n0.121738\n0.960900\n00:11\n\n\n1\n0.083739\n0.055808\n0.981800\n00:10\n\n\n2\n0.053161\n0.044485\n0.987100\n00:10\n\n\n3\n0.034433\n0.030233\n0.990200\n00:10\n\n\n4\n0.017646\n0.025407\n0.991200\n00:10\n\n\n\n\n\nAt this point, I think it’s fair to say we know how to recognize digits! It’s time to move on to something harder…",
+ "crumbs": [
+ "From the Foundations",
+ "13Convolutional Neural Networks"
+ ]
+ },
+ {
+ "objectID": "convolutions.html#conclusions",
+ "href": "convolutions.html#conclusions",
+ "title": "13 Convolutional Neural Networks",
+ "section": "13.5 Conclusions",
+ "text": "13.5 Conclusions\nWe’ve seen that convolutions are just a type of matrix multiplication, with two constraints on the weight matrix: some elements are always zero, and some elements are tied (forced to always have the same value). In Chapter 1 we saw the eight requirements from the 1986 book Parallel Distributed Processing; one of them was “A pattern of connectivity among units.” That’s exactly what these constraints do: they enforce a certain pattern of connectivity.\nThese constraints allow us to use far fewer parameters in our model, without sacrificing the ability to represent complex visual features. That means we can train deeper models faster, with less overfitting. Although the universal approximation theorem shows that it should be possible to represent anything in a fully connected network in one hidden layer, we’ve seen now that in practice we can train much better models by being thoughtful about network architecture.\nConvolutions are by far the most common pattern of connectivity we see in neural nets (along with regular linear layers, which we refer to as fully connected), but it’s likely that many more will be discovered.\nWe’ve also seen how to interpret the activations of layers in the network to see whether training is going well or not, and how batchnorm helps regularize the training and makes it smoother. In the next chapter, we will use both of those layers to build the most popular architecture in computer vision: a residual network.",
+ "crumbs": [
+ "From the Foundations",
+ "13Convolutional Neural Networks"
+ ]
+ },
+ {
+ "objectID": "convolutions.html#questionnaire",
+ "href": "convolutions.html#questionnaire",
+ "title": "13 Convolutional Neural Networks",
+ "section": "13.6 Questionnaire",
+ "text": "13.6 Questionnaire\n\nWhat is a “feature”?\nWrite out the convolutional kernel matrix for a top edge detector.\nWrite out the mathematical operation applied by a 3×3 kernel to a single pixel in an image.\nWhat is the value of a convolutional kernel apply to a 3×3 matrix of zeros?\nWhat is “padding”?\nWhat is “stride”?\nCreate a nested list comprehension to complete any task that you choose.\nWhat are the shapes of the input and weight parameters to PyTorch’s 2D convolution?\nWhat is a “channel”?\nWhat is the relationship between a convolution and a matrix multiplication?\nWhat is a “convolutional neural network”?\nWhat is the benefit of refactoring parts of your neural network definition?\nWhat is Flatten? Where does it need to be included in the MNIST CNN? Why?\nWhat does “NCHW” mean?\nWhy does the third layer of the MNIST CNN have 7*7*(1168-16) multiplications?\nWhat is a “receptive field”?\nWhat is the size of the receptive field of an activation after two stride 2 convolutions? Why?\nRun conv-example.xlsx yourself and experiment with trace precedents.\nHave a look at Jeremy or Sylvain’s list of recent Twitter “like”s, and see if you find any interesting resources or ideas there.\nHow is a color image represented as a tensor?\nHow does a convolution work with a color input?\nWhat method can we use to see that data in DataLoaders?\nWhy do we double the number of filters after each stride-2 conv?\nWhy do we use a larger kernel in the first conv with MNIST (with simple_cnn)?\nWhat information does ActivationStats save for each layer?\nHow can we access a learner’s callback after training?\nWhat are the three statistics plotted by plot_layer_stats? What does the x-axis represent?\nWhy are activations near zero problematic?\nWhat are the upsides and downsides of training with a larger batch size?\nWhy should we avoid using a high learning rate at the start of training?\nWhat is 1cycle training?\nWhat are the benefits of training with a high learning rate?\nWhy do we want to use a low learning rate at the end of training?\nWhat is “cyclical momentum”?\nWhat callback tracks hyperparameter values during training (along with other information)?\nWhat does one column of pixels in the color_dim plot represent?\nWhat does “bad training” look like in color_dim? Why?\nWhat trainable parameters does a batch normalization layer contain?\nWhat statistics are used to normalize in batch normalization during training? How about during validation?\nWhy do models with batch normalization layers generalize better?\n\n\nFurther Research\n\nWhat features other than edge detectors have been used in computer vision (especially before deep learning became popular)?\nThere are other normalization layers available in PyTorch. Try them out and see what works best. Learn about why other normalization layers have been developed, and how they differ from batch normalization.\nTry moving the activation function after the batch normalization layer in conv. Does it make a difference? See what you can find out about what order is recommended, and why.",
+ "crumbs": [
+ "From the Foundations",
+ "13Convolutional Neural Networks"
+ ]
+ },
+ {
+ "objectID": "resnet.html",
+ "href": "resnet.html",
+ "title": "14 ResNets",
+ "section": "",
+ "text": "14.1 Going Back to Imagenette\nIt’s going to be tough to judge any improvements we make to our models when we are already at an accuracy that is as high as we saw on MNIST in the previous chapter, so we will tackle a tougher image classification problem by going back to Imagenette. We’ll stick with small images to keep things reasonably fast.\nLet’s grab the data—we’ll use the already-resized 160 px version to make things faster still, and will random crop to 128 px:\ndef get_data(url, presize, resize):\n path = untar_data(url)\n return DataBlock(\n blocks=(ImageBlock, CategoryBlock), get_items=get_image_files, \n splitter=GrandparentSplitter(valid_name='val'),\n get_y=parent_label, item_tfms=Resize(presize),\n batch_tfms=[*aug_transforms(min_scale=0.5, size=resize),\n Normalize.from_stats(*imagenet_stats)],\n ).dataloaders(path, bs=128)\ndls = get_data(URLs.IMAGENETTE_160, 160, 128)\ndls.show_batch(max_n=4)\nWhen we looked at MNIST we were dealing with 28×28-pixel images. For Imagenette we are going to be training with 128×128-pixel images. Later, we would like to be able to use larger images as well—at least as big as 224×224 pixels, the ImageNet standard. Do you recall how we managed to get a single vector of activations for each image out of the MNIST convolutional neural network?\nThe approach we used was to ensure that there were enough stride-2 convolutions such that the final layer would have a grid size of 1. Then we just flattened out the unit axes that we ended up with, to get a vector for each image (so, a matrix of activations for a mini-batch). We could do the same thing for Imagenette, but that would cause two problems:\nOne approach to dealing with the first of these issues would be to flatten the final convolutional layer in a way that handles a grid size other than 1×1. That is, we could simply flatten a matrix into a vector as we have done before, by laying out each row after the previous row. In fact, this is the approach that convolutional neural networks up until 2013 nearly always took. The most famous example is the 2013 ImageNet winner VGG, still sometimes used today. But there was another problem with this architecture: not only did it not work with images other than those of the same size used in the training set, but it required a lot of memory, because flattening out the convolutional layer resulted in many activations being fed into the final layers. Therefore, the weight matrices of the final layers were enormous.\nThis problem was solved through the creation of fully convolutional networks. The trick in fully convolutional networks is to take the average of activations across a convolutional grid. In other words, we can simply use this function:\ndef avg_pool(x): return x.mean((2,3))\nAs you see, it is taking the mean over the x- and y-axes. This function will always convert a grid of activations into a single activation per image. PyTorch provides a slightly more versatile module called nn.AdaptiveAvgPool2d, which averages a grid of activations into whatever sized destination you require (although we nearly always use a size of 1).\nA fully convolutional network, therefore, has a number of convolutional layers, some of which will be stride 2, at the end of which is an adaptive average pooling layer, a flatten layer to remove the unit axes, and finally a linear layer. 
Here is our first fully convolutional network:\ndef block(ni, nf): return ConvLayer(ni, nf, stride=2)\ndef get_model():\n return nn.Sequential(\n block(3, 16),\n block(16, 32),\n block(32, 64),\n block(64, 128),\n block(128, 256),\n nn.AdaptiveAvgPool2d(1),\n Flatten(),\n nn.Linear(256, dls.c))\nWe’re going to be replacing the implementation of block in the network with other variants in a moment, which is why we’re not calling it conv any more. We’re also saving some time by taking advantage of fastai’s ConvLayer, which already provides the functionality of conv from the last chapter (plus a lot more!).\nOnce we are done with our convolutional layers, we will get activations of size bs x ch x h x w (batch size, a certain number of channels, height, and width). We want to convert this to a tensor of size bs x ch, so we take the average over the last two dimensions and flatten the trailing 1×1 dimension like we did in our previous model.\nThis is different from regular pooling in the sense that those layers will generally take the average (for average pooling) or the maximum (for max pooling) of a window of a given size. For instance, max pooling layers of size 2, which were very popular in older CNNs, reduce the size of our image by half on each dimension by taking the maximum of each 2×2 window (with a stride of 2).\nAs before, we can define a Learner with our custom model and then train it on the data we grabbed earlier:\ndef get_learner(m):\n return Learner(dls, m, loss_func=nn.CrossEntropyLoss(), metrics=accuracy\n ).to_fp16()\n\nlearn = get_learner(get_model())\nlearn.lr_find()\n3e-3 is often a good learning rate for CNNs, and that appears to be the case here too, so let’s try that:\nlearn.fit_one_cycle(5, 3e-3)\n\nepoch\ntrain_loss\nvalid_loss\naccuracy\ntime\n\n0\n1.901582\n2.155090\n0.325350\n00:07\n\n1\n1.559855\n1.586795\n0.507771\n00:07\n\n2\n1.296350\n1.295499\n0.571720\n00:07\n\n3\n1.144139\n1.139257\n0.639236\n00:07\n\n4\n1.049770\n1.092619\n0.659108\n00:07\nThat’s a pretty good start, considering we have to pick the correct one of 10 categories, and we’re training from scratch for just 5 epochs! We can do way better than this using a deeper model, but just stacking new layers won’t really improve our results (you can try and see for yourself!). To work around this problem, ResNets introduce the idea of skip connections. We’ll explore those and other aspects of ResNets in the next section.",
+ "crumbs": [
+ "From the Foundations",
+ "14ResNets"
+ ]
+ },
+ {
+ "objectID": "resnet.html#going-back-to-imagenette",
+ "href": "resnet.html#going-back-to-imagenette",
+ "title": "14 ResNets",
+ "section": "",
+ "text": "We’d need lots of stride-2 layers to make our grid 1×1 at the end—perhaps more than we would otherwise choose.\nThe model would not work on images of any size other than the size we originally trained on.\n\n\n\n\n\n\n\n\n\nstop: Consider this question: would this approach makes sense for an optical character recognition (OCR) problem such as MNIST? The vast majority of practitioners tackling OCR and similar problems tend to use fully convolutional networks, because that’s what nearly everybody learns nowadays. But it really doesn’t make any sense! You can’t decide, for instance, whether a number is a 3 or an 8 by slicing it into small pieces, jumbling them up, and deciding whether on average each piece looks like a 3 or an 8. But that’s what adaptive average pooling effectively does! Fully convolutional networks are only really a good choice for objects that don’t have a single correct orientation or size (e.g., like most natural photos).",
+ "crumbs": [
+ "From the Foundations",
+ "14ResNets"
+ ]
+ },
+ {
+ "objectID": "resnet.html#building-a-modern-cnn-resnet",
+ "href": "resnet.html#building-a-modern-cnn-resnet",
+ "title": "14 ResNets",
+ "section": "14.2 Building a Modern CNN: ResNet",
+ "text": "14.2 Building a Modern CNN: ResNet\nWe now have all the pieces we need to build the models we have been using in our computer vision tasks since the beginning of this book: ResNets. We’ll introduce the main idea behind them and show how it improves accuracy on Imagenette compared to our previous model, before building a version with all the recent tweaks.\n\nSkip Connections\nIn 2015, the authors of the ResNet paper noticed something that they found curious. Even after using batchnorm, they saw that a network using more layers was doing less well than a network using fewer layers—and there were no other differences between the models. Most interestingly, the difference was observed not only in the validation set, but also in the training set; so, it wasn’t just a generalization issue, but a training issue. As the paper explains:\n\nUnexpectedly, such degradation is not caused by overfitting, and adding more layers to a suitably deep model leads to higher training error, as [previously reported] and thoroughly verified by our experiments.\n\nThis phenomenon was illustrated by the graph in Figure 14.1, with training error on the left and test error on the right.\n\n\n\n\n\n\nFigure 14.1: Training of networks of different depth (courtesy of Kaiming He et al.)\n\n\n\nAs the authors mention here, they are not the first people to have noticed this curious fact. But they were the first to make a very important leap:\n\nLet us consider a shallower architecture and its deeper counterpart that adds more layers onto it. There exists a solution by construction to the deeper model: the added layers are identity mapping, and the other layers are copied from the learned shallower model.\n\nAs this is an academic paper this process is described in a rather inaccessible way, but the concept is actually very simple: start with a 20-layer neural network that is trained well, and add another 36 layers that do nothing at all (for instance, they could be linear layers with a single weight equal to 1, and bias equal to 0). The result will be a 56-layer network that does exactly the same thing as the 20-layer network, proving that there are always deep networks that should be at least as good as any shallow network. But for some reason, SGD does not seem able to find them.\n\njargon: Identity mapping: Returning the input without changing it at all. This process is performed by an identity function.\n\nActually, there is another way to create those extra 36 layers, which is much more interesting. What if we replaced every occurrence of conv(x) with x + conv(x), where conv is the function from the previous chapter that adds a second convolution, then a batchnorm layer, then a ReLU. Furthermore, recall that batchnorm does gamma*y + beta. What if we initialized gamma to zero for every one of those final batchnorm layers? Then our conv(x) for those extra 36 layers will always be equal to zero, which means x+conv(x) will always be equal to x.\nWhat has that gained us? The key thing is that those 36 extra layers, as they stand, are an identity mapping, but they have parameters, which means they are trainable. So, we can start with our best 20-layer model, add these 36 extra layers which initially do nothing at all, and then fine-tune the whole 56-layer model. Those extra 36 layers can then learn the parameters that make them most useful.\nThe ResNet paper actually proposed a variant of this, which is to instead “skip over” every second convolution, so effectively we get x+conv2(conv1(x)). 
This is shown by the diagram in Figure 14.2 (from the paper).\n\n\n\n\n\n\nFigure 14.2: A simple ResNet block (courtesy of Kaiming He et al.)\n\n\n\nThat arrow on the right is just the x part of x+conv2(conv1(x)), and is known as the identity branch or skip connection. The path on the left is the conv2(conv1(x)) part. You can think of the identity path as providing a direct route from the input to the output.\nIn a ResNet, we don’t actually proceed by first training a smaller number of layers, and then adding new layers on the end and fine-tuning. Instead, we use ResNet blocks like the one in Figure 14.2 throughout the CNN, initialized from scratch in the usual way, and trained with SGD in the usual way. We rely on the skip connections to make the network easier to train with SGD.\nThere’s another (largely equivalent) way to think of these ResNet blocks. This is how the paper describes it:\n\nInstead of hoping each few stacked layers directly fit a desired underlying mapping, we explicitly let these layers fit a residual mapping. Formally, denoting the desired underlying mapping as H(x), we let the stacked nonlinear layers fit another mapping of F(x) := H(x)−x. The original mapping is recast into F(x)+x. We hypothesize that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers.\n\nAgain, this is rather inaccessible prose—so let’s try to restate it in plain English! If the outcome of a given layer is x, when using a ResNet block that returns y = x+block(x) we’re not asking the block to predict y, we are asking it to predict the difference between y and x. So the job of those blocks isn’t to predict certain features, but to minimize the error between x and the desired y. A ResNet is, therefore, good at learning about slight differences between doing nothing and passing though a block of two convolutional layers (with trainable weights). This is how these models got their name: they’re predicting residuals (reminder: “residual” is prediction minus target).\nOne key concept that both of these two ways of thinking about ResNets share is the idea of ease of learning. This is an important theme. Recall the universal approximation theorem, which states that a sufficiently large network can learn anything. This is still true, but there turns out to be a very important difference between what a network can learn in principle, and what it is easy for it to learn with realistic data and training regimes. Many of the advances in neural networks over the last decade have been like the ResNet block: the result of realizing how to make something that was always possible actually feasible.\n\nnote: True Identity Path: The original paper didn’t actually do the trick of using zero for the initial value of gamma in the last batchnorm layer of each block; that came a couple of years later. So, the original version of ResNet didn’t quite begin training with a truly identity path through the ResNet blocks, but nonetheless having the ability to “navigate through” the skip connections did indeed make it train better. 
Adding the batchnorm gamma init trick made the models train at even higher learning rates.\n\nHere’s the definition of a simple ResNet block (where norm_type=NormType.BatchZero causes fastai to init the gamma weights of the last batchnorm layer to zero):\n\nclass ResBlock(Module):\n def __init__(self, ni, nf):\n self.convs = nn.Sequential(\n ConvLayer(ni,nf),\n ConvLayer(nf,nf, norm_type=NormType.BatchZero))\n \n def forward(self, x): return x + self.convs(x)\n\nThere are two problems with this, however: it can’t handle a stride other than 1, and it requires that ni==nf. Stop for a moment to think carefully about why this is.\nThe issue is that with a stride of, say, 2 on one of the convolutions, the grid size of the output activations will be half the size on each axis of the input. So then we can’t add that back to x in forward because x and the output activations have different dimensions. The same basic issue occurs if ni!=nf: the shapes of the input and output connections won’t allow us to add them together.\nTo fix this, we need a way to change the shape of x to match the result of self.convs. Halving the grid size can be done using an average pooling layer with a stride of 2: that is, a layer that takes 2×2 patches from the input and replaces them with their average.\nChanging the number of channels can be done by using a convolution. We want this skip connection to be as close to an identity map as possible, however, which means making this convolution as simple as possible. The simplest possible convolution is one where the kernel size is 1. That means that the kernel is size ni*nf*1*1, so it’s only doing a dot product over the channels of each input pixel—it’s not combining across pixels at all. This kind of 1x1 convolution is very widely used in modern CNNs, so take a moment to think about how it works.\n\njargon: 1x1 convolution: A convolution with a kernel size of 1.\n\nHere’s a ResBlock using these tricks to handle changing shape in the skip connection:\n\ndef _conv_block(ni,nf,stride):\n return nn.Sequential(\n ConvLayer(ni, nf, stride=stride),\n ConvLayer(nf, nf, act_cls=None, norm_type=NormType.BatchZero))\n\n\nclass ResBlock(Module):\n def __init__(self, ni, nf, stride=1):\n self.convs = _conv_block(ni,nf,stride)\n self.idconv = noop if ni==nf else ConvLayer(ni, nf, 1, act_cls=None)\n self.pool = noop if stride==1 else nn.AvgPool2d(2, ceil_mode=True)\n\n def forward(self, x):\n return F.relu(self.convs(x) + self.idconv(self.pool(x)))\n\nNote that we’re using the noop function here, which simply returns its input unchanged (noop is a computer science term that stands for “no operation”). In this case, idconv does nothing at all if ni==nf, and pool does nothing if stride==1, which is what we wanted in our skip connection.\nAlso, you’ll see that we’ve removed the ReLU (act_cls=None) from the final convolution in convs and from idconv, and moved it to after we add the skip connection. 
The thinking behind this is that the whole ResNet block is like a layer, and you want your activation to be after your layer.\nLet’s replace our block with ResBlock, and try it out:\n\ndef block(ni,nf): return ResBlock(ni, nf, stride=2)\nlearn = get_learner(get_model())\n\n\nlearn.fit_one_cycle(5, 3e-3)\n\n\n\n\nepoch\ntrain_loss\nvalid_loss\naccuracy\ntime\n\n\n\n\n0\n1.973174\n1.845491\n0.373248\n00:08\n\n\n1\n1.678627\n1.778713\n0.439236\n00:08\n\n\n2\n1.386163\n1.596503\n0.507261\n00:08\n\n\n3\n1.177839\n1.102993\n0.644841\n00:09\n\n\n4\n1.052435\n1.038013\n0.667771\n00:09\n\n\n\n\n\nIt’s not much better. But the whole point of this was to allow us to train deeper models, and we’re not really taking advantage of that yet. To create a model that’s, say, twice as deep, all we need to do is replace our block with two ResBlocks in a row:\n\ndef block(ni, nf):\n return nn.Sequential(ResBlock(ni, nf, stride=2), ResBlock(nf, nf))\n\n\nlearn = get_learner(get_model())\nlearn.fit_one_cycle(5, 3e-3)\n\n\n\n\nepoch\ntrain_loss\nvalid_loss\naccuracy\ntime\n\n\n\n\n0\n1.964076\n1.864578\n0.355159\n00:12\n\n\n1\n1.636880\n1.596789\n0.502675\n00:12\n\n\n2\n1.335378\n1.304472\n0.588535\n00:12\n\n\n3\n1.089160\n1.065063\n0.663185\n00:12\n\n\n4\n0.942904\n0.963589\n0.692739\n00:12\n\n\n\n\n\nNow we’re making good progress!\nThe authors of the ResNet paper went on to win the 2015 ImageNet challenge. At the time, this was by far the most important annual event in computer vision. We have already seen another ImageNet winner: the 2013 winners, Zeiler and Fergus. It is interesting to note that in both cases the starting points for the breakthroughs were experimental observations: observations about what layers actually learn, in the case of Zeiler and Fergus, and observations about which kinds of networks can be trained, in the case of the ResNet authors. This ability to design and analyze thoughtful experiments, or even just to see an unexpected result, say “Hmmm, that’s interesting,” and then, most importantly, set about figuring out what on earth is going on, with great tenacity, is at the heart of many scientific discoveries. Deep learning is not like pure mathematics. It is a heavily experimental field, so it’s important to be a strong practitioner, not just a theoretician.\nSince the ResNet was introduced, it’s been widely studied and applied to many domains. One of the most interesting papers, published in 2018, is Hao Li et al.’s “Visualizing the Loss Landscape of Neural Nets”. It shows that using skip connections helps smooth the loss function, which makes training easier as it avoids falling into a very sharp area. Figure 14.3 shows a stunning picture from the paper, illustrating the difference between the bumpy terrain that SGD has to navigate to optimize a regular CNN (left) versus the smooth surface of a ResNet (right).\n\n\n\n\n\n\nFigure 14.3: Impact of ResNet on loss landscape (courtesy of Hao Li et al.)\n\n\n\nOur first model is already good, but further research has discovered more tricks we can apply to make it better. We’ll look at those next.\n\n\nA State-of-the-Art ResNet\nIn “Bag of Tricks for Image Classification with Convolutional Neural Networks”, Tong He et al. study different variations of the ResNet architecture that come at almost no additional cost in terms of number of parameters or computation. By using a tweaked ResNet-50 architecture and Mixup they achieved 94.6% top-5 accuracy on ImageNet, in comparison to 92.2% with a regular ResNet-50 without Mixup. 
This result is better than that achieved by regular ResNet models that are twice as deep (and twice as slow, and much more likely to overfit).\n\njargon: top-5 accuracy: A metric testing how often the label we want is in the top 5 predictions of our model. It was used in the ImageNet competition because many of the images contained multiple objects, or contained objects that could be easily confused or may even have been mislabeled with a similar label. In these situations, looking at top-1 accuracy may be inappropriate. However, recently CNNs have been getting so good that top-5 accuracy is nearly 100%, so some researchers are using top-1 accuracy for ImageNet too now.\n\nWe’ll use this tweaked version as we scale up to the full ResNet, because it’s substantially better. It differs a little bit from our previous implementation, in that instead of just starting with ResNet blocks, it begins with a few convolutional layers followed by a max pooling layer. This is what the first layers, called the stem of the network, look like:\n\ndef _resnet_stem(*sizes):\n return [\n ConvLayer(sizes[i], sizes[i+1], 3, stride = 2 if i==0 else 1)\n for i in range(len(sizes)-1)\n ] + [nn.MaxPool2d(kernel_size=3, stride=2, padding=1)]\n\n\n_resnet_stem(3,32,32,64)\n\n[ConvLayer(\n (0): Conv2d(3, 32, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))\n (1): BatchNorm2d(32, eps=1e-05, momentum=0.1)\n (2): ReLU()\n ), ConvLayer(\n (0): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (1): BatchNorm2d(32, eps=1e-05, momentum=0.1)\n (2): ReLU()\n ), ConvLayer(\n (0): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (1): BatchNorm2d(64, eps=1e-05, momentum=0.1)\n (2): ReLU()\n ), MaxPool2d(kernel_size=3, stride=2, padding=1, ceil_mode=False)]\n\njargon: Stem: The first few layers of a CNN. Generally, the stem has a different structure than the main body of the CNN.\n\nThe reason that we have a stem of plain convolutional layers, instead of ResNet blocks, is based on a very important insight about all deep convolutional neural networks: the vast majority of the computation occurs in the early layers. Therefore, we should keep the early layers as fast and simple as possible.\nTo see why so much computation occurs in the early layers, consider the very first convolution on a 128-pixel input image. If it is a stride-1 convolution, then it will apply the kernel to every one of the 128×128 pixels. That’s a lot of work! In the later layers, however, the grid size could be as small as 4×4 or even 2×2, so there are far fewer kernel applications to do.\nOn the other hand, the first-layer convolution only has 3 input features and 32 output features. Since it is a 3×3 kernel, this is 3×32×3×3 = 864 parameters in the weights. But the last convolution will have 256 input features and 512 output features, resulting in 1,179,648 weights! So the first layers contain the vast majority of the computation, but the last layers contain the vast majority of the parameters.\nA ResNet block takes more computation than a plain convolutional block, since (in the stride-2 case) a ResNet block has three convolutions and a pooling layer. That’s why we want to have plain convolutions to start off our ResNet.\nWe’re now ready to show the implementation of a modern ResNet, with the “bag of tricks.” It uses four groups of ResNet blocks, with 64, 128, 256, then 512 filters. 
Each group starts with a stride-2 block, except for the first one, since it’s just after a MaxPooling layer:\n\nclass ResNet(nn.Sequential):\n def __init__(self, n_out, layers, expansion=1):\n stem = _resnet_stem(3,32,32,64)\n self.block_szs = [64, 64, 128, 256, 512]\n for i in range(1,5): self.block_szs[i] *= expansion\n blocks = [self._make_layer(*o) for o in enumerate(layers)]\n super().__init__(*stem, *blocks,\n nn.AdaptiveAvgPool2d(1), Flatten(),\n nn.Linear(self.block_szs[-1], n_out))\n \n def _make_layer(self, idx, n_layers):\n stride = 1 if idx==0 else 2\n ch_in,ch_out = self.block_szs[idx:idx+2]\n return nn.Sequential(*[\n ResBlock(ch_in if i==0 else ch_out, ch_out, stride if i==0 else 1)\n for i in range(n_layers)\n ])\n\nThe _make_layer function is just there to create a series of n_layers blocks. The first one is going from ch_in to ch_out with the indicated stride and all the others are blocks of stride 1 with ch_out to ch_out tensors. Once the blocks are defined, our model is purely sequential, which is why we define it as a subclass of nn.Sequential. (Ignore the expansion parameter for now; we’ll discuss it in the next section. For now, it’ll be 1, so it doesn’t do anything.)\nThe various versions of the models (ResNet-18, -34, -50, etc.) just change the number of blocks in each of those groups. This is the definition of a ResNet-18:\n\nrn = ResNet(dls.c, [2,2,2,2])\n\nLet’s train it for a little bit and see how it fares compared to the previous model:\n\nlearn = get_learner(rn)\nlearn.fit_one_cycle(5, 3e-3)\n\nepoch\ntrain_loss\nvalid_loss\naccuracy\ntime\n\n0\n1.673882\n1.828394\n0.413758\n00:13\n\n1\n1.331675\n1.572685\n0.518217\n00:13\n\n2\n1.087224\n1.086102\n0.650701\n00:13\n\n3\n0.900428\n0.968219\n0.684331\n00:12\n\n4\n0.760280\n0.782558\n0.757197\n00:12\n\nEven though we have more channels (and our model is therefore even more accurate), our training is just as fast as before, thanks to our optimized stem.\nTo make our model deeper without taking too much compute or memory, we can use another kind of layer introduced by the ResNet paper for ResNets with a depth of 50 or more: the bottleneck layer.\n\nBottleneck Layers\nInstead of stacking two convolutions with a kernel size of 3, bottleneck layers use three different convolutions: two 1×1 (at the beginning and the end) and one 3×3, as shown on the right in Figure 14.4.\n\nFigure 14.4: Comparison of regular and bottleneck ResNet blocks (courtesy of Kaiming He et al.)\n\nWhy is that useful? 1×1 convolutions are much faster, so even if this seems to be a more complex design, this block executes faster than the first ResNet block we saw. This then lets us use more filters: as we see in the illustration, the number of filters in and out is 4 times higher (256 instead of 64). The 1×1 convolutions diminish and then restore the number of channels (hence the name bottleneck). The overall impact is that we can use more filters in the same amount of time.\nLet’s try replacing our ResBlock with this bottleneck design:\n\ndef _conv_block(ni,nf,stride):\n return nn.Sequential(\n ConvLayer(ni, nf//4, 1),\n ConvLayer(nf//4, nf//4, stride=stride), \n ConvLayer(nf//4, nf, 1, act_cls=None, norm_type=NormType.BatchZero))\n\nWe’ll use this to create a ResNet-50 with group sizes of (3,4,6,3). 
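\nTo get a rough feel for the trade-off, we can count weights by hand (an illustrative back-of-the-envelope calculation only, ignoring batchnorm, and using the 64 and 256 channel sizes from Figure 14.4 rather than from our own model):\n\nplain_64 = 2 * 64*64*3*3 # two 3x3 convs on 64 channels: 73,728 weights\nbottleneck_256 = 256*64 + 64*64*3*3 + 64*256 # 1x1 down, 3x3, 1x1 up: 69,632 weights\n\nSo a bottleneck block handles four times as many channels for slightly fewer weights, which is roughly why the design can afford those wider groups.\n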
We now need to pass 4 in to the expansion parameter of ResNet, since we need to start with four times less channels and we’ll end with four times more channels.\nDeeper networks like this don’t generally show improvements when training for only 5 epochs, so we’ll bump it up to 20 epochs this time to make the most of our bigger model. And to really get great results, let’s use bigger images too:\n\ndls = get_data(URLs.IMAGENETTE_320, presize=320, resize=224)\n\nWe don’t have to do anything to account for the larger 224-pixel images; thanks to our fully convolutional network, it just works. This is also why we were able to do progressive resizing earlier in the book—the models we used were fully convolutional, so we were even able to fine-tune models trained with different sizes. We can now train our model and see the effects:\n\nrn = ResNet(dls.c, [3,4,6,3], 4)\n\n\nlearn = get_learner(rn)\nlearn.fit_one_cycle(20, 3e-3)\n\n\n\n\nepoch\ntrain_loss\nvalid_loss\naccuracy\ntime\n\n\n\n\n0\n1.613448\n1.473355\n0.514140\n00:31\n\n\n1\n1.359604\n2.050794\n0.397452\n00:31\n\n\n2\n1.253112\n4.511735\n0.387006\n00:31\n\n\n3\n1.133450\n2.575221\n0.396178\n00:31\n\n\n4\n1.054752\n1.264525\n0.613758\n00:32\n\n\n5\n0.927930\n2.670484\n0.422675\n00:32\n\n\n6\n0.838268\n1.724588\n0.528662\n00:32\n\n\n7\n0.748289\n1.180668\n0.666497\n00:31\n\n\n8\n0.688637\n1.245039\n0.650446\n00:32\n\n\n9\n0.645530\n1.053691\n0.674904\n00:31\n\n\n10\n0.593401\n1.180786\n0.676433\n00:32\n\n\n11\n0.536634\n0.879937\n0.713885\n00:32\n\n\n12\n0.479208\n0.798356\n0.741656\n00:32\n\n\n13\n0.440071\n0.600644\n0.806879\n00:32\n\n\n14\n0.402952\n0.450296\n0.858599\n00:32\n\n\n15\n0.359117\n0.486126\n0.846369\n00:32\n\n\n16\n0.313642\n0.442215\n0.861911\n00:32\n\n\n17\n0.294050\n0.485967\n0.853503\n00:32\n\n\n18\n0.270583\n0.408566\n0.875924\n00:32\n\n\n19\n0.266003\n0.411752\n0.872611\n00:33\n\n\n\n\n\nWe’re getting a great result now! Try adding Mixup, and then training this for a hundred epochs while you go get lunch. You’ll have yourself a very accurate image classifier, trained from scratch.\nThe bottleneck design we’ve shown here is typically only used in ResNet-50, -101, and -152 models. ResNet-18 and -34 models usually use the non-bottleneck design seen in the previous section. However, we’ve noticed that the bottleneck layer generally works better even for the shallower networks. This just goes to show that the little details in papers tend to stick around for years, even if they’re actually not quite the best design! Questioning assumptions and “stuff everyone knows” is always a good idea, because this is still a new field, and there are lots of details that aren’t always done well.",
+ "crumbs": [
+ "From the Foundations",
+ "14ResNets"
+ ]
+ },
+ {
+ "objectID": "resnet.html#conclusion",
+ "href": "resnet.html#conclusion",
+ "title": "14 ResNets",
+ "section": "14.3 Conclusion",
+ "text": "14.3 Conclusion\nYou have now seen how the models we have been using for computer vision since the first chapter are built, using skip connections to allow deeper models to be trained. Even if there has been a lot of research into better architectures, they all use one version or another of this trick, to make a direct path from the input to the end of the network. When using transfer learning, the ResNet is the pretrained model. In the next chapter, we will look at the final details of how the models we actually used were built from it.",
+ "crumbs": [
+ "From the Foundations",
+ "14ResNets"
+ ]
+ },
+ {
+ "objectID": "resnet.html#questionnaire",
+ "href": "resnet.html#questionnaire",
+ "title": "14 ResNets",
+ "section": "14.4 Questionnaire",
+ "text": "14.4 Questionnaire\n\nHow did we get to a single vector of activations in the CNNs used for MNIST in previous chapters? Why isn’t that suitable for Imagenette?\nWhat do we do for Imagenette instead?\nWhat is “adaptive pooling”?\nWhat is “average pooling”?\nWhy do we need Flatten after an adaptive average pooling layer?\nWhat is a “skip connection”?\nWhy do skip connections allow us to train deeper models?\nWhat does Figure 14.1 show? How did that lead to the idea of skip connections?\nWhat is “identity mapping”?\nWhat is the basic equation for a ResNet block (ignoring batchnorm and ReLU layers)?\nWhat do ResNets have to do with residuals?\nHow do we deal with the skip connection when there is a stride-2 convolution? How about when the number of filters changes?\nHow can we express a 1×1 convolution in terms of a vector dot product?\nCreate a 1x1 convolution with F.conv2d or nn.Conv2d and apply it to an image. What happens to the shape of the image?\nWhat does the noop function return?\nExplain what is shown in Figure 14.3.\nWhen is top-5 accuracy a better metric than top-1 accuracy?\nWhat is the “stem” of a CNN?\nWhy do we use plain convolutions in the CNN stem, instead of ResNet blocks?\nHow does a bottleneck block differ from a plain ResNet block?\nWhy is a bottleneck block faster?\nHow do fully convolutional nets (and nets with adaptive pooling in general) allow for progressive resizing?\n\n\nFurther Research\n\nTry creating a fully convolutional net with adaptive average pooling for MNIST (note that you’ll need fewer stride-2 layers). How does it compare to a network without such a pooling layer?\nIn Chapter 17 we introduce Einstein summation notation. Skip ahead to see how this works, and then write an implementation of the 1×1 convolution operation using torch.einsum. Compare it to the same operation using torch.conv2d.\nWrite a “top-5 accuracy” function using plain PyTorch or plain Python.\nTrain a model on Imagenette for more epochs, with and without label smoothing. Take a look at the Imagenette leaderboards and see how close you can get to the best results shown. Read the linked pages describing the leading approaches.",
+ "crumbs": [
+ "From the Foundations",
+ "14ResNets"
+ ]
+ },
+ {
+ "objectID": "accel_sgd.html",
+ "href": "accel_sgd.html",
+ "title": "16 The Training Process",
+ "section": "",
+ "text": "16.1 Establishing a Baseline\nFirst, we’ll create a baseline, using plain SGD, and compare it to fastai’s default optimizer. We’ll start by grabbing Imagenette with the same get_data we used in Chapter 14:\ndls = get_data(URLs.IMAGENETTE_160, 160, 128)\nWe’ll create a ResNet-34 without pretraining, and pass along any arguments received:\ndef get_learner(**kwargs):\n return vision_learner(dls, resnet34, pretrained=False,\n metrics=accuracy, **kwargs).to_fp16()\nHere’s the default fastai optimizer, with the usual 3e-3 learning rate:\nlearn = get_learner()\nlearn.fit_one_cycle(3, 0.003)\n\n\n\n\nepoch\ntrain_loss\nvalid_loss\naccuracy\ntime\n\n\n\n\n0\n2.571932\n2.685040\n0.322548\n00:11\n\n\n1\n1.904674\n1.852589\n0.437452\n00:11\n\n\n2\n1.586909\n1.374908\n0.594904\n00:11\nNow let’s try plain SGD. We can pass opt_func (optimization function) to vision_learner to get fastai to use any optimizer:\nlearn = get_learner(opt_func=SGD)\nThe first thing to look at is lr_find:\nlearn.lr_find()\nIt looks like we’ll need to use a higher learning rate than we normally use:\nlearn.fit_one_cycle(3, 0.03, moms=(0,0,0))\n\n\n\n\nepoch\ntrain_loss\nvalid_loss\naccuracy\ntime\n\n\n\n\n0\n2.969412\n2.214596\n0.242038\n00:09\n\n\n1\n2.442730\n1.845950\n0.362548\n00:09\n\n\n2\n2.157159\n1.741143\n0.408917\n00:09\nBecause accelerating SGD with momentum is such a good idea, fastai does this by default in fit_one_cycle, so we turn it off with moms=(0,0,0). We’ll be discussing momentum shortly.)\nClearly, plain SGD isn’t training as fast as we’d like. So let’s learn some tricks to get accelerated training!",
+ "crumbs": [
+ "From the Foundations",
+ "16The Training Process"
+ ]
+ },
+ {
+ "objectID": "accel_sgd.html#a-generic-optimizer",
+ "href": "accel_sgd.html#a-generic-optimizer",
+ "title": "16 The Training Process",
+ "section": "16.2 A Generic Optimizer",
+ "text": "16.2 A Generic Optimizer\nTo build up our accelerated SGD tricks, we’ll need to start with a nice flexible optimizer foundation. No library prior to fastai provided such a foundation, but during fastai’s development we realized that all the optimizer improvements we’d seen in the academic literature could be handled using optimizer callbacks. These are small pieces of code that we can compose, mix and match in an optimizer to build the optimizer step. They are called by fastai’s lightweight Optimizer class. These are the definitions in Optimizer of the two key methods that we’ve been using in this book:\ndef zero_grad(self):\n for p,*_ in self.all_params():\n p.grad.detach_()\n p.grad.zero_()\n\ndef step(self):\n for p,pg,state,hyper in self.all_params():\n for cb in self.cbs:\n state = _update(state, cb(p, **{**state, **hyper}))\n self.state[p] = state\nAs we saw when training an MNIST model from scratch, zero_grad just loops through the parameters of the model and sets the gradients to zero. It also calls detach_, which removes any history of gradient computation, since it won’t be needed after zero_grad.\nThe more interesting method is step, which loops through the callbacks (cbs) and calls them to update the parameters (the _update function just calls state.update if there’s anything returned by cb). As you can see, Optimizer doesn’t actually do any SGD steps itself. Let’s see how we can add SGD to Optimizer.\nHere’s an optimizer callback that does a single SGD step, by multiplying -lr by the gradients and adding that to the parameter (when Tensor.add_ in PyTorch is passed two parameters, they are multiplied together before the addition):\n\ndef sgd_cb(p, lr, **kwargs): p.data.add_(-lr, p.grad.data)\n\nWe can pass this to Optimizer using the cbs parameter; we’ll need to use partial since Learner will call this function to create our optimizer later:\n\nopt_func = partial(Optimizer, cbs=[sgd_cb])\n\nLet’s see if this trains:\n\nlearn = get_learner(opt_func=opt_func)\nlearn.fit(3, 0.03)\n\n\n\n\nepoch\ntrain_loss\nvalid_loss\naccuracy\ntime\n\n\n\n\n0\n2.730918\n2.009971\n0.332739\n00:09\n\n\n1\n2.204893\n1.747202\n0.441529\n00:09\n\n\n2\n1.875621\n1.684515\n0.445350\n00:09\n\n\n\n\n\nIt’s working! So that’s how we create SGD from scratch in fastai. Now let’s see what “momentum” is.",
+ "crumbs": [
+ "From the Foundations",
+ "16The Training Process"
+ ]
+ },
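To make the callback-composition idea above concrete, here is a minimal, self-contained sketch of the same mechanism outside fastai. The ToyOptimizer class and its names are illustrative only (fastai's real Optimizer also handles parameter groups and hyperparameter schedules); sgd_cb is the callback from the section above, written with the keyword form of Tensor.add_.

import torch

def sgd_cb(p, lr, **kwargs):
    # Same update as in the text: p <- p - lr * p.grad
    p.data.add_(p.grad.data, alpha=-lr)

class ToyOptimizer:
    "A stripped-down stand-in for fastai's Optimizer: compose step callbacks over parameters."
    def __init__(self, params, cbs, **hypers):
        self.params,self.cbs,self.hypers = list(params),cbs,hypers
        self.state = {}
    def zero_grad(self):
        for p in self.params:
            if p.grad is not None:
                p.grad.detach_()
                p.grad.zero_()
    def step(self):
        for p in self.params:
            state = self.state.get(p, {})
            for cb in self.cbs:
                res = cb(p, **{**state, **self.hypers})
                if res is not None: state.update(res)   # mirrors the _update helper described above
            self.state[p] = state

w = torch.tensor([3.0], requires_grad=True)
opt = ToyOptimizer([w], cbs=[sgd_cb], lr=0.1)
for _ in range(50):
    loss = (w**2).sum()
    loss.backward()
    opt.step()
    opt.zero_grad()
print(w)   # should have moved close to 0

Because the optimizer only loops over callbacks, adding momentum, RMSProp, or Adam later is just a matter of adding more callbacks to cbs, which is exactly what the rest of this chapter does.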
+ {
+ "objectID": "accel_sgd.html#momentum",
+ "href": "accel_sgd.html#momentum",
+ "title": "16 The Training Process",
+ "section": "16.3 Momentum",
+ "text": "16.3 Momentum\nAs described in Chapter 4, SGD can be thought of as standing at the top of a mountain and working your way down by taking a step in the direction of the steepest slope at each point in time. But what if we have a ball rolling down the mountain? It won’t, at each given point, exactly follow the direction of the gradient, as it will have momentum. A ball with more momentum (for instance, a heavier ball) will skip over little bumps and holes, and be more likely to get to the bottom of a bumpy mountain. A ping pong ball, on the other hand, will get stuck in every little crevice.\nSo how can we bring this idea over to SGD? We can use a moving average, instead of only the current gradient, to make our step:\nweight.avg = beta * weight.avg + (1-beta) * weight.grad\nnew_weight = weight - lr * weight.avg\nHere beta is some number we choose which defines how much momentum to use. If beta is 0, then the first equation becomes weight.avg = weight.grad, so we end up with plain SGD. But if it’s a number close to 1, then the main direction chosen is an average of the previous steps. (If you have done a bit of statistics, you may recognize in the first equation an exponentially weighted moving average, which is very often used to denoise data and get the underlying tendency.)\nNote that we are writing weight.avg to highlight the fact that we need to store the moving averages for each parameter of the model (they all have their own independent moving averages).\nFigure 16.1 shows an example of noisy data for a single parameter, with the momentum curve plotted in red, and the gradients of the parameter plotted in blue. The gradients increase, then decrease, and the momentum does a good job of following the general trend without getting too influenced by noise.\n\n\n\n\n\n\n\n\nFigure 16.1: An example of momentum\n\n\n\n\n\nIt works particularly well if the loss function has narrow canyons we need to navigate: vanilla SGD would send us bouncing from one side to the other, while SGD with momentum will average those to roll smoothly down the side. The parameter beta determines the strength of the momentum we are using: with a small beta we stay closer to the actual gradient values, whereas with a high beta we will mostly go in the direction of the average of the gradients and it will take a while before any change in the gradients makes that trend move.\nWith a large beta, we might miss that the gradients have changed directions and roll over a small local minima. This is a desired side effect: intuitively, when we show a new input to our model, it will look like something in the training set but won’t be exactly like it. That means it will correspond to a point in the loss function that is close to the minimum we ended up with at the end of training, but not exactly at that minimum. So, we would rather end up training in a wide minimum, where nearby points have approximately the same loss (or if you prefer, a point where the loss is as flat as possible). Figure 16.2 shows how the chart in Figure 16.1 varies as we change beta.\n\n\n\n\n\n\n\n\nFigure 16.2: Momentum with different beta values\n\n\n\n\n\nWe can see in these examples that a beta that’s too high results in the overall changes in gradient getting ignored. In SGD with momentum, a value of beta that is often used is 0.9.\nfit_one_cycle by default starts with a beta of 0.95, gradually adjusts it to 0.85, and then gradually moves it back to 0.95 at the end of training. 
Let’s see how our training goes with momentum added to plain SGD.\nIn order to add momentum to our optimizer, we’ll first need to keep track of the moving average gradient, which we can do with another callback. When an optimizer callback returns a dict, it is used to update the state of the optimizer and is passed back to the optimizer on the next step. So this callback will keep track of the gradient averages in a parameter called grad_avg:\n\ndef average_grad(p, mom, grad_avg=None, **kwargs):\n if grad_avg is None: grad_avg = torch.zeros_like(p.grad.data)\n return {'grad_avg': grad_avg*mom + p.grad.data}\n\nTo use it, we just have to replace p.grad.data with grad_avg in our step function:\n\ndef momentum_step(p, lr, grad_avg, **kwargs): p.data.add_(-lr, grad_avg)\n\n\nopt_func = partial(Optimizer, cbs=[average_grad,momentum_step], mom=0.9)\n\nLearner will automatically schedule mom and lr, so fit_one_cycle will even work with our custom Optimizer:\n\nlearn = get_learner(opt_func=opt_func)\nlearn.fit_one_cycle(3, 0.03)\n\n\n\n\nepoch\ntrain_loss\nvalid_loss\naccuracy\ntime\n\n\n\n\n0\n2.856000\n2.493429\n0.246115\n00:10\n\n\n1\n2.504205\n2.463813\n0.348280\n00:10\n\n\n2\n2.187387\n1.755670\n0.418853\n00:10\n\n\n\n\n\n\nlearn.recorder.plot_sched()\n\n\n\n\n\n\n\n\nWe’re still not getting great results, so let’s see what else we can do.",
+ "crumbs": [
+ "From the Foundations",
+ "16The Training Process"
+ ]
+ },
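The exponentially weighted moving average above is easy to play with on its own. The following sketch (with made-up "gradient" values) simply applies the momentum equation step by step so you can see how a larger beta smooths the sequence more; it is not fastai code.

import torch

torch.manual_seed(42)
# Fake noisy gradients for a single parameter: an underlying trend plus noise.
grads = torch.linspace(-1, 1, 100) + 0.3*torch.randn(100)

def ewma(values, beta):
    # weight.avg = beta*weight.avg + (1-beta)*weight.grad, applied step by step
    avg, out = 0., []
    for g in values:
        avg = beta*avg + (1-beta)*g.item()
        out.append(avg)
    return out

for beta in (0.5, 0.9, 0.99):
    smoothed = ewma(grads, beta)
    # A higher beta follows the overall trend but reacts more slowly to changes.
    print(beta, round(smoothed[-1], 3))

Note that this prose formula includes the (1-beta) factor, while the average_grad callback shown above accumulates grad_avg*mom + p.grad.data without it; that only rescales the average, and the learning rate absorbs the difference.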
+ {
+ "objectID": "accel_sgd.html#rmsprop",
+ "href": "accel_sgd.html#rmsprop",
+ "title": "16 The Training Process",
+ "section": "16.4 RMSProp",
+ "text": "16.4 RMSProp\nRMSProp is another variant of SGD introduced by Geoffrey Hinton in Lecture 6e of his Coursera class “Neural Networks for Machine Learning”. The main difference from SGD is that it uses an adaptive learning rate: instead of using the same learning rate for every parameter, each parameter gets its own specific learning rate controlled by a global learning rate. That way we can speed up training by giving a higher learning rate to the weights that need to change a lot while the ones that are good enough get a lower learning rate.\nHow do we decide which parameters should have a high learning rate and which should not? We can look at the gradients to get an idea. If a parameter’s gradients have been close to zero for a while, that parameter will need a higher learning rate because the loss is flat. On the other hand, if the gradients are all over the place, we should probably be careful and pick a low learning rate to avoid divergence. We can’t just average the gradients to see if they’re changing a lot, because the average of a large positive and a large negative number is close to zero. Instead, we can use the usual trick of either taking the absolute value or the squared values (and then taking the square root after the mean).\nOnce again, to determine the general tendency behind the noise, we will use a moving average—specifically the moving average of the gradients squared. Then we will update the corresponding weight by using the current gradient (for the direction) divided by the square root of this moving average (that way if it’s low, the effective learning rate will be higher, and if it’s high, the effective learning rate will be lower):\nw.square_avg = alpha * w.square_avg + (1-alpha) * (w.grad ** 2)\nnew_w = w - lr * w.grad / math.sqrt(w.square_avg + eps)\nThe eps (epsilon) is added for numerical stability (usually set at 1e-8), and the default value for alpha is usually 0.99.\nWe can add this to Optimizer by doing much the same thing we did for avg_grad, but with an extra **2:\n\ndef average_sqr_grad(p, sqr_mom, sqr_avg=None, **kwargs):\n if sqr_avg is None: sqr_avg = torch.zeros_like(p.grad.data)\n return {'sqr_avg': sqr_mom*sqr_avg + (1-sqr_mom)*p.grad.data**2}\n\nAnd we can define our step function and optimizer as before:\n\ndef rms_prop_step(p, lr, sqr_avg, eps, grad_avg=None, **kwargs):\n denom = sqr_avg.sqrt().add_(eps)\n p.data.addcdiv_(-lr, p.grad, denom)\n\nopt_func = partial(Optimizer, cbs=[average_sqr_grad,rms_prop_step],\n sqr_mom=0.99, eps=1e-7)\n\nLet’s try it out:\n\nlearn = get_learner(opt_func=opt_func)\nlearn.fit_one_cycle(3, 0.003)\n\n\n\n\nepoch\ntrain_loss\nvalid_loss\naccuracy\ntime\n\n\n\n\n0\n2.766912\n1.845900\n0.402548\n00:11\n\n\n1\n2.194586\n1.510269\n0.504459\n00:11\n\n\n2\n1.869099\n1.447939\n0.544968\n00:11\n\n\n\n\n\nMuch better! Now we just have to bring these ideas together, and we have Adam, fastai’s default optimizer.",
+ "crumbs": [
+ "From the Foundations",
+ "16The Training Process"
+ ]
+ },
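As a sanity check on the equations above, here is a stand-alone sketch (not the fastai implementation) that applies the RMSProp update to a single toy parameter with a simple quadratic loss; the variable names mirror the formulas in the text.

import torch

torch.manual_seed(0)
w = torch.randn(5, requires_grad=True)   # one toy parameter tensor
sqr_avg = torch.zeros_like(w)            # moving average of squared gradients
lr, alpha, eps = 0.01, 0.99, 1e-8

for _ in range(200):
    loss = ((w - 2.)**2).mean()          # toy loss, minimum at w == 2
    loss.backward()
    with torch.no_grad():
        # w.square_avg = alpha*w.square_avg + (1-alpha)*(w.grad**2)
        sqr_avg.mul_(alpha).add_(w.grad**2, alpha=1-alpha)
        # new_w = w - lr*w.grad / sqrt(w.square_avg + eps)
        w -= lr * w.grad / (sqr_avg + eps).sqrt()
    w.grad.zero_()

print(w)   # every entry should have moved toward 2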
+ {
+ "objectID": "accel_sgd.html#adam",
+ "href": "accel_sgd.html#adam",
+ "title": "16 The Training Process",
+ "section": "16.5 Adam",
+ "text": "16.5 Adam\nAdam mixes the ideas of SGD with momentum and RMSProp together: it uses the moving average of the gradients as a direction and divides by the square root of the moving average of the gradients squared to give an adaptive learning rate to each parameter.\nThere is one other difference in how Adam calculates moving averages. It takes the unbiased moving average, which is:\nw.avg = beta * w.avg + (1-beta) * w.grad\nunbias_avg = w.avg / (1 - (beta**(i+1)))\nif we are the i-th iteration (starting at 0 like Python does). This divisor of 1 - (beta**(i+1)) makes sure the unbiased average looks more like the gradients at the beginning (since beta < 1, the denominator is very quickly close to 1).\nPutting everything together, our update step looks like:\nw.avg = beta1 * w.avg + (1-beta1) * w.grad\nunbias_avg = w.avg / (1 - (beta1**(i+1)))\nw.sqr_avg = beta2 * w.sqr_avg + (1-beta2) * (w.grad ** 2)\nnew_w = w - lr * unbias_avg / sqrt(w.sqr_avg + eps)\nLike for RMSProp, eps is usually set to 1e-8, and the default for (beta1,beta2) suggested by the literature is (0.9,0.999).\nIn fastai, Adam is the default optimizer we use since it allows faster training, but we’ve found that beta2=0.99 is better suited to the type of schedule we are using. beta1 is the momentum parameter, which we specify with the argument moms in our call to fit_one_cycle. As for eps, fastai uses a default of 1e-5. eps is not just useful for numerical stability. A higher eps limits the maximum value of the adjusted learning rate. To take an extreme example, if eps is 1, then the adjusted learning will never be higher than the base learning rate.\nRather than show all the code for this in the book, we’ll let you look at the optimizer notebook in fastai’s GitHub repository (browse the nbs folder and search for the notebook called optimizer). You’ll see all the code we’ve shown so far, along with Adam and other optimizers, and lots of examples and tests.\nOne thing that changes when we go from SGD to Adam is the way we apply weight decay, and it can have important consequences.",
+ "crumbs": [
+ "From the Foundations",
+ "16The Training Process"
+ ]
+ },
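Since the fastai version lives in the optimizer notebook mentioned above, here is only a rough, self-contained transcription of the Adam equations from this section, applied to one toy parameter. It is a sketch for intuition, not fastai's (or PyTorch's) actual Adam.

import torch

torch.manual_seed(0)
w = torch.randn(5, requires_grad=True)
avg, sqr_avg = torch.zeros_like(w), torch.zeros_like(w)
lr, beta1, beta2, eps = 0.1, 0.9, 0.99, 1e-5   # beta2=0.99 and eps=1e-5, as in the fastai defaults described above

for i in range(200):
    loss = ((w - 2.)**2).mean()                # toy loss, minimum at w == 2
    loss.backward()
    with torch.no_grad():
        avg     = beta1*avg     + (1-beta1)*w.grad
        sqr_avg = beta2*sqr_avg + (1-beta2)*w.grad**2
        unbias_avg = avg / (1 - beta1**(i+1))  # debiased moving average of the gradients
        w -= lr * unbias_avg / (sqr_avg + eps).sqrt()
    w.grad.zero_()

print(w)   # entries should end up close to 2 (Adam keeps taking small steps around the minimum)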
+ {
+ "objectID": "accel_sgd.html#decoupled-weight-decay",
+ "href": "accel_sgd.html#decoupled-weight-decay",
+ "title": "16 The Training Process",
+ "section": "16.6 Decoupled Weight Decay",
+ "text": "16.6 Decoupled Weight Decay\nWeight decay, which we discussed in Chapter 8, is equivalent to (in the case of vanilla SGD) updating the parameters with:\nnew_weight = weight - lr*weight.grad - lr*wd*weight\nThe last part of this formula explains the name of this technique: each weight is decayed by a factor lr * wd.\nThe other name of weight decay is L2 regularization, which consists in adding the sum of all squared weights to the loss (multiplied by the weight decay). As we have seen in Chapter 8, this can be directly expressed on the gradients with:\nweight.grad += wd*weight\nFor SGD, those two formulas are equivalent. However, this equivalence only holds for standard SGD, because we have seen that with momentum, RMSProp or in Adam, the update has some additional formulas around the gradient.\nMost libraries use the second formulation, but it was pointed out in “Decoupled Weight Decay Regularization” by Ilya Loshchilov and Frank Hutter, that the first one is the only correct approach with the Adam optimizer or momentum, which is why fastai makes it its default.\nNow you know everything that is hidden behind the line learn.fit_one_cycle!\nOptimizers are only one part of the training process, however when you need to change the training loop with fastai, you can’t directly change the code inside the library. Instead, we have designed a system of callbacks to let you write any tweaks you like in independent blocks that you can then mix and match.",
+ "crumbs": [
+ "From the Foundations",
+ "16The Training Process"
+ ]
+ },
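A quick numerical sketch of the two formulations above, for a single plain-SGD step with made-up numbers, shows why they coincide there:

import torch

torch.manual_seed(0)
lr, wd = 0.1, 0.01
weight = torch.randn(5)
grad = torch.randn(5)                      # pretend this came from loss.backward()

# Decoupled weight decay: decay applied directly to the weights.
w_decoupled = weight - lr*grad - lr*wd*weight

# L2 regularization: wd*weight folded into the gradient before the step.
w_l2 = weight - lr*(grad + wd*weight)

print(torch.allclose(w_decoupled, w_l2))   # True: identical for vanilla SGD

With momentum or Adam, the second form sends wd*weight through the moving averages and the adaptive scaling, so the two updates are no longer the same; that is exactly the point of the Loshchilov and Hutter paper.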
+ {
+ "objectID": "accel_sgd.html#callbacks",
+ "href": "accel_sgd.html#callbacks",
+ "title": "16 The Training Process",
+ "section": "16.7 Callbacks",
+ "text": "16.7 Callbacks\nSometimes you need to change how things work a little bit. In fact, we have already seen examples of this: Mixup, fp16 training, resetting the model after each epoch for training RNNs, and so forth. How do we go about making these kinds of tweaks to the training process?\nWe’ve seen the basic training loop, which, with the help of the Optimizer class, looks like this for a single epoch:\nfor xb,yb in dl:\n loss = loss_func(model(xb), yb)\n loss.backward()\n opt.step()\n opt.zero_grad()\nFigure 16.3 shows how to picture that.\n\n\n\n\n\n\nFigure 16.3: Basic training loop\n\n\n\nThe usual way for deep learning practitioners to customize the training loop is to make a copy of an existing training loop, and then insert the code necessary for their particular changes into it. This is how nearly all code that you find online will look. But it has some very serious problems.\nIt’s not very likely that some particular tweaked training loop is going to meet your particular needs. There are hundreds of changes that can be made to a training loop, which means there are billions and billions of possible permutations. You can’t just copy one tweak from a training loop here, another from a training loop there, and expect them all to work together. Each will be based on different assumptions about the environment that it’s working in, use different naming conventions, and expect the data to be in different formats.\nWe need a way to allow users to insert their own code at any part of the training loop, but in a consistent and well-defined way. Computer scientists have already come up with an elegant solution: the callback. A callback is a piece of code that you write, and inject into another piece of code at some predefined point. In fact, callbacks have been used with deep learning training loops for years. The problem is that in previous libraries it was only possible to inject code in a small subset of places where this may have been required, and, more importantly, callbacks were not able to do all the things they needed to do.\nIn order to be just as flexible as manually copying and pasting a training loop and directly inserting code into it, a callback must be able to read every possible piece of information available in the training loop, modify all of it as needed, and fully control when a batch, epoch, or even the whole training loop should be terminated. fastai is the first library to provide all of this functionality. It modifies the training loop so it looks like Figure 16.4.\n\n\n\n\n\n\nFigure 16.4: Training loop with callbacks\n\n\n\nThe real effectiveness of this approach has been borne out over the last couple of years—it has turned out that, by using the fastai callback system, we were able to implement every single new paper we tried and fulfilled every user request for modifying the training loop. The training loop itself has not required modifications. Figure 16.5 shows just a few of the callbacks that have been added.\n\n\n\n\n\n\nFigure 16.5: Some fastai callbacks\n\n\n\nThe reason that this is important is because it means that whatever idea we have in our head, we can implement it. We need never dig into the source code of PyTorch or fastai and hack together some one-off system to try out our ideas. 
And when we do implement our own callbacks to develop our own ideas, we know that they will work together with all of the other functionality provided by fastai–so we will get progress bars, mixed-precision training, hyperparameter annealing, and so forth.\nAnother advantage is that it makes it easy to gradually remove or add functionality and perform ablation studies. You just need to adjust the list of callbacks you pass along to your fit function.\nAs an example, here is the fastai source code that is run for each batch of the training loop:\ntry:\n self._split(b); self('before_batch')\n self.pred = self.model(*self.xb); self('after_pred')\n self.loss = self.loss_func(self.pred, *self.yb); self('after_loss')\n if not self.training: return\n self.loss.backward(); self('after_backward')\n self.opt.step(); self('after_step')\n self.opt.zero_grad()\nexcept CancelBatchException: self('after_cancel_batch')\nfinally: self('after_batch')\nThe calls of the form self('...') are where the callbacks are called. As you see, this happens after every step. The callback will receive the entire state of training, and can also modify it. For instance, the input data and target labels are in self.xb and self.yb, respectively; a callback can modify these to alter the data the training loop sees. It can also modify self.loss, or even the gradients.\nLet’s see how this works in practice by writing a callback.\n\nCreating a Callback\nWhen you want to write your own callback, the full list of available events is:\n\nbefore_fit:: called before doing anything; ideal for initial setup.\nbefore_epoch:: called at the beginning of each epoch; useful for any behavior you need to reset at each epoch.\nbefore_train:: called at the beginning of the training part of an epoch.\nbefore_batch:: called at the beginning of each batch, just after drawing said batch. It can be used to do any setup necessary for the batch (like hyperparameter scheduling) or to change the input/target before it goes into the model (for instance, apply Mixup).\nafter_pred:: called after computing the output of the model on the batch. It can be used to change that output before it’s fed to the loss function.\nafter_loss:: called after the loss has been computed, but before the backward pass. It can be used to add penalty to the loss (AR or TAR in RNN training, for instance).\nafter_backward:: called after the backward pass, but before the update of the parameters. It can be used to make changes to the gradients before said update (via gradient clipping, for instance).\nafter_step:: called after the step and before the gradients are zeroed.\nafter_batch:: called at the end of a batch, to perform any required cleanup before the next one.\nafter_train:: called at the end of the training phase of an epoch.\nbefore_validate:: called at the beginning of the validation phase of an epoch; useful for any setup needed specifically for validation.\nafter_validate:: called at the end of the validation part of an epoch.\nafter_epoch:: called at the end of an epoch, for any cleanup before the next one.\nafter_fit:: called at the end of training, for final cleanup.\n\nThe elements of this list are available as attributes of the special variable event, so you can just type event. and hit Tab in your notebook to see a list of all the options.\nLet’s take a look at an example. Do you recall how in Chapter 12 we needed to ensure that our special reset method was called at the start of training and validation for each epoch? 
We used the ModelResetter callback provided by fastai to do this for us. But how does it work? Here’s the full source code for that class:\n\nclass ModelResetter(Callback):\n def before_train(self): self.model.reset()\n def before_validate(self): self.model.reset()\n\nYes, that’s actually it! It just does what we said in the preceding paragraph: after completing training or validation for an epoch, call a method named reset.\nCallbacks are often “short and sweet” like this one. In fact, let’s look at one more. Here’s the fastai source for the callback that adds RNN regularization (AR and TAR):\n\nclass RNNRegularizer(Callback):\n def __init__(self, alpha=0., beta=0.): self.alpha,self.beta = alpha,beta\n\n def after_pred(self):\n self.raw_out,self.out = self.pred[1],self.pred[2]\n self.learn.pred = self.pred[0]\n\n def after_loss(self):\n if not self.training: return\n if self.alpha != 0.:\n self.learn.loss += self.alpha * self.out[-1].float().pow(2).mean()\n if self.beta != 0.:\n h = self.raw_out[-1]\n if len(h)>1:\n self.learn.loss += self.beta * (h[:,1:] - h[:,:-1]\n ).float().pow(2).mean()\n\n\nnote: Code It Yourself: Go back and reread “Activation Regularization and Temporal Activation Regularization” in Chapter 12 then take another look at the code here. Make sure you understand what it’s doing, and why.\n\nIn both of these examples, notice how we can access attributes of the training loop by directly checking self.model or self.pred. That’s because a Callback will always try to get an attribute it doesn’t have inside the Learner associated with it. These are shortcuts for self.learn.model or self.learn.pred. Note that they work for reading attributes, but not for writing them, which is why when RNNRegularizer changes the loss or the predictions you see self.learn.loss = or self.learn.pred =.\nWhen writing a callback, the following attributes of Learner are available:\n\nmodel:: The model used for training/validation.\ndata:: The underlying DataLoaders.\nloss_func:: The loss function used.\nopt:: The optimizer used to update the model parameters.\nopt_func:: The function used to create the optimizer.\ncbs:: The list containing all the Callbacks.\ndl:: The current DataLoader used for iteration.\nx/xb:: The last input drawn from self.dl (potentially modified by callbacks). xb is always a tuple (potentially with one element) and x is detuplified. You can only assign to xb.\ny/yb:: The last target drawn from self.dl (potentially modified by callbacks). yb is always a tuple (potentially with one element) and y is detuplified. You can only assign to yb.\npred:: The last predictions from self.model (potentially modified by callbacks).\nloss:: The last computed loss (potentially modified by callbacks).\nn_epoch:: The number of epochs in this training.\nn_iter:: The number of iterations in the current self.dl.\nepoch:: The current epoch index (from 0 to n_epoch-1).\niter:: The current iteration index in self.dl (from 0 to n_iter-1).\n\nThe following attributes are added by TrainEvalCallback and should be available unless you went out of your way to remove that callback:\n\ntrain_iter:: The number of training iterations done since the beginning of this training\npct_train:: The percentage of training iterations completed (from 0. 
to 1.)\ntraining:: A flag to indicate whether or not we’re in training mode\n\nThe following attribute is added by Recorder and should be available unless you went out of your way to remove that callback:\n\nsmooth_loss:: An exponentially averaged version of the training loss\n\nCallbacks can also interrupt any part of the training loop by using a system of exceptions.\n\n\nCallback Ordering and Exceptions\nSometimes, callbacks need to be able to tell fastai to skip over a batch, or an epoch, or stop training altogether. For instance, consider TerminateOnNaNCallback. This handy callback will automatically stop training any time the loss becomes infinite or NaN (not a number). Here’s the fastai source for this callback:\n\nclass TerminateOnNaNCallback(Callback):\n run_before=Recorder\n def after_batch(self):\n if torch.isinf(self.loss) or torch.isnan(self.loss):\n raise CancelFitException\n\nThe line raise CancelFitException tells the training loop to interrupt training at this point. The training loop catches this exception and does not run any further training or validation. The callback control flow exceptions available are:\n\nCancelBatchException:: Skip the rest of this batch and go to after_batch.\nCancelTrainException:: Skip the rest of the training part of the epoch and go to after_train.\nCancelValidException:: Skip the rest of the validation part of the epoch and go to after_validate.\nCancelEpochException:: Skip the rest of this epoch and go to after_epoch.\nCancelFitException:: Interrupt training and go to after_fit.\n\nYou can detect if one of those exceptions has occurred and add code that executes right after with the following events:\n\nafter_cancel_batch:: Reached immediately after a CancelBatchException before proceeding to after_batch\nafter_cancel_train:: Reached immediately after a CancelTrainException before proceeding to after_train\nafter_cancel_valid:: Reached immediately after a CancelValidException before proceeding to after_validate\nafter_cancel_epoch:: Reached immediately after a CancelEpochException before proceeding to after_epoch\nafter_cancel_fit:: Reached immediately after a CancelFitException before proceeding to after_fit\n\nSometimes, callbacks need to be called in a particular order. For example, in the case of TerminateOnNaNCallback, it’s important that Recorder runs its after_batch after this callback, to avoid registering a NaN loss. You can specify run_before (this callback must run before …) or run_after (this callback must run after …) in your callback to ensure the ordering that you need.",
+ "crumbs": [
+ "From the Foundations",
+ "16The Training Process"
+ ]
+ },
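To show how short a custom callback typically is, here is a sketch of two made-up callbacks using the events and Learner shortcuts listed above: one clips the gradient norm in after_backward, the other uses CancelFitException to cut a run short for a quick smoke test. The class names are invented for illustration, and the usage line assumes the usual fastai imports and a Learner like the ones earlier in this chapter.

import torch
from fastai.vision.all import *

class ClipGradCallback(Callback):
    "Clip the gradient norm after the backward pass, before the optimizer step."
    def __init__(self, max_norm=1.0): self.max_norm = max_norm
    def after_backward(self):
        # self.model is the shortcut for self.learn.model described above
        torch.nn.utils.clip_grad_norm_(self.model.parameters(), self.max_norm)

class StopAfterNBatches(Callback):
    "Interrupt training after n training batches (handy for smoke tests)."
    def __init__(self, n=10): self.n = n
    def after_batch(self):
        if self.training and self.train_iter >= self.n: raise CancelFitException()

# They are passed like any other callback, e.g.:
# learn.fit_one_cycle(1, 3e-3, cbs=[ClipGradCallback(0.5), StopAfterNBatches(20)])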
+ {
+ "objectID": "accel_sgd.html#conclusion",
+ "href": "accel_sgd.html#conclusion",
+ "title": "16 The Training Process",
+ "section": "16.8 Conclusion",
+ "text": "16.8 Conclusion\nIn this chapter we took a close look at the training loop, exploring different variants of SGD and why they can be more powerful. At the time of writing, developing new optimizers is a very active area of research, so by the time you read this chapter there may be an addendum on the book’s website that presents new variants. Be sure to check out how our general optimizer framework can help you implement new optimizers very quickly.\nWe also examined the powerful callback system that allows you to customize every bit of the training loop by enabling you to inspect and modify any parameter you like between each step.",
+ "crumbs": [
+ "From the Foundations",
+ "16The Training Process"
+ ]
+ },
+ {
+ "objectID": "accel_sgd.html#questionnaire",
+ "href": "accel_sgd.html#questionnaire",
+ "title": "16 The Training Process",
+ "section": "16.9 Questionnaire",
+ "text": "16.9 Questionnaire\n\nWhat is the equation for a step of SGD, in math or code (as you prefer)?\nWhat do we pass to vision_learner to use a non-default optimizer?\nWhat are optimizer callbacks?\nWhat does zero_grad do in an optimizer?\nWhat does step do in an optimizer? How is it implemented in the general optimizer?\nRewrite sgd_cb to use the += operator, instead of add_.\nWhat is “momentum”? Write out the equation.\nWhat’s a physical analogy for momentum? How does it apply in our model training settings?\nWhat does a bigger value for momentum do to the gradients?\nWhat are the default values of momentum for 1cycle training?\nWhat is RMSProp? Write out the equation.\nWhat do the squared values of the gradients indicate?\nHow does Adam differ from momentum and RMSProp?\nWrite out the equation for Adam.\nCalculate the values of unbias_avg and w.avg for a few batches of dummy values.\nWhat’s the impact of having a high eps in Adam?\nRead through the optimizer notebook in fastai’s repo, and execute it.\nIn what situations do dynamic learning rate methods like Adam change the behavior of weight decay?\nWhat are the four steps of a training loop?\nWhy is using callbacks better than writing a new training loop for each tweak you want to add?\nWhat aspects of the design of fastai’s callback system make it as flexible as copying and pasting bits of code?\nHow can you get the list of events available to you when writing a callback?\nWrite the ModelResetter callback (without peeking).\nHow can you access the necessary attributes of the training loop inside a callback? When can you use or not use the shortcuts that go with them?\nHow can a callback influence the control flow of the training loop?\nWrite the TerminateOnNaN callback (without peeking, if possible).\nHow do you make sure your callback runs after or before another callback?\n\n\nFurther Research\n\nLook up the “Rectified Adam” paper, implement it using the general optimizer framework, and try it out. Search for other recent optimizers that work well in practice, and pick one to implement.\nLook at the mixed-precision callback with the documentation. Try to understand what each event and line of code does.\nImplement your own version of the learning rate finder from scratch. Compare it with fastai’s version.\nLook at the source code of the callbacks that ship with fastai. See if you can find one that’s similar to what you’re looking to do, to get some inspiration.",
+ "crumbs": [
+ "From the Foundations",
+ "16The Training Process"
+ ]
+ },
+ {
+ "objectID": "accel_sgd.html#foundations-of-deep-learning-wrap-up",
+ "href": "accel_sgd.html#foundations-of-deep-learning-wrap-up",
+ "title": "16 The Training Process",
+ "section": "16.10 Foundations of Deep Learning: Wrap up",
+ "text": "16.10 Foundations of Deep Learning: Wrap up\nCongratulations, you have made it to the end of the “foundations of deep learning” section of the book! You now understand how all of fastai’s applications and most important architectures are built, and the recommended ways to train them—and you have all the information you need to build these from scratch. While you probably won’t need to create your own training loop, or batchnorm layer, for instance, knowing what is going on behind the scenes is very helpful for debugging, profiling, and deploying your solutions.\nSince you understand the foundations of fastai’s applications now, be sure to spend some time digging through the source notebooks and running and experimenting with parts of them. This will give you a better idea of how everything in fastai is developed.\nIn the next section, we will be looking even further under the covers: we’ll explore how the actual forward and backward passes of a neural network are done, and we will see what tools are at our disposal to get better performance. We will then continue with a project that brings together all the material in the book, which we will use to build a tool for interpreting convolutional neural networks. Last but not least, we’ll finish by building fastai’s Learner class from scratch.",
+ "crumbs": [
+ "From the Foundations",
+ "16The Training Process"
+ ]
+ },
+ {
+ "objectID": "foundations.html",
+ "href": "foundations.html",
+ "title": "17 A Neural Net from the Foundations",
+ "section": "",
+ "text": "17.1 Building a Neural Net Layer from Scratch\nLet’s start by refreshing our understanding of how matrix multiplication is used in a basic neural network. Since we’re building everything up from scratch, we’ll use nothing but plain Python initially (except for indexing into PyTorch tensors), and then replace the plain Python with PyTorch functionality once we’ve seen how to create it.",
+ "crumbs": [
+ "From the Foundations",
+ "17A Neural Net from the Foundations"
+ ]
+ },
+ {
+ "objectID": "foundations.html#building-a-neural-net-layer-from-scratch",
+ "href": "foundations.html#building-a-neural-net-layer-from-scratch",
+ "title": "17 A Neural Net from the Foundations",
+ "section": "",
+ "text": "Modeling a Neuron\nA neuron receives a given number of inputs and has an internal weight for each of them. It sums those weighted inputs to produce an output and adds an inner bias. In math, this can be written as:\n\\[ out = \\sum_{i=1}^{n} x_{i} w_{i} + b\\]\nif we name our inputs \\((x_{1},\\dots,x_{n})\\), our weights \\((w_{1},\\dots,w_{n})\\), and our bias \\(b\\). In code this translates into:\noutput = sum([x*w for x,w in zip(inputs,weights)]) + bias\nThis output is then fed into a nonlinear function called an activation function before being sent to another neuron. In deep learning the most common of these is the rectified Linear unit, or ReLU, which, as we’ve seen, is a fancy way of saying:\ndef relu(x): return x if x >= 0 else 0\nA deep learning model is then built by stacking a lot of those neurons in successive layers. We create a first layer with a certain number of neurons (known as hidden size) and link all the inputs to each of those neurons. Such a layer is often called a fully connected layer or a dense layer (for densely connected), or a linear layer.\nIt requires to compute, for each input in our batch and each neuron with a give weight, the dot product:\nsum([x*w for x,w in zip(input,weight)])\nIf you have done a little bit of linear algebra, you may remember that having a lot of those dot products happens when you do a matrix multiplication. More precisely, if our inputs are in a matrix x with a size of batch_size by n_inputs, and if we have grouped the weights of our neurons in a matrix w of size n_neurons by n_inputs (each neuron must have the same number of weights as it has inputs) and all the biases in a vector b of size n_neurons, then the output of this fully connected layer is:\ny = x @ w.t() + b\nwhere @ represents the matrix product and w.t() is the transpose matrix of w. The output y is then of size batch_size by n_neurons, and in position (i,j) we have (for the mathy folks out there):\n\\[y_{i,j} = \\sum_{k=1}^{n} x_{i,k} w_{k,j} + b_{j}\\]\nOr in code:\ny[i,j] = sum([a * b for a,b in zip(x[i,:],w[j,:])]) + b[j]\nThe transpose is necessary because in the mathematical definition of the matrix product m @ n, the coefficient (i,j) is:\nsum([a * b for a,b in zip(m[i,:],n[:,j])])\nSo the very basic operation we need is a matrix multiplication, as it’s what is hidden in the core of a neural net.\n\n\nMatrix Multiplication from Scratch\nLet’s write a function that computes the matrix product of two tensors, before we allow ourselves to use the PyTorch version of it. We will only use the indexing in PyTorch tensors:\n\nimport torch\nfrom torch import tensor\n\nWe’ll need three nested for loops: one for the row indices, one for the column indices, and one for the inner sum. 
ac and ar stand for number of columns of a and number of rows of a, respectively (the same convention is followed for b), and we make sure calculating the matrix product is possible by checking that a has as many columns as b has rows:\n\ndef matmul(a,b):\n ar,ac = a.shape # n_rows * n_cols\n br,bc = b.shape\n assert ac==br\n c = torch.zeros(ar, bc)\n for i in range(ar):\n for j in range(bc):\n for k in range(ac): c[i,j] += a[i,k] * b[k,j]\n return c\n\nTo test this out, we’ll pretend (using random matrices) that we’re working with a small batch of 5 MNIST images, flattened into 28×28 vectors, with a linear model to turn them into 10 activations:\n\nm1 = torch.randn(5,28*28)\nm2 = torch.randn(784,10)\n\nLet’s time our function, using the Jupyter “magic” command %time:\n\n%time t1=matmul(m1, m2)\n\nCPU times: user 211 ms, sys: 504 µs, total: 212 ms\nWall time: 211 ms\n\n\nAnd see how that compares to PyTorch’s built-in @:\n\n%timeit -n 20 t2=m1@m2\n\nThe slowest run took 9.18 times longer than the fastest. This could mean that an intermediate result is being cached.\n8.15 µs ± 9.07 µs per loop (mean ± std. dev. of 7 runs, 20 loops each)\n\n\nAs we can see, in Python three nested loops is a very bad idea! Python is a slow language, and this isn’t going to be very efficient. We see here that PyTorch is around 100,000 times faster than Python—and that’s before we even start using the GPU!\nWhere does this difference come from? PyTorch didn’t write its matrix multiplication in Python, but rather in C++ to make it fast. In general, whenever we do computations on tensors we will need to vectorize them so that we can take advantage of the speed of PyTorch, usually by using two techniques: elementwise arithmetic and broadcasting.\n\n\nElementwise Arithmetic\nAll the basic operators (+, -, *, /, >, <, ==) can be applied elementwise. That means if we write a+b for two tensors a and b that have the same shape, we will get a tensor composed of the sums of the elements of a and b:\n\na = tensor([10., 6, -4])\nb = tensor([2., 8, 7])\na + b\n\ntensor([12., 14., 3.])\n\n\nThe Boolean operators will return an array of Booleans:\n\na < b\n\ntensor([False, True, True])\n\n\nIf we want to know if every element of a is less than the corresponding element in b, or if two tensors are equal, we need to combine those elementwise operations with torch.all:\n\n(a < b).all(), (a==b).all()\n\n(tensor(False), tensor(False))\n\n\nReduction operations like all(), sum() and mean() return tensors with only one element, called rank-0 tensors.
If you want to convert this to a plain Python Boolean or number, you need to call .item():\n\n(a + b).mean().item()\n\n9.666666984558105\n\n\nThe elementwise operations work on tensors of any rank, as long as they have the same shape:\n\nm = tensor([[1., 2, 3], [4,5,6], [7,8,9]])\nm*m\n\ntensor([[ 1., 4., 9.],\n [16., 25., 36.],\n [49., 64., 81.]])\n\n\nHowever you can’t perform elementwise operations on tensors that don’t have the same shape (unless they are broadcastable, as discussed in the next section):\nm*tensor([[1., 2, 3], [4,5,6]])\n---------------------------------------------------------------------------\nRuntimeError Traceback (most recent call last)\nCell In [14], line 1\n----> 1 m*n\n\nRuntimeError: The size of tensor a (3) must match the size of tensor b (2) at non-singleton dimension 0\nWith elementwise arithmetic, we can remove one of our three nested loops: we can multiply the tensors that correspond to the i-th row of a and the j-th column of b before summing all the elements, which will speed things up because the inner loop will now be executed by PyTorch at C speed.\nTo access one column or row, we can simply write a[i,:] or b[:,j]. The : means take everything in that dimension. We could restrict this and take only a slice of that particular dimension by passing a range, like 1:5, instead of just :. In that case, we would take the elements in columns or rows 1 to 4 (the second number is noninclusive).\nOne simplification is that we can always omit a trailing colon, so a[i,:] can be abbreviated to a[i]. With all of that in mind, we can write a new version of our matrix multiplication:\n\ndef matmul(a,b):\n ar,ac = a.shape\n br,bc = b.shape\n assert ac==br\n c = torch.zeros(ar, bc)\n for i in range(ar):\n for j in range(bc): c[i,j] = (a[i] * b[:,j]).sum()\n return c\n\n\n%timeit -n 20 t3 = matmul(m1,m2)\n\n257 µs ± 25.4 µs per loop (mean ± std. dev. of 7 runs, 20 loops each)\n\n\nWe’re already ~700 times faster, just by removing that inner for loop! And that’s just the beginning—with broadcasting we can remove another loop and get an even more important speed up.\n\n\nBroadcasting\nAs we discussed in Chapter 4, broadcasting is a term introduced by the NumPy library that describes how tensors of different ranks are treated during arithmetic operations. For instance, it’s obvious there is no way to add a 3×3 matrix with a 4×5 matrix, but what if we want to add one scalar (which can be represented as a 1×1 tensor) with a matrix? Or a vector of size 3 with a 3×4 matrix? In both cases, we can find a way to make sense of this operation.\nBroadcasting gives specific rules to codify when shapes are compatible when trying to do an elementwise operation, and how the tensor of the smaller shape is expanded to match the tensor of the bigger shape. It’s essential to master those rules if you want to be able to write code that executes quickly. In this section, we’ll expand our previous treatment of broadcasting to understand these rules.\n\nBroadcasting with a scalar\nBroadcasting with a scalar is the easiest type of broadcasting. When we have a tensor a and a scalar, we just imagine a tensor of the same shape as a filled with that scalar and perform the operation:\n\na = tensor([10., 6, -4])\na > 0\n\ntensor([ True, True, False])\n\n\nHow are we able to do this comparison? 0 is being broadcast to have the same dimensions as a. 
Note that this is done without creating a tensor full of zeros in memory (that would be very inefficient).\nThis is very useful if you want to normalize your dataset by subtracting the mean (a scalar) from the entire data set (a matrix) and dividing by the standard deviation (another scalar):\n\nm = tensor([[1., 2, 3], [4,5,6], [7,8,9]])\n(m - 5) / 2.73\n\ntensor([[-1.4652, -1.0989, -0.7326],\n [-0.3663, 0.0000, 0.3663],\n [ 0.7326, 1.0989, 1.4652]])\n\n\nWhat if we have different means for each row of the matrix? In that case, you will need to broadcast a vector to a matrix.\n\n\nBroadcasting a vector to a matrix\nWe can broadcast a vector to a matrix as follows:\n\nc = tensor([10.,20,30])\nm = tensor([[1., 2, 3], [4,5,6], [7,8,9]])\nm.shape,c.shape\n\n(torch.Size([3, 3]), torch.Size([3]))\n\n\n\nm + c\n\ntensor([[11., 22., 33.],\n [14., 25., 36.],\n [17., 28., 39.]])\n\n\nHere the elements of c are expanded to make three rows that match, making the operation possible. Again, PyTorch doesn’t actually create three copies of c in memory. This is done by the expand_as method behind the scenes:\n\nc.expand_as(m)\n\ntensor([[10., 20., 30.],\n [10., 20., 30.],\n [10., 20., 30.]])\n\n\nIf we look at the corresponding tensor, we can ask for its storage property (which shows the actual contents of the memory used for the tensor) to check there is no useless data stored:\n\nt = c.expand_as(m)\nt.storage()\n\n 10.0\n 20.0\n 30.0\n[torch.storage._TypedStorage(dtype=torch.float32, device=cpu) of size 3]\n\n\nEven though the tensor officially has nine elements, only three scalars are stored in memory. This is possible thanks to the clever trick of giving that dimension a stride of 0 (which means that when PyTorch looks for the next row by adding the stride, it doesn’t move):\n\nt.stride(), t.shape\n\n((0, 1), torch.Size([3, 3]))\n\n\nSince m is of size 3×3, there are two ways to do broadcasting. The fact it was done on the last dimension is a convention that comes from the rules of broadcasting and has nothing to do with the way we ordered our tensors. If instead we do this, we get the same result:\n\nc + m\n\ntensor([[11., 22., 33.],\n [14., 25., 36.],\n [17., 28., 39.]])\n\n\nIn fact, it’s only possible to broadcast a vector of size n with a matrix of size m by n:\n\nc = tensor([10.,20,30])\nm = tensor([[1., 2, 3], [4,5,6]])\nc+m\n\ntensor([[11., 22., 33.],\n [14., 25., 36.]])\n\n\nThis won’t work:\nc = tensor([10.,20])\nc+tensor([[1., 2, 3], [4,5,6]])\n---------------------------------------------------------------------------\nRuntimeError Traceback (most recent call last)\nCell In [26], line 3\n 1 c = tensor([10.,20])\n 2 m = tensor([[1., 2, 3], [4,5,6]])\n----> 3 c+m\n\nRuntimeError: The size of tensor a (2) must match the size of tensor b (3) at non-singleton dimension 1\nIf we want to broadcast in the other dimension, we have to change the shape of our vector to make it a 3×1 matrix.
This is done with the unsqueeze method in PyTorch:\n\nc = tensor([10.,20,30])\nm = tensor([[1., 2, 3], [4,5,6], [7,8,9]])\nc = c.unsqueeze(1)\nm.shape,c.shape\n\n(torch.Size([3, 3]), torch.Size([3, 1]))\n\n\nThis time, c is expanded on the column side:\n\nc+m\n\ntensor([[11., 12., 13.],\n [24., 25., 26.],\n [37., 38., 39.]])\n\n\nLike before, only three scalars are stored in memory:\n\nt = c.expand_as(m)\nt.storage()\n\n 10.0\n 20.0\n 30.0\n[torch.storage._TypedStorage(dtype=torch.float32, device=cpu) of size 3]\n\n\nAnd the expanded tensor has the right shape because the column dimension has a stride of 0:\n\nt.stride(), t.shape\n\n((1, 0), torch.Size([3, 3]))\n\n\nWith broadcasting, by default if we need to add dimensions, they are added at the beginning. When we were broadcasting before, Pytorch was doing c.unsqueeze(0) behind the scenes:\n\nc = tensor([10.,20,30])\nc.shape, c.unsqueeze(0).shape,c.unsqueeze(1).shape\n\n(torch.Size([3]), torch.Size([1, 3]), torch.Size([3, 1]))\n\n\nThe unsqueeze command can be replaced by None indexing:\n\nc.shape, c[None,:].shape,c[:,None].shape\n\n(torch.Size([3]), torch.Size([1, 3]), torch.Size([3, 1]))\n\n\nYou can always omit trailing colons, and ... means all preceding dimensions:\n\nc[None].shape,c[...,None].shape\n\n(torch.Size([1, 3]), torch.Size([3, 1]))\n\n\nWith this, we can remove another for loop in our matrix multiplication function. Now, instead of multiplying a[i] with b[:,j], we can multiply a[i] with the whole matrix b using broadcasting, then sum the results:\n\ndef matmul(a,b):\n ar,ac = a.shape\n br,bc = b.shape\n assert ac==br\n c = torch.zeros(ar, bc)\n for i in range(ar):\n# c[i,j] = (a[i,:] * b[:,j]).sum() # previous\n c[i] = (a[i ].unsqueeze(-1) * b).sum(dim=0)\n return c\n\n\n%timeit -n 20 t4 = matmul(m1,m2)\n\n43.9 µs ± 8.28 µs per loop (mean ± std. dev. of 7 runs, 20 loops each)\n\n\nWe’re now 3,700 times faster than our first implementation! Before we move on, let’s discuss the rules of broadcasting in a little more detail.\n\n\nBroadcasting rules\nWhen operating on two tensors, PyTorch compares their shapes elementwise. It starts with the trailing dimensions and works its way backward, adding 1 when it meets empty dimensions. Two dimensions are compatible when one of the following is true:\n\nThey are equal.\nOne of them is 1, in which case that dimension is broadcast to make it the same as the other.\n\nArrays do not need to have the same number of dimensions. For example, if you have a 256×256×3 array of RGB values, and you want to scale each color in the image by a different value, you can multiply the image by a one-dimensional array with three values. 
Lining up the sizes of the trailing axes of these arrays according to the broadcast rules shows that they are compatible:\nImage (3d tensor): 256 x 256 x 3\nScale (1d tensor): (1) (1) 3\nResult (3d tensor): 256 x 256 x 3\nHowever, a 2D tensor of size 256×256 isn’t compatible with our image:\nImage (3d tensor): 256 x 256 x 3\nScale (2d tensor): (1) 256 x 256\nError\nIn our earlier examples with a 3×3 matrix and a vector of size 3, broadcasting was done on the rows:\nMatrix (2d tensor): 3 x 3\nVector (1d tensor): (1) 3\nResult (2d tensor): 3 x 3\nAs an exercise, try to determine what dimensions to add (and where) when you need to normalize a batch of images of size 64 x 3 x 256 x 256 with vectors of three elements (one for the mean and one for the standard deviation).\nAnother useful way of simplifying tensor manipulations is the use of the Einstein summation convention.\n\n\n\nEinstein Summation\nBefore using the PyTorch operation @ or torch.matmul, there is one last way we can implement matrix multiplication: Einstein summation (einsum). This is a compact representation for combining products and sums in a general way. We write an equation like this:\nik,kj -> ij\nThe lefthand side represents the operands’ dimensions, separated by commas. Here we have two tensors that each have two dimensions (i,k and k,j). The righthand side represents the result dimensions, so here we have a tensor with two dimensions i,j.\nThe rules of Einstein summation notation are as follows:\n\nRepeated indices on the left side are implicitly summed over if they are not on the right side.\nEach index can appear at most twice on the left side.\nThe unrepeated indices on the left side must appear on the right side.\n\nSo in our example, since k is repeated, we sum over that index. In the end the formula represents the matrix obtained when we put in (i,j) the sum of all the coefficients (i,k) in the first tensor multiplied by the coefficients (k,j) in the second tensor… which is the matrix product! Here is how we can code this in PyTorch:\n\ndef matmul(a,b): return torch.einsum('ik,kj->ij', a, b)\n\nEinstein summation is a very practical way of expressing operations involving indexing and sum of products. Note that you can have just one member on the lefthand side. For instance, this:\ntorch.einsum('ij->ji', a)\nreturns the transpose of the matrix a. You can also have three or more members. This:\ntorch.einsum('bi,ij,bj->b', a, b, c)\nwill return a vector of size b where the k-th coordinate is the sum of a[k,i] b[i,j] c[k,j]. This notation is particularly convenient when you have more dimensions because of batches. For example, if you have two batches of matrices and want to compute the matrix product per batch, you could do this:\ntorch.einsum('bik,bkj->bij', a, b)\nLet’s go back to our new matmul implementation using einsum and look at its speed:\n\n%timeit -n 20 t5 = matmul(m1,m2)\n\n19 µs ± 13.2 µs per loop (mean ± std. dev. of 7 runs, 20 loops each)\n\n\nAs you can see, not only is it practical, but it’s very fast. einsum is often the fastest way to do custom operations in PyTorch, without diving into C++ and CUDA. (But it’s generally not as fast as carefully optimized CUDA code, as you see from the results in “Matrix Multiplication from Scratch”.)\nNow that we know how to implement a matrix multiplication from scratch, we are ready to build our neural net—specifically its forward and backward passes—using just matrix multiplications.",
+ "crumbs": [
+ "From the Foundations",
+ "17A Neural Net from the Foundations"
+ ]
+ },
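One way to convince yourself that the broadcasting and einsum versions above really compute a matrix product is to check them against PyTorch's built-in @ with torch.allclose. This little harness (the function names are illustrative) does just that on random matrices of the same shapes used in the timings above.

import torch

torch.manual_seed(0)
m1 = torch.randn(5, 28*28)
m2 = torch.randn(784, 10)

def matmul_broadcast(a, b):
    ar,ac = a.shape
    br,bc = b.shape
    assert ac==br
    c = torch.zeros(ar, bc)
    for i in range(ar):
        # a[i] is (ac,); unsqueeze(-1) makes it (ac,1) so it broadcasts against b of shape (ac,bc)
        c[i] = (a[i].unsqueeze(-1) * b).sum(dim=0)
    return c

def matmul_einsum(a, b): return torch.einsum('ik,kj->ij', a, b)

expected = m1 @ m2
print(torch.allclose(matmul_broadcast(m1, m2), expected, atol=1e-3))
print(torch.allclose(matmul_einsum(m1, m2), expected, atol=1e-3))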
+ {
+ "objectID": "foundations.html#the-forward-and-backward-passes",
+ "href": "foundations.html#the-forward-and-backward-passes",
+ "title": "17 A Neural Net from the Foundations",
+ "section": "17.2 The Forward and Backward Passes",
+ "text": "17.2 The Forward and Backward Passes\nAs we saw in Chapter 4, to train a model, we will need to compute all the gradients of a given loss with respect to its parameters, which is known as the backward pass. The forward pass is where we compute the output of the model on a given input, based on the matrix products. As we define our first neural net, we will also delve into the problem of properly initializing the weights, which is crucial for making training start properly.\n\nDefining and Initializing a Layer\nWe will take the example of a two-layer neural net first. As we’ve seen, one layer can be expressed as y = x @ w + b, with x our inputs, y our outputs, w the weights of the layer (which is of size number of inputs by number of neurons if we don’t transpose like before), and b is the bias vector:\n\ndef lin(x, w, b): return x @ w + b\n\nWe can stack the second layer on top of the first, but since mathematically the composition of two linear operations is another linear operation, this only makes sense if we put something nonlinear in the middle, called an activation function. As mentioned at the beginning of the chapter, in deep learning applications the activation function most commonly used is a ReLU, which returns the maximum of x and 0.\nWe won’t actually train our model in this chapter, so we’ll use random tensors for our inputs and targets. Let’s say our inputs are 200 vectors of size 100, which we group into one batch, and our targets are 200 random floats:\n\nx = torch.randn(200, 100)\ny = torch.randn(200)\n\nFor our two-layer model we will need two weight matrices and two bias vectors. Let’s say we have a hidden size of 50 and the output size is 1 (for one of our inputs, the corresponding output is one float in this toy example). We initialize the weights randomly and the bias at zero:\n\nw1 = torch.randn(100,50)\nb1 = torch.zeros(50)\nw2 = torch.randn(50,1)\nb2 = torch.zeros(1)\n\nThen the result of our first layer is simply:\n\nl1 = lin(x, w1, b1)\nl1.shape\n\ntorch.Size([200, 50])\n\n\nNote that this formula works with our batch of inputs, and returns a batch of hidden state: l1 is a matrix of size 200 (our batch size) by 50 (our hidden size).\nThere is a problem with the way our model was initialized, however. To understand it, we need to look at the mean and standard deviation (std) of l1:\n\nl1.mean(), l1.std()\n\n(tensor(0.1381), tensor(9.8549))\n\n\nThe mean is close to zero, which is understandable since both our input and weight matrices have means close to zero. But the standard deviation, which represents how far away our activations go from the mean, went from 1 to 10. This is a really big problem because that’s with just one layer. Modern neural nets can have hundred of layers, so if each of them multiplies the scale of our activations by 10, by the end of the last layer we won’t have numbers representable by a computer.\nIndeed, if we make just 50 multiplications between x and random matrices of size 100×100, we’ll have:\n\nx = torch.randn(200, 100)\nfor i in range(50): x = x @ torch.randn(100,100)\nx[0:5,0:5]\n\ntensor([[nan, nan, nan, nan, nan],\n [nan, nan, nan, nan, nan],\n [nan, nan, nan, nan, nan],\n [nan, nan, nan, nan, nan],\n [nan, nan, nan, nan, nan]])\n\n\nThe result is nans everywhere. So maybe the scale of our matrix was too big, and we need to have smaller weights? 
But if we use too small weights, we will have the opposite problem—the scale of our activations will go from 1 to 0.1, and after 50 layers we’ll be left with zeros everywhere:\n\nx = torch.randn(200, 100)\nfor i in range(50): x = x @ (torch.randn(100,100) * 0.01)\nx[0:5,0:5]\n\ntensor([[0., 0., 0., 0., 0.],\n [0., 0., 0., 0., 0.],\n [0., 0., 0., 0., 0.],\n [0., 0., 0., 0., 0.],\n [0., 0., 0., 0., 0.]])\n\n\nSo we have to scale our weight matrices exactly right so that the standard deviation of our activations stays at 1. We can compute the exact value to use mathematically, as illustrated by Xavier Glorot and Yoshua Bengio in “Understanding the Difficulty of Training Deep Feedforward Neural Networks”. The right scale for a given layer is \\(1/\\sqrt{n_{in}}\\), where \\(n_{in}\\) represents the number of inputs.\nIn our case, if we have 100 inputs, we should scale our weight matrices by 0.1:\n\nx = torch.randn(200, 100)\nfor i in range(50): x = x @ (torch.randn(100,100) * 0.1)\nx[0:5,0:5]\n\ntensor([[-0.1769, -0.0389, 0.5414, 0.5769, 0.1272],\n [ 0.2133, -1.1071, 0.8511, -1.2352, -0.6003],\n [ 0.4065, -2.1840, 2.3958, -2.3253, -0.9688],\n [ 0.1720, 1.2012, -2.4649, -0.7765, 0.7338],\n [-0.0977, 1.4870, -2.0249, 1.0367, 0.8047]])\n\n\nFinally some numbers that are neither zeros nor nans! Notice how stable the scale of our activations is, even after those 50 fake layers:\n\nx.std()\n\ntensor(1.5285)\n\n\nIf you play a little bit with the value for scale you’ll notice that even a slight variation from 0.1 will get you either to very small or very large numbers, so initializing the weights properly is extremely important.\nLet’s go back to our neural net. Since we messed a bit with our inputs, we need to redefine them:\n\nx = torch.randn(200, 100)\ny = torch.randn(200)\n\nAnd for our weights, we’ll use the right scale, which is known as Xavier initialization (or Glorot initialization):\n\nfrom math import sqrt\nw1 = torch.randn(100,50) / sqrt(100)\nb1 = torch.zeros(50)\nw2 = torch.randn(50,1) / sqrt(50)\nb2 = torch.zeros(1)\n\nNow if we compute the result of the first layer, we can check that the mean and standard deviation are under control:\n\nl1 = lin(x, w1, b1)\nl1.mean(),l1.std()\n\n(tensor(-0.0067), tensor(0.9904))\n\n\nVery good. Now we need to go through a ReLU, so let’s define one. A ReLU removes the negatives and replaces them with zeros, which is another way of saying it clamps our tensor at zero:\n\ndef relu(x): return x.clamp_min(0.)\n\nWe pass our activations through this:\n\nl2 = relu(l1)\nl2.mean(),l2.std()\n\n(tensor(0.3906), tensor(0.5769))\n\n\nAnd we’re back to square one: the mean of our activations has gone to 0.4 (which is understandable since we removed the negatives) and the std went down to 0.58. So like before, after a few layers we will probably wind up with zeros:\n\nx = torch.randn(200, 100)\nfor i in range(50): x = relu(x @ (torch.randn(100,100) * 0.1))\nx[0:5,0:5]\n\ntensor([[1.7712e-08, 0.0000e+00, 1.7827e-09, 0.0000e+00, 4.7123e-09],\n [1.6280e-08, 0.0000e+00, 3.4217e-09, 0.0000e+00, 4.2597e-09],\n [1.7071e-08, 0.0000e+00, 5.9012e-09, 0.0000e+00, 5.1323e-09],\n [2.7019e-08, 0.0000e+00, 2.2739e-09, 0.0000e+00, 6.4754e-09],\n [1.7600e-08, 0.0000e+00, 2.7536e-09, 0.0000e+00, 4.0279e-09]])\n\n\nThis means our initialization wasn’t right. Why? At the time Glorot and Bengio wrote their article, the popular activation in a neural net was the hyperbolic tangent (tanh, which is the one they used), and that initialization doesn’t account for our ReLU. 
Fortunately, someone else has done the math for us and computed the right scale for us to use. In “Delving Deep into Rectifiers: Surpassing Human-Level Performance” (which we’ve seen before—it’s by the same team that went on to introduce the ResNet), Kaiming He et al. show that we should use the following scale instead: \\(\\sqrt{2 / n_{in}}\\), where \\(n_{in}\\) is the number of inputs of our model. Let’s see what this gives us:\n\nx = torch.randn(200, 100)\nfor i in range(50): x = relu(x @ (torch.randn(100,100) * sqrt(2/100)))\nx[0:5,0:5]\n\ntensor([[0.2880, 0.2231, 0.0000, 1.4916, 2.1533],\n [0.2511, 0.1750, 0.0000, 1.3327, 1.8034],\n [0.2895, 0.1873, 0.0000, 0.8633, 1.2818],\n [0.2071, 0.1811, 0.0000, 1.4611, 2.0061],\n [0.4338, 0.3052, 0.0101, 1.4161, 2.0849]])\n\n\nThat’s better: our numbers aren’t all zeroed this time. So let’s go back to the definition of our neural net and use this initialization (which is named Kaiming initialization or He initialization):\n\nx = torch.randn(200, 100)\ny = torch.randn(200)\n\n\nw1 = torch.randn(100,50) * sqrt(2 / 100)\nb1 = torch.zeros(50)\nw2 = torch.randn(50,1) * sqrt(2 / 50)\nb2 = torch.zeros(1)\n\nLet’s look at the scale of our activations after going through the first linear layer and ReLU:\n\nl1 = lin(x, w1, b1)\nl2 = relu(l1)\nl2.mean(), l2.std()\n\n(tensor(0.5656), tensor(0.8275))\n\n\nMuch better! Now that our weights are properly initialized, we can define our whole model:\n\ndef model(x):\n l1 = lin(x, w1, b1)\n l2 = relu(l1)\n l3 = lin(l2, w2, b2)\n return l3\n\nThis is the forward pass. Now all that’s left to do is to compare our output to the labels we have (random numbers, in this example) with a loss function. In this case, we will use the mean squared error. (It’s a toy problem, and this is the easiest loss function to use for what is next, computing the gradients.)\nThe only subtlety is that our outputs and targets don’t have exactly the same shape—after going through the model, we get an output like this:\n\nout = model(x)\nout.shape\n\ntorch.Size([200, 1])\n\n\nTo get rid of this trailing 1 dimension, we use the squeeze function:\n\ndef mse(output, targ): return (output.squeeze(-1) - targ).pow(2).mean()\n\nAnd now we are ready to compute our loss:\n\nloss = mse(out, y)\n\nThat’s all for the forward pass—let’s now look at the gradients.\n\n\nGradients and the Backward Pass\nWe’ve seen that PyTorch computes all the gradients we need with a magic call to loss.backward, but let’s explore what’s happening behind the scenes.\nNow comes the part where we need to compute the gradients of the loss with respect to all the weights of our model, so all the floats in w1, b1, w2, and b2. For this, we will need a bit of math—specifically the chain rule. This is the rule of calculus that guides how we can compute the derivative of a composed function:\n\\[(g \\circ f)'(x) = g'(f(x)) f'(x)\\]\n\nj: I find this notation very hard to wrap my head around, so instead I like to think of it as: if y = g(u) and u=f(x); then dy/dx = dy/du * du/dx. The two notations mean the same thing, so use whatever works for you.\n\nOur loss is a big composition of different functions: mean squared error (which is in turn the composition of a mean and a power of two), the second linear layer, a ReLU and the first linear layer.
For instance, suppose we want the gradients of the loss with respect to b2, and our loss is defined by:\nloss = mse(out,y) = mse(lin(l2, w2, b2), y)\nThe chain rule tells us that we have: \\[\\frac{\\text{d} loss}{\\text{d} b_{2}} = \\frac{\\text{d} loss}{\\text{d} out} \\times \\frac{\\text{d} out}{\\text{d} b_{2}} = \\frac{\\text{d}}{\\text{d} out} mse(out, y) \\times \\frac{\\text{d}}{\\text{d} b_{2}} lin(l_{2}, w_{2}, b_{2})\\]\nTo compute the gradients of the loss with respect to \\(b_{2}\\), we first need the gradients of the loss with respect to our output \\(out\\). It’s the same if we want the gradients of the loss with respect to \\(w_{2}\\). Then, to get the gradients of the loss with respect to \\(b_{1}\\) or \\(w_{1}\\), we will need the gradients of the loss with respect to \\(l_{1}\\), which in turn requires the gradients of the loss with respect to \\(l_{2}\\), which will need the gradients of the loss with respect to \\(out\\).\nSo to compute all the gradients we need for the update, we need to begin from the output of the model and work our way backward, one layer after the other—which is why this step is known as backpropagation. We can automate it by having each function we implemented (relu, mse, lin) provide its backward step: that is, how to derive the gradients of the loss with respect to the input(s) from the gradients of the loss with respect to the output.\nHere we populate those gradients in an attribute of each tensor, a bit like PyTorch does with .grad.\nThe first gradients we need are the gradients of the loss with respect to the output of our model (which is the input of the loss function). We undo the squeeze we did in mse, then we use the formula that gives us the derivative of \\(x^{2}\\): \\(2x\\). The derivative of the mean is just \\(1/n\\) where \\(n\\) is the number of elements in our input:\n\ndef mse_grad(inp, targ): \n # grad of loss with respect to output of previous layer\n inp.g = 2. * (inp.squeeze() - targ).unsqueeze(-1) / inp.shape[0]\n\nFor the gradients of the ReLU and our linear layer, we use the gradients of the loss with respect to the output (in out.g) and apply the chain rule to compute the gradients of the loss with respect to the input (in inp.g). The chain rule tells us that inp.g = relu'(inp) * out.g. The derivative of relu is either 0 (when inputs are negative) or 1 (when inputs are positive), so this gives us:\n\ndef relu_grad(inp, out):\n # grad of relu with respect to input activations\n inp.g = (inp>0).float() * out.g\n\nThe scheme is the same to compute the gradients of the loss with respect to the inputs, weights, and bias in the linear layer:\n\ndef lin_grad(inp, out, w, b):\n # grads of matmul with respect to input, weights, and bias\n inp.g = out.g @ w.t()\n w.g = inp.t() @ out.g\n b.g = out.g.sum(0)\n\nWe won’t linger on the mathematical formulas that define them since they’re not important for our purposes, but do check out Khan Academy’s excellent calculus lessons if you’re interested in this topic.\n\n\nSidebar: SymPy\nSymPy is a library for symbolic computation that is extremely useful when working with calculus. Per the documentation:\n\nSymbolic computation deals with the computation of mathematical objects symbolically. 
This means that the mathematical objects are represented exactly, not approximately, and mathematical expressions with unevaluated variables are left in symbolic form.\n\nTo do symbolic computation, we first define a symbol, and then do a computation, like so:\n\nfrom sympy import symbols,diff\nsx,sy = symbols('sx sy')\ndiff(sx**2, sx)\n\n\\(\\displaystyle 2 sx\\)\n\n\nHere, SymPy has taken the derivative of x**2 for us! It can take the derivative of complicated compound expressions, simplify and factor equations, and much more. There’s really not much reason for anyone to do calculus manually nowadays—for calculating gradients, PyTorch does it for us, and for showing the equations, SymPy does it for us!\n\n\nEnd sidebar\nOnce we have defined those functions, we can use them to write the backward pass. Since each gradient is automatically populated in the right tensor, we don’t need to store the results of those _grad functions anywhere—we just need to execute them in the reverse order of the forward pass, to make sure that in each function out.g exists:\n\ndef forward_and_backward(inp, targ):\n # forward pass:\n l1 = inp @ w1 + b1\n l2 = relu(l1)\n out = l2 @ w2 + b2\n # we don't actually need the loss in backward!\n loss = mse(out, targ)\n \n # backward pass:\n mse_grad(out, targ)\n lin_grad(l2, out, w2, b2)\n relu_grad(l1, l2)\n lin_grad(inp, l1, w1, b1)\n\nAnd now we can access the gradients of our model parameters in w1.g, b1.g, w2.g, and b2.g (we’ll check shortly that they agree with what PyTorch’s autograd computes).\nWe have successfully defined our model—now let’s make it a bit more like a PyTorch module.\n\n\nRefactoring the Model\nEach of the three functions we used has two associated pieces: a forward pass and a backward pass. Instead of writing them separately, we can create a class to wrap them together. That class can also store the inputs and outputs for the backward pass. This way, we will just have to call backward:\n\nclass Relu():\n def __call__(self, inp):\n self.inp = inp\n self.out = inp.clamp_min(0.)\n return self.out\n \n def backward(self): self.inp.g = (self.inp>0).float() * self.out.g\n\n__call__ is a magic name in Python that will make our class callable. This is what will be executed when we type y = Relu()(x). We can do the same for our linear layer and the MSE loss:\n\nclass Lin():\n def __init__(self, w, b): self.w,self.b = w,b\n \n def __call__(self, inp):\n self.inp = inp\n self.out = inp@self.w + self.b\n return self.out\n \n def backward(self):\n self.inp.g = self.out.g @ self.w.t()\n self.w.g = self.inp.t() @ self.out.g\n self.b.g = self.out.g.sum(0)\n\n\nclass Mse():\n def __call__(self, inp, targ):\n self.inp = inp\n self.targ = targ\n self.out = (inp.squeeze() - targ).pow(2).mean()\n return self.out\n \n def backward(self):\n x = (self.inp.squeeze()-self.targ).unsqueeze(-1)\n self.inp.g = 2.*x/self.targ.shape[0]\n\nThen we can put everything in a model that we initialize with our tensors w1, b1, w2, b2:\n\nclass Model():\n def __init__(self, w1, b1, w2, b2):\n self.layers = [Lin(w1,b1), Relu(), Lin(w2,b2)]\n self.loss = Mse()\n \n def __call__(self, x, targ):\n for l in self.layers: x = l(x)\n return self.loss(x, targ)\n \n def backward(self):\n self.loss.backward()\n for l in reversed(self.layers): l.backward()\n\nWhat is really nice about this refactoring and registering things as layers of our model is that the forward and backward passes are now really easy to write. 
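As promised, here is a quick sanity check (just a sketch, assuming the x, y, w1, b1, w2, b2 tensors defined above and torch already imported) that the gradients computed by our forward_and_backward function agree with the ones PyTorch’s autograd produces:\n\n# clone the parameters as copies that autograd will track\nw1a = w1.clone().requires_grad_(True)\nb1a = b1.clone().requires_grad_(True)\nw2a = w2.clone().requires_grad_(True)\nb2a = b2.clone().requires_grad_(True)\n\n# same forward pass as before, but on the tracked copies\nout = relu(x @ w1a + b1a) @ w2a + b2a\nmse(out, y).backward()\n\n# populate w1.g, b1.g, w2.g, and b2.g with our manual implementation\nforward_and_backward(x, y)\n\nfor manual, auto in zip([w1, b1, w2, b2], [w1a, b1a, w2a, b2a]):\n assert torch.allclose(manual.g, auto.grad, atol=1e-4)\n\nIf those asserts pass, the refactored classes should give the same gradients, since they implement exactly the same formulas. 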
If we want to instantiate our model, we just need to write:\n\nmodel = Model(w1, b1, w2, b2)\n\nThe forward pass can then be executed with:\n\nloss = model(x, y)\n\nAnd the backward pass with:\n\nmodel.backward()\n\n\n\nGoing to PyTorch\nThe Lin, Mse, and Relu classes we wrote have a lot in common, so we could make them all inherit from the same base class:\n\nclass LayerFunction():\n def __call__(self, *args):\n self.args = args\n self.out = self.forward(*args)\n return self.out\n \n def forward(self): raise Exception('not implemented')\n def bwd(self): raise Exception('not implemented')\n def backward(self): self.bwd(self.out, *self.args)\n\nThen we just need to implement forward and bwd in each of our subclasses:\n\nclass Relu(LayerFunction):\n def forward(self, inp): return inp.clamp_min(0.)\n def bwd(self, out, inp): inp.g = (inp>0).float() * out.g\n\n\nclass Lin(LayerFunction):\n def __init__(self, w, b): self.w,self.b = w,b\n \n def forward(self, inp): return inp@self.w + self.b\n \n def bwd(self, out, inp):\n inp.g = out.g @ self.w.t()\n self.w.g = inp.t() @ out.g\n self.b.g = out.g.sum(0)\n\n\nclass Mse(LayerFunction):\n def forward(self, inp, targ): return (inp.squeeze() - targ).pow(2).mean()\n def bwd(self, out, inp, targ): \n inp.g = 2*(inp.squeeze()-targ).unsqueeze(-1) / targ.shape[0]\n\nThe rest of our model can be the same as before. This is getting closer and closer to what PyTorch does. Each basic function we need to differentiate is written as a torch.autograd.Function object that has a forward and a backward method. PyTorch will then keep track of any computation we do to be able to properly run the backward pass, unless we set the requires_grad attribute of our tensors to False.\nWriting one of these is (almost) as easy as writing our original classes. The difference is that we choose what to save and what to put in a context variable (so that we make sure we don’t save anything we don’t need), and we return the gradients in the backward pass. It’s very rare to have to write your own Function, but if you ever need something exotic or want to mess with the gradients of a regular function, here is how to write one:\n\nfrom torch.autograd import Function\n\nclass MyRelu(Function):\n @staticmethod\n def forward(ctx, i):\n result = i.clamp_min(0.)\n ctx.save_for_backward(i)\n return result\n \n @staticmethod\n def backward(ctx, grad_output):\n i, = ctx.saved_tensors\n return grad_output * (i>0).float()\n\nThe structure used to build a more complex model that takes advantage of those Functions is a torch.nn.Module. This is the base structure for all models, and all the neural nets you have seen up until now inherited from that class. 
It mostly helps to register all the trainable parameters, which as we’ve seen can be used in the training loop.\nTo implement an nn.Module you just need to:\n\nMake sure the superclass __init__ is called first when you initialize it.\nDefine any parameters of the model as attributes with nn.Parameter.\nDefine a forward function that returns the output of your model.\n\nAs an example, here is the linear layer from scratch:\n\nimport torch.nn as nn\n\nclass LinearLayer(nn.Module):\n def __init__(self, n_in, n_out):\n super().__init__()\n self.weight = nn.Parameter(torch.randn(n_out, n_in) * sqrt(2/n_in))\n self.bias = nn.Parameter(torch.zeros(n_out))\n \n def forward(self, x): return x @ self.weight.t() + self.bias\n\nAs you see, this class automatically keeps track of what parameters have been defined:\n\nlin = LinearLayer(10,2)\np1,p2 = lin.parameters()\np1.shape,p2.shape\n\n(torch.Size([2, 10]), torch.Size([2]))\n\n\nIt is thanks to this feature of nn.Module that we can just say opt.step() and have an optimizer loop through the parameters and update each one.\nNote that in PyTorch, the weights are stored as an n_out x n_in matrix, which is why we have the transpose in the forward pass.\nBy using the linear layer from PyTorch (which uses the Kaiming initialization as well), the model we have been building up during this chapter can be written like this:\n\nclass Model(nn.Module):\n def __init__(self, n_in, nh, n_out):\n super().__init__()\n self.layers = nn.Sequential(\n nn.Linear(n_in,nh), nn.ReLU(), nn.Linear(nh,n_out))\n self.loss = mse\n \n def forward(self, x, targ): return self.loss(self.layers(x).squeeze(), targ)\n\nIn the last chapter, we will start from such a model and see how to build a training loop from scratch and refactor it to what we’ve been using in previous chapters.",
+ "crumbs": [
+ "From the Foundations",
+ "17A Neural Net from the Foundations"
+ ]
+ },
+ {
+ "objectID": "foundations.html#conclusion",
+ "href": "foundations.html#conclusion",
+ "title": "17 A Neural Net from the Foundations",
+ "section": "17.3 Conclusion",
+ "text": "17.3 Conclusion\nIn this chapter we explored the foundations of deep learning, beginning with matrix multiplication and moving on to implementing the forward and backward passes of a neural net from scratch. We then refactored our code to show how PyTorch works beneath the hood.\nHere are a few things to remember:\n\nA neural net is basically a bunch of matrix multiplications with nonlinearities in between.\nPython is slow, so to write fast code we have to vectorize it and take advantage of techniques such as elementwise arithmetic and broadcasting.\nTwo tensors are broadcastable if the dimensions starting from the end and going backward match (if they are the same, or one of them is 1). To make tensors broadcastable, we may need to add dimensions of size 1 with unsqueeze or a None index.\nProperly initializing a neural net is crucial to get training started. Kaiming initialization should be used when we have ReLU nonlinearities.\nThe backward pass is the chain rule applied multiple times, computing the gradients from the output of our model and going back, one layer at a time.\nWhen subclassing nn.Module (if not using fastai’s Module) we have to call the superclass __init__ method in our __init__ method and we have to define a forward function that takes an input and returns the desired result.",
+ "crumbs": [
+ "From the Foundations",
+ "17A Neural Net from the Foundations"
+ ]
+ },
+ {
+ "objectID": "foundations.html#questionnaire",
+ "href": "foundations.html#questionnaire",
+ "title": "17 A Neural Net from the Foundations",
+ "section": "17.4 Questionnaire",
+ "text": "17.4 Questionnaire\n\nWrite the Python code to implement a single neuron.\nWrite the Python code to implement ReLU.\nWrite the Python code for a dense layer in terms of matrix multiplication.\nWrite the Python code for a dense layer in plain Python (that is, with list comprehensions and functionality built into Python).\nWhat is the “hidden size” of a layer?\nWhat does the t method do in PyTorch?\nWhy is matrix multiplication written in plain Python very slow?\nIn matmul, why is ac==br?\nIn Jupyter Notebook, how do you measure the time taken for a single cell to execute?\nWhat is “elementwise arithmetic”?\nWrite the PyTorch code to test whether every element of a is greater than the corresponding element of b.\nWhat is a rank-0 tensor? How do you convert it to a plain Python data type?\nWhat does this return, and why? tensor([1,2]) + tensor([1])\nWhat does this return, and why? tensor([1,2]) + tensor([1,2,3])\nHow does elementwise arithmetic help us speed up matmul?\nWhat are the broadcasting rules?\nWhat is expand_as? Show an example of how it can be used to match the results of broadcasting.\nHow does unsqueeze help us to solve certain broadcasting problems?\nHow can we use indexing to do the same operation as unsqueeze?\nHow do we show the actual contents of the memory used for a tensor?\nWhen adding a vector of size 3 to a matrix of size 3×3, are the elements of the vector added to each row or each column of the matrix? (Be sure to check your answer by running this code in a notebook.)\nDo broadcasting and expand_as result in increased memory use? Why or why not?\nImplement matmul using Einstein summation.\nWhat does a repeated index letter represent on the left-hand side of einsum?\nWhat are the three rules of Einstein summation notation? Why?\nWhat are the forward pass and backward pass of a neural network?\nWhy do we need to store some of the activations calculated for intermediate layers in the forward pass?\nWhat is the downside of having activations with a standard deviation too far away from 1?\nHow can weight initialization help avoid this problem?\nWhat is the formula to initialize weights such that we get a standard deviation of 1 for a plain linear layer, and for a linear layer followed by ReLU?\nWhy do we sometimes have to use the squeeze method in loss functions?\nWhat does the argument to the squeeze method do? Why might it be important to include this argument, even though PyTorch does not require it?\nWhat is the “chain rule”? Show the equation in either of the two forms presented in this chapter.\nShow how to calculate the gradients of mse(lin(l2, w2, b2), y) using the chain rule.\nWhat is the gradient of ReLU? Show it in math or code. (You shouldn’t need to commit this to memory—try to figure it using your knowledge of the shape of the function.)\nIn what order do we need to call the *_grad functions in the backward pass? Why?\nWhat is __call__?\nWhat methods must we implement when writing a torch.autograd.Function?\nWrite nn.Linear from scratch, and test it works.\nWhat is the difference between nn.Module and fastai’s Module?\n\n\nFurther Research\n\nImplement ReLU as a torch.autograd.Function and train a model with it.\nIf you are mathematically inclined, find out what the gradients of a linear layer are in mathematical notation. Map that to the implementation we saw in this chapter.\nLearn about the unfold method in PyTorch, and use it along with matrix multiplication to implement your own 2D convolution function. 
Then train a CNN that uses it.\nImplement everything in this chapter using NumPy instead of PyTorch.",
+ "crumbs": [
+ "From the Foundations",
+ "17A Neural Net from the Foundations"
+ ]
+ },
+ {
+ "objectID": "index.html",
+ "href": "index.html",
+ "title": "Practical Deep Learning for Coders",
+ "section": "",
+ "text": "Preface\nThis is a preview version of Deep Learning for Coders with Fastai and PyTorch: AI Applications Without a PhD. Note that chapters shown in italics in the sidebar are only available as a preview of the first few paragraphs. The full content of all chapters is available for free as Jupyter Notebooks here with only basic formatting. A nicely typeset version can be purchased from Amazon.\n\nHere’s a list of all the full chapters available here:\n\n1 Your Deep Learning Journey: Your Deep Learning Journey\n4 Under the Hood: Training a Digit Classifier: Under the Hood: Training a Digit Classifier\n13 Convolutional Neural Networks: Convolutional Neural Networks\n14 ResNets: ResNets\n16 The Training Process: The Training Process\n17 A Neural Net from the Foundations: A Neural Net from the Foundations\n\nThis book is designed to go with our free deep learning course, available at course.fast.ai.\nOnce you’ve finished the first eight chapters of the book, or completed course.fast.ai, you’ll be ready for our new course, From Deep Learning Foundations to Stable Diffusion, which starts on Oct 11th 2022 (Australian time; Oct 10th US time). You can sign up here. If you’re an open source author you may qualify for a scholarship – details here.",
+ "crumbs": [
+ "Preface"
+ ]
+ },
+ {
+ "objectID": "book2.html",
+ "href": "book2.html",
+ "title": "2 From Model to Production",
+ "section": "",
+ "text": "2.1 The Practice of Deep Learning\nWe’ve seen that deep learning can solve a lot of challenging problems quickly and with little code. As a beginner, there’s a sweet spot of problems that are similar enough to our example problems that you can very quickly get extremely useful results. However, deep learning isn’t magic! The same 6 lines of code won’t work for every problem anyone can think of today. Underestimating the constraints and overestimating the capabilities of deep learning may lead to frustratingly poor results, at least until you gain some experience and can solve the problems that arise. Conversely, overestimating the constraints and underestimating the capabilities of deep learning may mean you do not attempt a solvable problem because you talk yourself out of it.\nWe often talk to people who underestimate both the constraints and the capabilities of deep learning. Both of these can be problems: underestimating the capabilities means that you might not even try things that could be very beneficial, and underestimating the constraints might mean that you fail to consider and react to important issues.\nThe best thing to do is to keep an open mind. If you remain open to the possibility that deep learning might solve part of your problem with less data or complexity than you expect, then it is possible to design a process where you can find the specific capabilities and constraints related to your particular problem as you work through the process. This doesn’t mean making any risky bets — we will show you how you can gradually roll out models so that they don’t create significant risks, and can even backtest them prior to putting them in production.\nThis is just a preview of this chapter. The rest of this chapter is not available here, but you can read the source notebook, which has the same content (but with less nice formatting).",
+ "crumbs": [
+ "Getting Started",
+ "2*From Model to Production*"
+ ]
+ },
+ {
+ "objectID": "book3.html",
+ "href": "book3.html",
+ "title": "3 AI and Data Ethics",
+ "section": "",
+ "text": "Sidebar: Acknowledgement: Dr. Rachel Thomas\nThis chapter was co-authored by Dr. Rachel Thomas, the cofounder of fast.ai, and founding director of the Center for Applied Data Ethics at the University of San Francisco. It largely follows a subset of the syllabus she developed for the Introduction to Data Ethics course.",
+ "crumbs": [
+ "Getting Started",
+ "3*AI and Data Ethics*"
+ ]
+ },
+ {
+ "objectID": "book3.html#key-examples-for-data-ethics",
+ "href": "book3.html#key-examples-for-data-ethics",
+ "title": "3 AI and Data Ethics",
+ "section": "3.1 Key Examples for Data Ethics",
+ "text": "3.1 Key Examples for Data Ethics\nWe are going to start with three specific examples that illustrate three common ethical issues in tech:\n\nRecourse processes—Arkansas’s buggy healthcare algorithms left patients stranded.\nFeedback loops—YouTube’s recommendation system helped unleash a conspiracy theory boom.\nBias—When a traditionally African-American name is searched for on Google, it displays ads for criminal background checks.\n\nIn fact, for every concept that we introduce in this chapter, we are going to provide at least one specific example. For each one, think about what you could have done in this situation, and what kinds of obstructions there might have been to you getting that done. How would you deal with them? What would you look out for?\n\nBugs and Recourse: Buggy Algorithm Used for Healthcare Benefits\nThe Verge investigated software used in over half of the US states to determine how much healthcare people receive, and documented their findings in the article “What Happens When an Algorithm Cuts Your Healthcare”. After implementation of the algorithm in Arkansas, hundreds of people (many with severe disabilities) had their healthcare drastically cut. For instance, Tammy Dobbs, a woman with cerebral palsy who needs an aid to help her to get out of bed, to go to the bathroom, to get food, and more, had her hours of help suddenly reduced by 20 hours a week. She couldn’t get any explanation for why her healthcare was cut. Eventually, a court case revealed that there were mistakes in the software implementation of the algorithm, negatively impacting people with diabetes or cerebral palsy. However, Dobbs and many other people reliant on these healthcare benefits live in fear that their benefits could again be cut suddenly and inexplicably.\n\n\nFeedback Loops: YouTube’s Recommendation System\nFeedback loops can occur when your model is controlling the next round of data you get. The data that is returned quickly becomes flawed by the software itself.\nFor instance, YouTube has 1.9 billion users, who watch over 1 billion hours of YouTube videos a day. Its recommendation algorithm (built by Google), which was designed to optimize watch time, is responsible for around 70% of the content that is watched. But there was a problem: it led to out-of-control feedback loops, leading the New York Times to run the headline “YouTube Unleashed a Conspiracy Theory Boom. Can It Be Contained?”. Ostensibly recommendation systems are predicting what content people will like, but they also have a lot of power in determining what content people even see.\n\n\nBias: Professor Latanya Sweeney “Arrested”\nDr. Latanya Sweeney is a professor at Harvard and director of the university’s data privacy lab. In the paper “Discrimination in Online Ad Delivery” (see <>) she describes her discovery that Googling her name resulted in advertisements saying “Latanya Sweeney, arrested?” even though she is the only known Latanya Sweeney and has never been arrested. However when she Googled other names, such as “Kirsten Lindquist,” she got more neutral ads, even though Kirsten Lindquist has been arrested three times.\n\nBeing a computer scientist, she studied this systematically, and looked at over 2000 names. She found a clear pattern where historically Black names received advertisements suggesting that the person had a criminal record, whereas, white names had more neutral advertisements.\nThis is an example of bias. 
It can make a big difference to people’s lives—for instance, if a job applicant is Googled it may appear that they have a criminal record when they do not.\n\n\nWhy Does This Matter?\nOne very natural reaction to considering these issues is: “So what? What’s that got to do with me? I’m a data scientist, not a politician. I’m not one of the senior executives at my company who make the decisions about what we do. I’m just trying to build the most predictive model I can.”\nThese are very reasonable questions. But we’re going to try to convince you that the answer is that everybody who is training models absolutely needs to consider how their models will be used, and consider how to best ensure that they are used as positively as possible. There are things you can do. And if you don’t do them, then things can go pretty badly.\nOne particularly hideous example of what happens when technologists focus on technology at all costs is the story of IBM and Nazi Germany. In 2001, a Swiss judge ruled that it was not unreasonable “to deduce that IBM’s technical assistance facilitated the tasks of the Nazis in the commission of their crimes against humanity, acts also involving accountancy and classification by IBM machines and utilized in the concentration camps themselves.”\nIBM, you see, supplied the Nazis with data tabulation products necessary to track the extermination of Jews and other groups on a massive scale. This was driven from the top of the company, with marketing to Hitler and his leadership team. Company President Thomas Watson personally approved the 1939 release of special IBM alphabetizing machines to help organize the deportation of Polish Jews. Pictured in <> is Adolf Hitler (far left) meeting with IBM CEO Tom Watson Sr. (second from left), shortly before Hitler awarded Watson a special “Service to the Reich” medal in 1937.\n\nBut this was not an isolated incident—the organization’s involvement was extensive. IBM and its subsidiaries provided regular training and maintenance onsite at the concentration camps: printing off cards, configuring machines, and repairing them as they broke frequently. IBM set up categorizations on its punch card system for the way that each person was killed, which group they were assigned to, and the logistical information necessary to track them through the vast Holocaust system. IBM’s code for Jews in the concentration camps was 8: some 6,000,000 were killed. Its code for Romanis was 12 (they were labeled by the Nazis as “asocials,” with over 300,000 killed in the Zigeunerlager, or “Gypsy camp”). General executions were coded as 4, death in the gas chambers as 6.\n\nOf course, the project managers and engineers and technicians involved were just living their ordinary lives. Caring for their families, going to the church on Sunday, doing their jobs the best they could. Following orders. The marketers were just doing what they could to meet their business development goals. As Edwin Black, author of IBM and the Holocaust (Dialog Press) observed: “To the blind technocrat, the means were more important than the ends. The destruction of the Jewish people became even less important because the invigorating nature of IBM’s technical achievement was only heightened by the fantastical profits to be made at a time when bread lines stretched across the world.”\nStep back for a moment and consider: How would you feel if you discovered that you had been part of a system that ended up hurting society? Would you be open to finding out? 
How can you help make sure this doesn’t happen? We have described the most extreme situation here, but there are many negative societal consequences linked to AI and machine learning being observed today, some of which we’ll describe in this chapter.\nIt’s not just a moral burden, either. Sometimes technologists pay very directly for their actions. For instance, the first person who was jailed as a result of the Volkswagen scandal, where the car company was revealed to have cheated on its diesel emissions tests, was not the manager that oversaw the project, or an executive at the helm of the company. It was one of the engineers, James Liang, who just did what he was told.\nOf course, it’s not all bad—if a project you are involved in turns out to make a huge positive impact on even one person, this is going to make you feel pretty great!\nOkay, so hopefully we have convinced you that you ought to care. But what should you do? As data scientists, we’re naturally inclined to focus on making our models better by optimizing some metric or other. But optimizing that metric may not actually lead to better outcomes. And even if it does help create better outcomes, it almost certainly won’t be the only thing that matters. Consider the pipeline of steps that occurs between the development of a model or an algorithm by a researcher or practitioner, and the point at which this work is actually used to make some decision. This entire pipeline needs to be considered as a whole if we’re to have a hope of getting the kinds of outcomes we want.\nNormally there is a very long chain from one end to the other. This is especially true if you are a researcher, where you might not even know if your research will ever get used for anything, or if you’re involved in data collection, which is even earlier in the pipeline. But no one is better placed to inform everyone involved in this chain about the capabilities, constraints, and details of your work than you are. Although there’s no “silver bullet” that can ensure your work is used the right way, by getting involved in the process, and asking the right questions, you can at the very least ensure that the right issues are being considered.\nSometimes, the right response to being asked to do a piece of work is to just say “no.” Often, however, the response we hear is, “If I don’t do it, someone else will.” But consider this: if you’ve been picked for the job, you’re the best person they’ve found to do it—so if you don’t do it, the best person isn’t working on that project. If the first five people they ask all say no too, even better!",
+ "crumbs": [
+ "Getting Started",
+ "3*AI and Data Ethics*"
+ ]
+ },
+ {
+ "objectID": "book3.html#integrating-machine-learning-with-product-design",
+ "href": "book3.html#integrating-machine-learning-with-product-design",
+ "title": "3 AI and Data Ethics",
+ "section": "3.2 Integrating Machine Learning with Product Design",
+ "text": "3.2 Integrating Machine Learning with Product Design\nPresumably the reason you’re doing this work is because you hope it will be used for something. Otherwise, you’re just wasting your time. So, let’s start with the assumption that your work will end up somewhere. Now, as you are collecting your data and developing your model, you are making lots of decisions. What level of aggregation will you store your data at? What loss function should you use? What validation and training sets should you use? Should you focus on simplicity of implementation, speed of inference, or accuracy of the model? How will your model handle out-of-domain data items? Can it be fine-tuned, or must it be retrained from scratch over time?\nThese are not just algorithm questions. They are data product design questions. But the product managers, executives, judges, journalists, doctors… whoever ends up developing and using the system of which your model is a part will not be well-placed to understand the decisions that you made, let alone change them.\nFor instance, two studies found that Amazon’s facial recognition software produced inaccurate and racially biased results. Amazon claimed that the researchers should have changed the default parameters, without explaining how this would have changed the biased results. Furthermore, it turned out that Amazon was not instructing police departments that used its software to do this either. There was, presumably, a big distance between the researchers that developed these algorithms and the Amazon documentation staff that wrote the guidelines provided to the police. A lack of tight integration led to serious problems for society at large, the police, and Amazon themselves. It turned out that their system erroneously matched 28 members of congress to criminal mugshots! (And the Congresspeople wrongly matched to criminal mugshots were disproportionately people of color, as seen in <>.)\n\nData scientists need to be part of a cross-disciplinary team. And researchers need to work closely with the kinds of people who will end up using their research. Better still is if the domain experts themselves have learned enough to be able to train and debug some models themselves—hopefully there are a few of you reading this book right now!\nThe modern workplace is a very specialized place. Everybody tends to have well-defined jobs to perform. Especially in large companies, it can be hard to know what all the pieces of the puzzle are. Sometimes companies even intentionally obscure the overall project goals that are being worked on, if they know that their employees are not going to like the answers. This is sometimes done by compartmentalising pieces as much as possible.\nIn other words, we’re not saying that any of this is easy. It’s hard. It’s really hard. We all have to do our best. And we have often seen that the people who do get involved in the higher-level context of these projects, and attempt to develop cross-disciplinary capabilities and teams, become some of the most important and well rewarded members of their organizations. It’s the kind of work that tends to be highly appreciated by senior executives, even if it is sometimes considered rather uncomfortable by middle management.",
+ "crumbs": [
+ "Getting Started",
+ "3*AI and Data Ethics*"
+ ]
+ },
+ {
+ "objectID": "book3.html#topics-in-data-ethics",
+ "href": "book3.html#topics-in-data-ethics",
+ "title": "3 AI and Data Ethics",
+ "section": "3.3 Topics in Data Ethics",
+ "text": "3.3 Topics in Data Ethics\nData ethics is a big field, and we can’t cover everything. Instead, we’re going to pick a few topics that we think are particularly relevant:\n\nThe need for recourse and accountability\nFeedback loops\nBias\nDisinformation\n\nLet’s look at each in turn.\n\nRecourse and Accountability\nIn a complex system, it is easy for no one person to feel responsible for outcomes. While this is understandable, it does not lead to good results. In the earlier example of the Arkansas healthcare system in which a bug led to people with cerebral palsy losing access to needed care, the creator of the algorithm blamed government officials, and government officials blamed those who implemented the software. NYU professor Danah Boyd described this phenomenon: “Bureaucracy has often been used to shift or evade responsibility… Today’s algorithmic systems are extending bureaucracy.”\nAn additional reason why recourse is so necessary is because data often contains errors. Mechanisms for audits and error correction are crucial. A database of suspected gang members maintained by California law enforcement officials was found to be full of errors, including 42 babies who had been added to the database when they were less than 1 year old (28 of whom were marked as “admitting to being gang members”). In this case, there was no process in place for correcting mistakes or removing people once they’d been added. Another example is the US credit report system: in a large-scale study of credit reports by the Federal Trade Commission (FTC) in 2012, it was found that 26% of consumers had at least one mistake in their files, and 5% had errors that could be devastating. Yet, the process of getting such errors corrected is incredibly slow and opaque. When public radio reporter Bobby Allyn discovered that he was erroneously listed as having a firearms conviction, it took him “more than a dozen phone calls, the handiwork of a county court clerk and six weeks to solve the problem. And that was only after I contacted the company’s communications department as a journalist.”\nAs machine learning practitioners, we do not always think of it as our responsibility to understand how our algorithms end up being implemented in practice. But we need to.\n\n\nFeedback Loops\nWe explained in <> how an algorithm can interact with its environment to create a feedback loop, making predictions that reinforce actions taken in the real world, which lead to predictions even more pronounced in the same direction. As an example, let’s again consider YouTube’s recommendation system. A couple of years ago the Google team talked about how they had introduced reinforcement learning (closely related to deep learning, but where your loss function represents a result potentially a long time after an action occurs) to improve YouTube’s recommendation system. They described how they used an algorithm that made recommendations such that watch time would be optimized.\nHowever, human beings tend to be drawn to controversial content. This meant that videos about things like conspiracy theories started to get recommended more and more by the recommendation system. Furthermore, it turns out that the kinds of people that are interested in conspiracy theories are also people that watch a lot of online videos! So, they started to get drawn more and more toward YouTube. 
The increasing number of conspiracy theorists watching videos on YouTube resulted in the algorithm recommending more and more conspiracy theory and other extremist content, which resulted in more extremists watching videos on YouTube, and more people watching YouTube developing extremist views, which led to the algorithm recommending more extremist content… The system was spiraling out of control.\nAnd this phenomenon was not contained to this particular type of content. In June 2019 the New York Times published an article on YouTube’s recommendation system, titled “On YouTube’s Digital Playground, an Open Gate for Pedophiles”. The article started with this chilling story:\n\n: Christiane C. didn’t think anything of it when her 10-year-old daughter and a friend uploaded a video of themselves playing in a backyard pool… A few days later… the video had thousands of views. Before long, it had ticked up to 400,000… “I saw the video again and I got scared by the number of views,” Christiane said. She had reason to be. YouTube’s automated recommendation system… had begun showing the video to users who watched other videos of prepubescent, partially clothed children, a team of researchers has found.\n\n\n: On its own, each video might be perfectly innocent, a home movie, say, made by a child. Any revealing frames are fleeting and appear accidental. But, grouped together, their shared features become unmistakable.\n\nYouTube’s recommendation algorithm had begun curating playlists for pedophiles, picking out innocent home videos that happened to contain prepubescent, partially clothed children.\nNo one at Google planned to create a system that turned family videos into porn for pedophiles. So what happened?\nPart of the problem here is the centrality of metrics in driving a financially important system. When an algorithm has a metric to optimize, as you have seen, it will do everything it can to optimize that number. This tends to lead to all kinds of edge cases, and humans interacting with a system will search for, find, and exploit these edge cases and feedback loops for their advantage.\nThere are signs that this is exactly what has happened with YouTube’s recommendation system. The Guardian ran an article called “How an ex-YouTube Insider Investigated its Secret Algorithm” about Guillaume Chaslot, an ex-YouTube engineer who created AlgoTransparency, which tracks these issues. Chaslot published the chart in <>, following the release of Robert Mueller’s “Report on the Investigation Into Russian Interference in the 2016 Presidential Election.”\n\nRussia Today’s coverage of the Mueller report was an extreme outlier in terms of how many channels were recommending it. This suggests the possibility that Russia Today, a state-owned Russia media outlet, has been successful in gaming YouTube’s recommendation algorithm. Unfortunately, the lack of transparency of systems like this makes it hard to uncover the kinds of problems that we’re discussing.\nOne of our reviewers for this book, Aurélien Géron, led YouTube’s video classification team from 2013 to 2016 (well before the events discussed here). He pointed out that it’s not just feedback loops involving humans that are a problem. There can also be feedback loops without humans! He told us about an example from YouTube:\n\n: One important signal to classify the main topic of a video is the channel it comes from. For example, a video uploaded to a cooking channel is very likely to be a cooking video. But how do we know what topic a channel is about? 
Well… in part by looking at the topics of the videos it contains! Do you see the loop? For example, many videos have a description which indicates what camera was used to shoot the video. As a result, some of these videos might get classified as videos about “photography.” If a channel has such a misclassified video, it might be classified as a “photography” channel, making it even more likely for future videos on this channel to be wrongly classified as “photography.” This could even lead to runaway virus-like classifications! One way to break this feedback loop is to classify videos with and without the channel signal. Then when classifying the channels, you can only use the classes obtained without the channel signal. This way, the feedback loop is broken.\n\nThere are positive examples of people and organizations attempting to combat these problems. Evan Estola, lead machine learning engineer at Meetup, discussed the example of men expressing more interest than women in tech meetups. taking gender into account could therefore cause Meetup’s algorithm to recommend fewer tech meetups to women, and as a result, fewer women would find out about and attend tech meetups, which could cause the algorithm to suggest even fewer tech meetups to women, and so on in a self-reinforcing feedback loop. So, Evan and his team made the ethical decision for their recommendation algorithm to not create such a feedback loop, by explicitly not using gender for that part of their model. It is encouraging to see a company not just unthinkingly optimize a metric, but consider its impact. According to Evan, “You need to decide which feature not to use in your algorithm… the most optimal algorithm is perhaps not the best one to launch into production.”\nWhile Meetup chose to avoid such an outcome, Facebook provides an example of allowing a runaway feedback loop to run wild. Like YouTube, it tends to radicalize users interested in one conspiracy theory by introducing them to more. As Renee DiResta, a researcher on proliferation of disinformation, writes:\n\n: Once people join a single conspiracy-minded [Facebook] group, they are algorithmically routed to a plethora of others. Join an anti-vaccine group, and your suggestions will include anti-GMO, chemtrail watch, flat Earther (yes, really), and “curing cancer naturally groups. Rather than pulling a user out of the rabbit hole, the recommendation engine pushes them further in.”\n\nIt is extremely important to keep in mind that this kind of behavior can happen, and to either anticipate a feedback loop or take positive action to break it when you see the first signs of it in your own projects. Another thing to keep in mind is bias, which, as we discussed briefly in the previous chapter, can interact with feedback loops in very troublesome ways.\n\n\nBias\nDiscussions of bias online tend to get pretty confusing pretty fast. The word “bias” means so many different things. Statisticians often think when data ethicists are talking about bias that they’re talking about the statistical definition of the term bias. But they’re not. And they’re certainly not talking about the biases that appear in the weights and biases which are the parameters of your model!\nWhat they’re talking about is the social science concept of bias. 
In “A Framework for Understanding Unintended Consequences of Machine Learning” MIT’s Harini Suresh and John Guttag describe six types of bias in machine learning, summarized in <> from their paper.\n\nWe’ll discuss four of these types of bias, those that we’ve found most helpful in our own work (see the paper for details on the others).\n\nHistorical bias\nHistorical bias comes from the fact that people are biased, processes are biased, and society is biased. Suresh and Guttag say: “Historical bias is a fundamental, structural issue with the first step of the data generation process and can exist even given perfect sampling and feature selection.”\nFor instance, here are a few examples of historical race bias in the US, from the New York Times article “Racial Bias, Even When We Have Good Intentions” by the University of Chicago’s Sendhil Mullainathan:\n\nWhen doctors were shown identical files, they were much less likely to recommend cardiac catheterization (a helpful procedure) to Black patients.\nWhen bargaining for a used car, Black people were offered initial prices $700 higher and received far smaller concessions.\nResponding to apartment rental ads on Craigslist with a Black name elicited fewer responses than with a white name.\nAn all-white jury was 16 percentage points more likely to convict a Black defendant than a white one, but when a jury had one Black member it convicted both at the same rate.\n\nThe COMPAS algorithm, widely used for sentencing and bail decisions in the US, is an example of an important algorithm that, when tested by ProPublica, showed clear racial bias in practice (<>).\n\nAny dataset involving humans can have this kind of bias: medical data, sales data, housing data, political data, and so on. Because underlying bias is so pervasive, bias in datasets is very pervasive. Racial bias even turns up in computer vision, as shown in the example of autocategorized photos shared on Twitter by a Google Photos user shown in <>.\n\nYes, that is showing what you think it is: Google Photos classified a Black user’s photo with their friend as “gorillas”! This algorithmic misstep got a lot of attention in the media. “We’re appalled and genuinely sorry that this happened,” a company spokeswoman said. “There is still clearly a lot of work to do with automatic image labeling, and we’re looking at how we can prevent these types of mistakes from happening in the future.”\nUnfortunately, fixing problems in machine learning systems when the input data has problems is hard. Google’s first attempt didn’t inspire confidence, as coverage by The Guardian suggested (<>).\n\nThese kinds of problems are certainly not limited to just Google. MIT researchers studied the most popular online computer vision APIs to see how accurate they were. But they didn’t just calculate a single accuracy number—instead, they looked at the accuracy across four different groups, as illustrated in <>.\n\nIBM’s system, for instance, had a 34.7% error rate for darker females, versus 0.3% for lighter males—over 100 times more errors! Some people incorrectly reacted to these experiments by claiming that the difference was simply because darker skin is harder for computers to recognize. However, what actually happened was that, after the negative publicity that this result created, all of the companies in question dramatically improved their models for darker skin, such that one year later they were nearly as good as for lighter skin. 
So what this actually showed is that the developers failed to utilize datasets containing enough darker faces, or test their product with darker faces.\nOne of the MIT researchers, Joy Buolamwini, warned: “We have entered the age of automation overconfident yet underprepared. If we fail to make ethical and inclusive artificial intelligence, we risk losing gains made in civil rights and gender equity under the guise of machine neutrality.”\nPart of the issue appears to be a systematic imbalance in the makeup of popular datasets used for training models. The abstract to the paper “No Classification Without Representation: Assessing Geodiversity Issues in Open Data Sets for the Developing World” by Shreya Shankar et al. states, “We analyze two large, publicly available image data sets to assess geo-diversity and find that these data sets appear to exhibit an observable amerocentric and eurocentric representation bias. Further, we analyze classifiers trained on these data sets to assess the impact of these training distributions and find strong differences in the relative performance on images from different locales.” <> shows one of the charts from the paper, showing the geographic makeup of what was, at the time (and still are, as this book is being written) the two most important image datasets for training models.\n\nThe vast majority of the images are from the United States and other Western countries, leading to models trained on ImageNet performing worse on scenes from other countries and cultures. For instance, research found that such models are worse at identifying household items (such as soap, spices, sofas, or beds) from lower-income countries. <> shows an image from the paper, “Does Object Recognition Work for Everyone?” by Terrance DeVries et al. of Facebook AI Research that illustrates this point.\n\nIn this example, we can see that the lower-income soap example is a very long way away from being accurate, with every commercial image recognition service predicting “food” as the most likely answer!\nAs we will discuss shortly, in addition, the vast majority of AI researchers and developers are young white men. Most projects that we have seen do most user testing using friends and families of the immediate product development group. Given this, the kinds of problems we just discussed should not be surprising.\nSimilar historical bias is found in the texts used as data for natural language processing models. This crops up in downstream machine learning tasks in many ways. For instance, it was widely reported that until last year Google Translate showed systematic bias in how it translated the Turkish gender-neutral pronoun “o” into English: when applied to jobs which are often associated with males it used “he,” and when applied to jobs which are often associated with females it used “she” (<>).\n\nWe also see this kind of bias in online advertisements. For instance, a study in 2019 by Muhammad Ali et al. found that even when the person placing the ad does not intentionally discriminate, Facebook will show ads to very different audiences based on race and gender. 
Housing ads with the same text, but picture either a white or a Black family, were shown to racially different audiences.\n\n\nMeasurement bias\nIn the paper “Does Machine Learning Automate Moral Hazard and Error” in American Economic Review, Sendhil Mullainathan and Ziad Obermeyer look at a model that tries to answer the question: using historical electronic health record (EHR) data, what factors are most predictive of stroke? These are the top predictors from the model:\n\nPrior stroke\nCardiovascular disease\nAccidental injury\nBenign breast lump\nColonoscopy\nSinusitis\n\nHowever, only the top two have anything to do with a stroke! Based on what we’ve studied so far, you can probably guess why. We haven’t really measured stroke, which occurs when a region of the brain is denied oxygen due to an interruption in the blood supply. What we’ve measured is who had symptoms, went to a doctor, got the appropriate tests, and received a diagnosis of stroke. Actually having a stroke is not the only thing correlated with this complete list—it’s also correlated with being the kind of person who actually goes to the doctor (which is influenced by who has access to healthcare, can afford their co-pay, doesn’t experience racial or gender-based medical discrimination, and more)! If you are likely to go to the doctor for an accidental injury, then you are likely to also go the doctor when you are having a stroke.\nThis is an example of measurement bias. It occurs when our models make mistakes because we are measuring the wrong thing, or measuring it in the wrong way, or incorporating that measurement into the model inappropriately.\n\n\nAggregation bias\nAggregation bias occurs when models do not aggregate data in a way that incorporates all of the appropriate factors, or when a model does not include the necessary interaction terms, nonlinearities, or so forth. This can particularly occur in medical settings. For instance, the way diabetes is treated is often based on simple univariate statistics and studies involving small groups of heterogeneous people. Analysis of results is often done in a way that does not take account of different ethnicities or genders. However, it turns out that diabetes patients have different complications across ethnicities, and HbA1c levels (widely used to diagnose and monitor diabetes) differ in complex ways across ethnicities and genders. This can result in people being misdiagnosed or incorrectly treated because medical decisions are based on a model that does not include these important variables and interactions.\n\n\nRepresentation bias\nThe abstract of the paper “Bias in Bios: A Case Study of Semantic Representation Bias in a High-Stakes Setting” by Maria De-Arteaga et al. notes that there is gender imbalance in occupations (e.g., females are more likely to be nurses, and males are more likely to be pastors), and says that: “differences in true positive rates between genders are correlated with existing gender imbalances in occupations, which may compound these imbalances.”\nIn other words, the researchers noticed that models predicting occupation did not only reflect the actual gender imbalance in the underlying population, but actually amplified it! This type of representation bias is quite common, particularly for simple models. When there is some clear, easy-to-see underlying relationship, a simple model will often simply assume that this relationship holds all the time. 
As <> from the paper shows, for occupations that had a higher percentage of females, the model tended to overestimate the prevalence of that occupation.\n\nFor example, in the training dataset 14.6% of surgeons were women, yet in the model predictions only 11.6% of the true positives were women. The model is thus amplifying the bias existing in the training set.\nNow that we’ve seen that those biases exist, what can we do to mitigate them?\n\n\n\nAddressing different types of bias\nDifferent types of bias require different approaches for mitigation. While gathering a more diverse dataset can address representation bias, this would not help with historical bias or measurement bias. All datasets contain bias. There is no such thing as a completely debiased dataset. Many researchers in the field have been converging on a set of proposals to enable better documentation of the decisions, context, and specifics about how and why a particular dataset was created, what scenarios it is appropriate to use in, and what the limitations are. This way, those using a particular dataset will not be caught off guard by its biases and limitations.\nWe often hear the question—“Humans are biased, so does algorithmic bias even matter?” This comes up so often, there must be some reasoning that makes sense to the people that ask it, but it doesn’t seem very logically sound to us! Independently of whether this is logically sound, it’s important to realize that algorithms (particularly machine learning algorithms!) and people are different. Consider these points about machine learning algorithms:\n\nMachine learning can create feedback loops:: Small amounts of bias can rapidly increase exponentially due to feedback loops.\nMachine learning can amplify bias:: Human bias can lead to larger amounts of machine learning bias.\nAlgorithms & humans are used differently:: Human decision makers and algorithmic decision makers are not used in a plug-and-play interchangeable way in practice.\nTechnology is power:: And with that comes responsibility.\n\nAs the Arkansas healthcare example showed, machine learning is often implemented in practice not because it leads to better outcomes, but because it is cheaper and more efficient. Cathy O’Neill, in her book Weapons of Math Destruction (Crown), described the pattern of how the privileged are processed by people, whereas the poor are processed by algorithms. This is just one of a number of ways that algorithms are used differently than human decision makers. Others include:\n\nPeople are more likely to assume algorithms are objective or error-free (even if they’re given the option of a human override).\nAlgorithms are more likely to be implemented with no appeals process in place.\nAlgorithms are often used at scale.\nAlgorithmic systems are cheap.\n\nEven in the absence of bias, algorithms (and deep learning especially, since it is such an effective and scalable algorithm) can lead to negative societal problems, such as when used for disinformation.\n\n\nDisinformation\nDisinformation has a history stretching back hundreds or even thousands of years. It is not necessarily about getting someone to believe something false, but rather often used to sow disharmony and uncertainty, and to get people to give up on seeking the truth. 
Receiving conflicting accounts can lead people to assume that they can never know whom or what to trust.\nSome people think disinformation is primarily about false information or fake news, but in reality, disinformation can often contain seeds of truth, or half-truths taken out of context. Ladislav Bittman was an intelligence officer in the USSR who later defected to the US and wrote some books in the 1970s and 1980s on the role of disinformation in Soviet propaganda operations. In The KGB and Soviet Disinformation (Pergamon) he wrote, “Most campaigns are a carefully designed mixture of facts, half-truths, exaggerations, and deliberate lies.”\nIn the US this has hit close to home in recent years, with the FBI detailing a massive disinformation campaign linked to Russia in the 2016 election. Understanding the disinformation that was used in this campaign is very educational. For instance, the FBI found that the Russian disinformation campaign often organized two separate fake “grass roots” protests, one for each side of an issue, and got them to protest at the same time! The Houston Chronicle reported on one of these odd events (<>).\n\n: A group that called itself the “Heart of Texas” had organized it on social media—a protest, they said, against the “Islamization” of Texas. On one side of Travis Street, I found about 10 protesters. On the other side, I found around 50 counterprotesters. But I couldn’t find the rally organizers. No “Heart of Texas.” I thought that was odd, and mentioned it in the article: What kind of group is a no-show at its own event? Now I know why. Apparently, the rally’s organizers were in Saint Petersburg, Russia, at the time. “Heart of Texas” is one of the internet troll groups cited in Special Prosecutor Robert Mueller’s recent indictment of Russians attempting to tamper with the U.S. presidential election.\n\n\nDisinformation often involves coordinated campaigns of inauthentic behavior. For instance, fraudulent accounts may try to make it seem like many people hold a particular viewpoint. While most of us like to think of ourselves as independent-minded, in reality we evolved to be influenced by others in our in-group, and in opposition to those in our out-group. Online discussions can influence our viewpoints, or alter the range of what we consider acceptable viewpoints. Humans are social animals, and as social animals we are extremely influenced by the people around us. Increasingly, radicalization occurs in online environments; influence is coming from people in the virtual space of online forums and social networks.\nDisinformation through autogenerated text is a particularly significant issue, due to the greatly increased capability provided by deep learning. We discuss this issue in depth when we delve into creating language models, in <>.\nOne proposed approach is to develop some form of digital signature, to implement it in a seamless way, and to create norms that we should only trust content that has been verified. The head of the Allen Institute on AI, Oren Etzioni, wrote such a proposal in an article titled “How Will We Prevent AI-Based Forgery?”: “AI is poised to make high-fidelity forgery inexpensive and automated, leading to potentially disastrous consequences for democracy, security, and society. 
The specter of AI forgery means that we need to act to make digital signatures de rigueur as a means of authentication of digital content.”\nWhilst we can’t hope to discuss all the ethical issues that deep learning, and algorithms more generally, brings up, hopefully this brief introduction has been a useful starting point you can build on. We’ll now move on to the questions of how to identify ethical issues, and what to do about them.",
+ "crumbs": [
+ "Getting Started",
+ "3*AI and Data Ethics*"
+ ]
+ },
+ {
+ "objectID": "book3.html#identifying-and-addressing-ethical-issues",
+ "href": "book3.html#identifying-and-addressing-ethical-issues",
+ "title": "3 AI and Data Ethics",
+ "section": "3.4 Identifying and Addressing Ethical Issues",
+ "text": "3.4 Identifying and Addressing Ethical Issues\nMistakes happen. Finding out about them, and dealing with them, needs to be part of the design of any system that includes machine learning (and many other systems too). The issues raised within data ethics are often complex and interdisciplinary, but it is crucial that we work to address them.\nSo what can we do? This is a big topic, but a few steps towards addressing ethical issues are:\n\nAnalyze a project you are working on.\nImplement processes at your company to find and address ethical risks.\nSupport good policy.\nIncrease diversity.\n\nLet’s walk through each of these steps, starting with analyzing a project you are working on.\n\nAnalyze a Project You Are Working On\nIt’s easy to miss important issues when considering ethical implications of your work. One thing that helps enormously is simply asking the right questions. Rachel Thomas recommends considering the following questions throughout the development of a data project:\n\nShould we even be doing this?\nWhat bias is in the data?\nCan the code and data be audited?\nWhat are the error rates for different sub-groups?\nWhat is the accuracy of a simple rule-based alternative?\nWhat processes are in place to handle appeals or mistakes?\nHow diverse is the team that built it?\n\nThese questions may be able to help you identify outstanding issues, and possible alternatives that are easier to understand and control. In addition to asking the right questions, it’s also important to consider practices and processes to implement.\nOne thing to consider at this stage is what data you are collecting and storing. Data often ends up being used for different purposes than what it was originally collected for. For instance, IBM began selling to Nazi Germany well before the Holocaust, including helping with Germany’s 1933 census conducted by Adolf Hitler, which was effective at identifying far more Jewish people than had previously been recognized in Germany. Similarly, US census data was used to round up Japanese-Americans (who were US citizens) for internment during World War II. It is important to recognize how data and images collected can be weaponized later. Columbia professor Tim Wu wrote that “You must assume that any personal data that Facebook or Android keeps are data that governments around the world will try to get or that thieves will try to steal.”\n\n\nProcesses to Implement\nThe Markkula Center has released An Ethical Toolkit for Engineering/Design Practice that includes some concrete practices to implement at your company, including regularly scheduled sweeps to proactively search for ethical risks (in a manner similar to cybersecurity penetration testing), expanding the ethical circle to include the perspectives of a variety of stakeholders, and considering the terrible people (how could bad actors abuse, steal, misinterpret, hack, destroy, or weaponize what you are building?).\nEven if you don’t have a diverse team, you can still try to pro-actively include the perspectives of a wider group, considering questions such as these (provided by the Markkula Center):\n\nWhose interests, desires, skills, experiences, and values have we simply assumed, rather than actually consulted?\nWho are all the stakeholders who will be directly affected by our product? How have their interests been protected? 
How do we know what their interests really are—have we asked?\nWho/which groups and individuals will be indirectly affected in significant ways?\nWho might use this product that we didn’t expect to use it, or for purposes we didn’t initially intend?\n\n\nEthical lenses\nAnother useful resource from the Markkula Center is its Conceptual Frameworks in Technology and Engineering Practice. This considers how different foundational ethical lenses can help identify concrete issues, and lays out the following approaches and key questions:\n\nThe rights approach:: Which option best respects the rights of all who have a stake?\nThe justice approach:: Which option treats people equally or proportionately?\nThe utilitarian approach:: Which option will produce the most good and do the least harm?\nThe common good approach:: Which option best serves the community as a whole, not just some members?\nThe virtue approach:: Which option leads me to act as the sort of person I want to be?\n\nMarkkula’s recommendations include a deeper dive into each of these perspectives, including looking at a project through the lenses of its consequences:\n\nWho will be directly affected by this project? Who will be indirectly affected?\nWill the effects in aggregate likely create more good than harm, and what types of good and harm?\nAre we thinking about all relevant types of harm/benefit (psychological, political, environmental, moral, cognitive, emotional, institutional, cultural)?\nHow might future generations be affected by this project?\nDo the risks of harm from this project fall disproportionately on the least powerful in society? Will the benefits go disproportionately to the well-off?\nHave we adequately considered “dual-use”?\n\nThe alternative lens to this is the deontological perspective, which focuses on basic concepts of right and wrong:\n\nWhat rights of others and duties to others must we respect?\nHow might the dignity and autonomy of each stakeholder be impacted by this project?\nWhat considerations of trust and of justice are relevant to this design/project?\nDoes this project involve any conflicting moral duties to others, or conflicting stakeholder rights? How can we prioritize these?\n\nOne of the best ways to help come up with complete and thoughtful answers to questions like these is to ensure that the people asking the questions are diverse.\n\n\n\nThe Power of Diversity\nCurrently, less than 12% of AI researchers are women, according to a study from Element AI. The statistics are similarly dire when it comes to race and age. When everybody on a team has similar backgrounds, they are likely to have similar blindspots around ethical risks. The Harvard Business Review (HBR) has published a number of studies showing many benefits of diverse teams, including:\n\n“How Diversity Can Drive Innovation”\n“Teams Solve Problems Faster When They’re More Cognitively Diverse”\n“Why Diverse Teams Are Smarter”, and\n“Defend Your Research: What Makes a Team Smarter? More Women”\n\nDiversity can lead to problems being identified earlier, and a wider range of solutions being considered. For instance, Tracy Chou was an early engineer at Quora. She wrote of her experiences, describing how she advocated internally for adding a feature that would allow trolls and other bad actors to be blocked. 
Chou recounts, “I was eager to work on the feature because I personally felt antagonized and abused on the site (gender isn’t an unlikely reason as to why)… But if I hadn’t had that personal perspective, it’s possible that the Quora team wouldn’t have prioritized building a block button so early in its existence.” Harassment often drives people from marginalized groups off online platforms, so this functionality has been important for maintaining the health of Quora’s community.\nA crucial aspect to understand is that women leave the tech industry at over twice the rate that men do, according to the Harvard Business Review (41% of women working in tech leave, compared to 17% of men). An analysis of over 200 books, white papers, and articles found that the reason they leave is that “they’re treated unfairly; underpaid, less likely to be fast-tracked than their male colleagues, and unable to advance.”\nStudies have confirmed a number of the factors that make it harder for women to advance in the workplace. Women receive more vague feedback and personality criticism in performance evaluations, whereas men receive actionable advice tied to business outcomes (which is more useful). Women frequently experience being excluded from more creative and innovative roles, and not receiving high-visibility “stretch” assignments that are helpful in getting promoted. One study found that men’s voices are perceived as more persuasive, fact-based, and logical than women’s voices, even when reading identical scripts.\nReceiving mentorship has been statistically shown to help men advance, but not women. The reason behind this is that when women receive mentorship, it’s advice on how they should change and gain more self-knowledge. When men receive mentorship, it’s public endorsement of their authority. Guess which is more useful in getting promoted?\nAs long as qualified women keep dropping out of tech, teaching more girls to code will not solve the diversity issues plaguing the field. Diversity initiatives often end up focusing primarily on white women, even though women of color face many additional barriers. In interviews with 60 women of color who work in STEM research, 100% had experienced discrimination.\nThe hiring process is particularly broken in tech. One study indicative of the dysfunction comes from Triplebyte, a company that helps place software engineers in companies, conducting a standardized technical interview as part of this process. They have a fascinating dataset: the results of how over 300 engineers did on their exam, coupled with the results of how those engineers did during the interview process for a variety of companies. The number one finding from Triplebyte’s research is that “the types of programmers that each company looks for often have little to do with what the company needs or does. Rather, they reflect company culture and the backgrounds of the founders.”\nThis is a challenge for those trying to break into the world of deep learning, since most companies’ deep learning groups today were founded by academics. These groups tend to look for people “like them”—that is, people that can solve complex math problems and understand dense jargon. 
They don’t always know how to spot people who are actually good at solving real problems using deep learning.\nThis leaves a big opportunity for companies that are ready to look beyond status and pedigree, and focus on results!\n\n\nFairness, Accountability, and Transparency\nThe professional society for computer scientists, the ACM, runs a data ethics conference called the Conference on Fairness, Accountability, and Transparency, which used to go under the acronym FAT but now uses the less objectionable FAccT. Microsoft has a group focused on “Fairness, Accountability, Transparency, and Ethics” (FATE). In this section, we’ll use “FAccT” to refer to the concepts of Fairness, Accountability, and Transparency.\nFAccT is another lens that you may find useful in considering ethical issues. One useful resource for this is the free online book Fairness and Machine Learning: Limitations and Opportunities by Solon Barocas, Moritz Hardt, and Arvind Narayanan, which “gives a perspective on machine learning that treats fairness as a central concern rather than an afterthought.” It also warns, however, that it “is intentionally narrow in scope… A narrow framing of machine learning ethics might be tempting to technologists and businesses as a way to focus on technical interventions while sidestepping deeper questions about power and accountability. We caution against this temptation.” Rather than provide an overview of the FAccT approach to ethics (which is better done in books such as that one), our focus here will be on the limitations of this kind of narrow framing.\nOne great way to consider whether an ethical lens is complete is to try to come up with an example where the lens and our own ethical intuitions give diverging results. Os Keyes, Jevan Hutson, and Meredith Durbin explored this in a graphic way in their paper “A Mulching Proposal: Analysing and Improving an Algorithmic System for Turning the Elderly into High-Nutrient Slurry”. The paper’s abstract says:\n\n: The ethical implications of algorithmic systems have been much discussed in both HCI and the broader community of those interested in technology design, development and policy. In this paper, we explore the application of one prominent ethical framework - Fairness, Accountability, and Transparency - to a proposed algorithm that resolves various societal issues around food security and population aging. Using various standardised forms of algorithmic audit and evaluation, we drastically increase the algorithm’s adherence to the FAT framework, resulting in a more ethical and beneficent system. We discuss how this might serve as a guide to other researchers or practitioners looking to ensure better ethical outcomes from algorithmic systems in their line of work.\n\nIn this paper, the rather controversial proposal (“Turning the Elderly into High-Nutrient Slurry”) and the results (“drastically increase the algorithm’s adherence to the FAT framework, resulting in a more ethical and beneficent system”) are at odds… to say the least!\nIn philosophy, and especially philosophy of ethics, this is one of the most effective tools: first, come up with a process, definition, set of questions, etc., which is designed to resolve some problem. Then try to come up with an example where that apparent solution results in a proposal that no one would consider acceptable. This can then lead to a further refinement of the solution.\nSo far, we’ve focused on things that you and your organization can do. 
But sometimes individual or organizational action is not enough. Sometimes, governments also need to consider policy implications.",
+ "crumbs": [
+ "Getting Started",
+ "3*AI and Data Ethics*"
+ ]
+ },
+ {
+ "objectID": "book3.html#role-of-policy",
+ "href": "book3.html#role-of-policy",
+ "title": "3 AI and Data Ethics",
+ "section": "3.5 Role of Policy",
+ "text": "3.5 Role of Policy\nWe often talk to people who are eager for technical or design fixes to be a full solution to the kinds of problems that we’ve been discussing; for instance, a technical approach to debias data, or design guidelines for making technology less addictive. While such measures can be useful, they will not be sufficient to address the underlying problems that have led to our current state. For example, as long as it is incredibly profitable to create addictive technology, companies will continue to do so, regardless of whether this has the side effect of promoting conspiracy theories and polluting our information ecosystem. While individual designers may try to tweak product designs, we will not see substantial changes until the underlying profit incentives change.\n\nThe Effectiveness of Regulation\nTo look at what can cause companies to take concrete action, consider the following two examples of how Facebook has behaved. In 2018, a UN investigation found that Facebook had played a “determining role” in the ongoing genocide of the Rohingya, an ethnic minority in Mynamar described by UN Secretary-General Antonio Guterres as “one of, if not the, most discriminated people in the world.” Local activists had been warning Facebook executives that their platform was being used to spread hate speech and incite violence since as early as 2013. In 2015, they were warned that Facebook could play the same role in Myanmar that the radio broadcasts played during the Rwandan genocide (where a million people were killed). Yet, by the end of 2015, Facebook only employed four contractors that spoke Burmese. As one person close to the matter said, “That’s not 20/20 hindsight. The scale of this problem was significant and it was already apparent.” Zuckerberg promised during the congressional hearings to hire “dozens” to address the genocide in Myanmar (in 2018, years after the genocide had begun, including the destruction by fire of at least 288 villages in northern Rakhine state after August 2017).\nThis stands in stark contrast to Facebook quickly hiring 1,200 people in Germany to try to avoid expensive penalties (of up to 50 million euros) under a new German law against hate speech. Clearly, in this case, Facebook was more reactive to the threat of a financial penalty than to the systematic destruction of an ethnic minority.\nIn an article on privacy issues, Maciej Ceglowski draws parallels with the environmental movement:\n\n: This regulatory project has been so successful in the First World that we risk forgetting what life was like before it. Choking smog of the kind that today kills thousands in Jakarta and Delhi was https://en.wikipedia.org/wiki/Pea_soup_fog[once emblematic of London]. The Cuyahoga River in Ohio used to http://www.ohiohistorycentral.org/w/Cuyahoga_River_Fire[reliably catch fire]. In a particularly horrific example of unforeseen consequences, tetraethyl lead added to gasoline https://en.wikipedia.org/wiki/Lead%E2%80%93crime_hypothesis[raised violent crime rates] worldwide for fifty years. None of these harms could have been fixed by telling people to vote with their wallet, or carefully review the environmental policies of every company they gave their business to, or to stop using the technologies in question. It took coordinated, and sometimes highly technical, regulation across jurisdictional boundaries to fix them. 
In some cases, like the https://en.wikipedia.org/wiki/Montreal_Protocol[ban on commercial refrigerants] that depleted the ozone layer, that regulation required a worldwide consensus. We’re at the point where we need a similar shift in perspective in our privacy law.\n\n\n\nRights and Policy\nClean air and clean drinking water are public goods which are nearly impossible to protect through individual market decisions, but rather require coordinated regulatory action. Similarly, many of the harms resulting from unintended consequences of misuses of technology involve public goods, such as a polluted information environment or deteriorated ambient privacy. Too often privacy is framed as an individual right, yet there are societal impacts to widespread surveillance (which would still be the case even if it was possible for a few individuals to opt out).\nMany of the issues we are seeing in tech are actually human rights issues, such as when a biased algorithm recommends that Black defendants have longer prison sentences, when particular job ads are only shown to young people, or when police use facial recognition to identify protesters. The appropriate venue to address human rights issues is typically through the law.\nWe need both regulatory and legal changes, and the ethical behavior of individuals. Individual behavior change can’t address misaligned profit incentives, externalities (where corporations reap large profits while offloading their costs and harms to the broader society), or systemic failures. However, the law will never cover all edge cases, and it is important that individual software developers and data scientists are equipped to make ethical decisions in practice.\n\n\nCars: A Historical Precedent\nThe problems we are facing are complex, and there are no simple solutions. This can be discouraging, but we find hope in considering other large challenges that people have tackled throughout history. One example is the movement to increase car safety, covered as a case study in “Datasheets for Datasets” by Timnit Gebru et al. and in the design podcast 99% Invisible. Early cars had no seatbelts, metal knobs on the dashboard that could lodge in people’s skulls during a crash, regular plate glass windows that shattered in dangerous ways, and non-collapsible steering columns that impaled drivers. However, car companies were incredibly resistant to even discussing the idea of safety as something they could help address, and the widespread belief was that cars are just the way they are, and that it was the people using them who caused problems.\nIt took consumer safety activists and advocates decades of work to even change the national conversation to consider that perhaps car companies had some responsibility which should be addressed through regulation. When the collapsible steering column was invented, it was not implemented for several years as there was no financial incentive to do so. Major car company General Motors hired private detectives to try to dig up dirt on consumer safety advocate Ralph Nader. The requirement of seatbelts, crash test dummies, and collapsible steering columns were major victories. It was only in 2011 that car companies were required to start using crash test dummies that would represent the average woman, and not just average men’s bodies; prior to this, women were 40% more likely to be injured in a car crash of the same impact compared to a man. This is a vivid example of the ways that bias, policy, and technology have important consequences.",
+ "crumbs": [
+ "Getting Started",
+ "3*AI and Data Ethics*"
+ ]
+ },
+ {
+ "objectID": "book3.html#conclusion",
+ "href": "book3.html#conclusion",
+ "title": "3 AI and Data Ethics",
+ "section": "3.6 Conclusion",
+ "text": "3.6 Conclusion\nComing from a background of working with binary logic, the lack of clear answers in ethics can be frustrating at first. Yet, the implications of how our work impacts the world, including unintended consequences and the work becoming weaponized by bad actors, are some of the most important questions we can (and should!) consider. Even though there aren’t any easy answers, there are definite pitfalls to avoid and practices to follow to move toward more ethical behavior.\nMany people (including us!) are looking for more satisfying, solid answers about how to address harmful impacts of technology. However, given the complex, far-reaching, and interdisciplinary nature of the problems we are facing, there are no simple solutions. Julia Angwin, former senior reporter at ProPublica who focuses on issues of algorithmic bias and surveillance (and one of the 2016 investigators of the COMPAS recidivism algorithm that helped spark the field of FAccT) said in a 2019 interview:\n\n: I strongly believe that in order to solve a problem, you have to diagnose it, and that we’re still in the diagnosis phase of this. If you think about the turn of the century and industrialization, we had, I don’t know, 30 years of child labor, unlimited work hours, terrible working conditions, and it took a lot of journalist muckraking and advocacy to diagnose the problem and have some understanding of what it was, and then the activism to get laws changed. I feel like we’re in a second industrialization of data information… I see my role as trying to make as clear as possible what the downsides are, and diagnosing them really accurately so that they can be solvable. That’s hard work, and lots more people need to be doing it.\n\nIt’s reassuring that Angwin thinks we are largely still in the diagnosis phase: if your understanding of these problems feels incomplete, that is normal and natural. Nobody has a “cure” yet, although it is vital that we continue working to better understand and address the problems we are facing.\nOne of our reviewers for this book, Fred Monroe, used to work in hedge fund trading. He told us, after reading this chapter, that many of the issues discussed here (distribution of data being dramatically different than what a model was trained on, the impact feedback loops on a model once deployed and at scale, and so forth) were also key issues for building profitable trading models. The kinds of things you need to do to consider societal consequences are going to have a lot of overlap with things you need to do to consider organizational, market, and customer consequences—so thinking carefully about ethics can also help you think carefully about how to make your data product successful more generally!",
+ "crumbs": [
+ "Getting Started",
+ "3*AI and Data Ethics*"
+ ]
+ },
+ {
+ "objectID": "book3.html#questionnaire",
+ "href": "book3.html#questionnaire",
+ "title": "3 AI and Data Ethics",
+ "section": "3.7 Questionnaire",
+ "text": "3.7 Questionnaire\n\nDoes ethics provide a list of “right answers”?\nHow can working with people of different backgrounds help when considering ethical questions?\nWhat was the role of IBM in Nazi Germany? Why did the company participate as it did? Why did the workers participate?\nWhat was the role of the first person jailed in the Volkswagen diesel scandal?\nWhat was the problem with a database of suspected gang members maintained by California law enforcement officials?\nWhy did YouTube’s recommendation algorithm recommend videos of partially clothed children to pedophiles, even though no employee at Google had programmed this feature?\nWhat are the problems with the centrality of metrics?\nWhy did Meetup.com not include gender in its recommendation system for tech meetups?\nWhat are the six types of bias in machine learning, according to Suresh and Guttag?\nGive two examples of historical race bias in the US.\nWhere are most images in ImageNet from?\nIn the paper “Does Machine Learning Automate Moral Hazard and Error” why is sinusitis found to be predictive of a stroke?\nWhat is representation bias?\nHow are machines and people different, in terms of their use for making decisions?\nIs disinformation the same as “fake news”?\nWhy is disinformation through auto-generated text a particularly significant issue?\nWhat are the five ethical lenses described by the Markkula Center?\nWhere is policy an appropriate tool for addressing data ethics issues?\n\n\nFurther Research:\n\nRead the article “What Happens When an Algorithm Cuts Your Healthcare”. How could problems like this be avoided in the future?\nResearch to find out more about YouTube’s recommendation system and its societal impacts. Do you think recommendation systems must always have feedback loops with negative results? What approaches could Google take to avoid them? What about the government?\nRead the paper “Discrimination in Online Ad Delivery”. Do you think Google should be considered responsible for what happened to Dr. Sweeney? What would be an appropriate response?\nHow can a cross-disciplinary team help avoid negative consequences?\nRead the paper “Does Machine Learning Automate Moral Hazard and Error”. What actions do you think should be taken to deal with the issues identified in this paper?\nRead the article “How Will We Prevent AI-Based Forgery?” Do you think Etzioni’s proposed approach could work? Why?\nComplete the section “Analyze a Project You Are Working On” in this chapter.\nConsider whether your team could be more diverse. If so, what approaches might help?",
+ "crumbs": [
+ "Getting Started",
+ "3*AI and Data Ethics*"
+ ]
+ },
+ {
+ "objectID": "book3.html#deep-learning-in-practice-thats-a-wrap",
+ "href": "book3.html#deep-learning-in-practice-thats-a-wrap",
+ "title": "3 AI and Data Ethics",
+ "section": "3.8 Deep Learning in Practice: That’s a Wrap!",
+ "text": "3.8 Deep Learning in Practice: That’s a Wrap!\nCongratulations! You’ve made it to the end of the first section of the book. In this section we’ve tried to show you what deep learning can do, and how you can use it to create real applications and products. At this point, you will get a lot more out of the book if you spend some time trying out what you’ve learned. Perhaps you have already been doing this as you go along—in which case, great! If not, that’s no problem either… Now is a great time to start experimenting yourself.\nIf you haven’t been to the book’s website yet, head over there now. It’s really important that you get yourself set up to run the notebooks. Becoming an effective deep learning practitioner is all about practice, so you need to be training models. So, please go get the notebooks running now if you haven’t already! And also have a look on the website for any important updates or notices; deep learning changes fast, and we can’t change the words that are printed in this book, so the website is where you need to look to ensure you have the most up-to-date information.\nMake sure that you have completed the following steps:\n\nConnect to one of the GPU Jupyter servers recommended on the book’s website.\nRun the first notebook yourself.\nUpload an image that you find in the first notebook; then try a few different images of different kinds to see what happens.\nRun the second notebook, collecting your own dataset based on image search queries that you come up with.\nThink about how you can use deep learning to help you with your own projects, including what kinds of data you could use, what kinds of problems may come up, and how you might be able to mitigate these issues in practice.\n\nIn the next section of the book you will learn about how and why deep learning works, instead of just seeing how you can use it in practice. Understanding the how and why is important for both practitioners and researchers, because in this fairly new field nearly every project requires some level of customization and debugging. The better you understand the foundations of deep learning, the better your models will be. These foundations are less important for executives, product managers, and so forth (although still useful, so feel free to keep reading!), but they are critical for anybody who is actually training and deploying models themselves.",
+ "crumbs": [
+ "Getting Started",
+ "3*AI and Data Ethics*"
+ ]
+ },
+ {
+ "objectID": "book5.html",
+ "href": "book5.html",
+ "title": "5 Image Classification",
+ "section": "",
+ "text": "Now that you understand what deep learning is, what it’s for, and how to create and deploy a model, it’s time for us to go deeper! In an ideal world deep learning practitioners wouldn’t have to know every detail of how things work under the hood… But as yet, we don’t live in an ideal world. The truth is, to make your model really work, and work reliably, there are a lot of details you have to get right, and a lot of details that you have to check. This process requires being able to look inside your neural network as it trains, and as it makes predictions, find possible problems, and know how to fix them.\nSo, from here on in the book we are going to do a deep dive into the mechanics of deep learning. What is the architecture of a computer vision model, an NLP model, a tabular model, and so on? How do you create an architecture that matches the needs of your particular domain? How do you get the best possible results from the training process? How do you make things faster? What do you have to change as your datasets change?\nWe will start by repeating the same basic applications that we looked at in the first chapter, but we are going to do two things:\n\nMake them better.\nApply them to a wider variety of types of data.\n\nIn order to do these two things, we will have to learn all of the pieces of the deep learning puzzle. This includes different types of layers, regularization methods, optimizers, how to put layers together into architectures, labeling techniques, and much more. We are not just going to dump all of these things on you, though; we will introduce them progressively as needed, to solve actual problems related to the projects we are working on.\n\nThis is just a preview of this chapter. The rest of this chapter is not available here, but you read the source notebook which has the same content (but with less nice formatting).",
+ "crumbs": [
+ "The Details",
+ "5*Image Classification*"
+ ]
+ },
+ {
+ "objectID": "book6.html",
+ "href": "book6.html",
+ "title": "6 Other Computer Vision Problems",
+ "section": "",
+ "text": "6.1 Multi-Label Classification\nMulti-label classification refers to the problem of identifying the categories of objects in images that may not contain exactly one type of object. There may be more than one kind of object, or there may be no objects at all in the classes that you are looking for.\nFor instance, this would have been a great approach for our bear classifier. One problem with the bear classifier that we rolled out in Chapter 2 was that if a user uploaded something that wasn’t any kind of bear, the model would still say it was either a grizzly, black, or teddy bear—it had no ability to predict “not a bear at all.” In fact, after we have completed this chapter, it would be a great exercise for you to go back to your image classifier application, and try to retrain it using the multi-label technique, then test it by passing in an image that is not of any of your recognized classes.\nIn practice, we have not seen many examples of people training multi-label classifiers for this purpose—but we very often see both users and developers complaining about this problem. It appears that this simple solution is not at all widely understood or appreciated! Because in practice it is probably more common to have some images with zero matches or more than one match, we should probably expect in practice that multi-label classifiers are more widely applicable than single-label classifiers.\nFirst, let’s see what a multi-label dataset looks like, then we’ll explain how to get it ready for our model. You’ll see that the architecture of the model does not change from the last chapter; only the loss function does. Let’s start with the data.\nThis is just a preview of this chapter. The rest of this chapter is not available here, but you read the source notebook which has the same content (but with less nice formatting).",
+ "crumbs": [
+ "The Details",
+ "6*Other Computer Vision Problems*"
+ ]
+ },
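The multi-label preview above notes that the architecture from the previous chapter carries over and only the loss function changes. Here is a minimal sketch of that point in plain PyTorch (not the fastai implementation; the tensor values and the three-class setup are made up for illustration): single-label targets are class indices scored with cross-entropy, while multi-label targets are independent 0/1 indicators per class scored with BCEWithLogitsLoss, which naturally handles the "no class at all" case.

import torch
import torch.nn as nn

# Raw scores ("logits") from the same final layer, for 3 classes on a batch of 2 images.
logits = torch.tensor([[2.0, -1.0, 0.5],
                       [0.1,  0.3, -2.0]])

# Single-label: one class index per image; CrossEntropyLoss applies a softmax across classes.
single_targets = torch.tensor([0, 1])
ce_loss = nn.CrossEntropyLoss()(logits, single_targets)

# Multi-label: one independent 0/1 indicator per class; BCEWithLogitsLoss applies a sigmoid
# to each score separately, so an image can match several classes, or none at all.
multi_targets = torch.tensor([[1., 0., 1.],
                              [0., 0., 0.]])   # second image: "not a bear at all"
bce_loss = nn.BCEWithLogitsLoss()(logits, multi_targets)

print(ce_loss.item(), bce_loss.item())
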
+ {
+ "objectID": "book7.html",
+ "href": "book7.html",
+ "title": "7 Training a State-of-the-Art Model",
+ "section": "",
+ "text": "7.1 Imagenette\nWhen fast.ai first started there were three main datasets that people used for building and testing computer vision models:\nThe problem was that the smaller datasets didn’t actually generalize effectively to the large ImageNet dataset. The approaches that worked well on ImageNet generally had to be developed and trained on ImageNet. This led to many people believing that only researchers with access to giant computing resources could effectively contribute to developing image classification algorithms.\nWe thought that seemed very unlikely to be true. We had never actually seen a study that showed that ImageNet happen to be exactly the right size, and that other datasets could not be developed which would provide useful insights. So we thought we would try to create a new dataset that researchers could test their algorithms on quickly and cheaply, but which would also provide insights likely to work on the full ImageNet dataset.\nAbout three hours later we had created Imagenette. We selected 10 classes from the full ImageNet that looked very different from one another. As we had hoped, we were able to quickly and cheaply create a classifier capable of recognizing these classes. We then tried out a few algorithmic tweaks to see how they impacted Imagenette. We found some that worked pretty well, and tested them on ImageNet as well—and we were very pleased to find that our tweaks worked well on ImageNet too!\nThere is an important message here: the dataset you get given is not necessarily the dataset you want. It’s particularly unlikely to be the dataset that you want to do your development and prototyping in. You should aim to have an iteration speed of no more than a couple of minutes—that is, when you come up with a new idea you want to try out, you should be able to train a model and see how it goes within a couple of minutes. If it’s taking longer to do an experiment, think about how you could cut down your dataset, or simplify your model, to improve your experimentation speed. The more experiments you can do, the better!\nThis is just a preview of this chapter. The rest of this chapter is not available here, but you read the source notebook which has the same content (but with less nice formatting).",
+ "crumbs": [
+ "The Details",
+ "7*Training a State-of-the-Art Model*"
+ ]
+ },
+ {
+ "objectID": "book7.html#imagenette",
+ "href": "book7.html#imagenette",
+ "title": "7 Training a State-of-the-Art Model",
+ "section": "",
+ "text": "ImageNet:: 1.3 million images of various sizes around 500 pixels across, in 1,000 categories, which took a few days to train\nMNIST:: 50,000 28×28-pixel grayscale handwritten digits\nCIFAR10:: 60,000 32×32-pixel color images in 10 classes",
+ "crumbs": [
+ "The Details",
+ "7*Training a State-of-the-Art Model*"
+ ]
+ },
+ {
+ "objectID": "book8.html",
+ "href": "book8.html",
+ "title": "8 Collaborative Filtering Deep Dive",
+ "section": "",
+ "text": "One very common problem to solve is when you have a number of users and a number of products, and you want to recommend which products are most likely to be useful for which users. There are many variations of this: for example, recommending movies (such as on Netflix), figuring out what to highlight for a user on a home page, deciding what stories to show in a social media feed, and so forth. There is a general solution to this problem, called collaborative filtering, which works like this: look at what products the current user has used or liked, find other users that have used or liked similar products, and then recommend other products that those users have used or liked.\nFor example, on Netflix you may have watched lots of movies that are science fiction, full of action, and were made in the 1970s. Netflix may not know these particular properties of the films you have watched, but it will be able to see that other people that have watched the same movies that you watched also tended to watch other movies that are science fiction, full of action, and were made in the 1970s. In other words, to use this approach we don’t necessarily need to know anything about the movies, except who like to watch them.\nThere is actually a more general class of problems that this approach can solve, not necessarily involving users and products. Indeed, for collaborative filtering we more commonly refer to items, rather than products. Items could be links that people click on, diagnoses that are selected for patients, and so forth.\nThe key foundational idea is that of latent factors. In the Netflix example, we started with the assumption that you like old, action-packed sci-fi movies. But you never actually told Netflix that you like these kinds of movies. And Netflix never actually needed to add columns to its movies table saying which movies are of these types. Still, there must be some underlying concept of sci-fi, action, and movie age, and these concepts must be relevant for at least some people’s movie watching decisions.\nFor this chapter we are going to work on this movie recommendation problem. We’ll start by getting some data suitable for a collaborative filtering model.\n\nThis is just a preview of this chapter. The rest of this chapter is not available here, but you read the source notebook which has the same content (but with less nice formatting).",
+ "crumbs": [
+ "The Details",
+ "8*Collaborative Filtering Deep Dive*"
+ ]
+ },
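To make the latent-factors idea above concrete, here is a minimal dot-product sketch in PyTorch. The sizes and names are made up for illustration; this is the bare idea, not the model developed in the chapter itself: each user and each item gets a small learnable vector, and the predicted affinity is simply the dot product of the two.

import torch
import torch.nn as nn

n_users, n_movies, n_factors = 100, 50, 5            # illustrative sizes only

user_factors  = nn.Embedding(n_users,  n_factors)    # one learnable vector per user
movie_factors = nn.Embedding(n_movies, n_factors)    # one learnable vector per movie

def predict(user_ids, movie_ids):
    # Affinity = dot product of the user's factors with the movie's factors.
    return (user_factors(user_ids) * movie_factors(movie_ids)).sum(dim=1)

users  = torch.tensor([0, 1, 2])
movies = torch.tensor([10, 3, 7])
print(predict(users, movies))    # one predicted score per (user, movie) pair

Training would then nudge these vectors so that the dot products match observed ratings, which is how concepts like "sci-fi" or "action" can emerge as latent factors without ever being labeled.
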
+ {
+ "objectID": "book9.html",
+ "href": "book9.html",
+ "title": "9 Tabular Modeling Deep Dive",
+ "section": "",
+ "text": "9.1 Categorical Embeddings\nIn tabular data some columns may contain numerical data, like “age,” while others contain string values, like “sex.” The numerical data can be directly fed to the model (with some optional preprocessing), but the other columns need to be converted to numbers. Since the values in those correspond to different categories, we often call this type of variables categorical variables. The first type are called continuous variables.\nAt the end of 2015, the Rossmann sales competition ran on Kaggle. Competitors were given a wide range of information about various stores in Germany, and were tasked with trying to predict sales on a number of days. The goal was to help the company to manage stock properly and be able to satisfy demand without holding unnecessary inventory. The official training set provided a lot of information about the stores. It was also permitted for competitors to use additional data, as long as that data was made public and available to all participants.\nThis is just a preview of this chapter. The rest of this chapter is not available here, but you read the source notebook which has the same content (but with less nice formatting).",
+ "crumbs": [
+ "The Details",
+ "9*Tabular Modeling Deep Dive*"
+ ]
+ },
+ {
+ "objectID": "book9.html#categorical-embeddings",
+ "href": "book9.html#categorical-embeddings",
+ "title": "9 Tabular Modeling Deep Dive",
+ "section": "",
+ "text": "jargon: Continuous and Categorical Variables: Continuous variables are numerical data, such as “age,” that can be directly fed to the model, since you can add and multiply them directly. Categorical variables contain a number of discrete levels, such as “movie ID,” for which addition and multiplication don’t have meaning (even if they’re stored as numbers).",
+ "crumbs": [
+ "The Details",
+ "9*Tabular Modeling Deep Dive*"
+ ]
+ },
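Since addition and multiplication on raw category codes are meaningless, the usual trick (and the one this chapter's title refers to) is to map each code to a small learnable vector with an embedding layer. A minimal PyTorch sketch follows; the sizes and the "store" column are made-up examples, not the chapter's actual preprocessing.

import torch
import torch.nn as nn

n_levels, emb_dim = 1000, 10                 # e.g., a "store ID" column with 1,000 levels
store_emb = nn.Embedding(n_levels, emb_dim)  # one learnable 10-dimensional vector per level

store_codes = torch.tensor([0, 4, 998])      # a mini-batch of integer category codes
store_vecs  = store_emb(store_codes)         # shape (3, 10): continuous, trainable inputs

# The embedded categorical columns can be concatenated with the continuous columns
# (such as a normalized "age") and fed to ordinary linear layers.
continuous  = torch.randn(3, 2)
model_input = torch.cat([store_vecs, continuous], dim=1)   # shape (3, 12)
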
+ {
+ "objectID": "book10.html",
+ "href": "book10.html",
+ "title": "10 NLP Deep Dive: RNNs",
+ "section": "",
+ "text": "In Chapter 1 we saw that deep learning can be used to get great results with natural language datasets. Our example relied on using a pretrained language model and fine-tuning it to classify reviews. That example highlighted a difference between transfer learning in NLP and computer vision: in general in NLP the pretrained model is trained on a different task.\nWhat we call a language model is a model that has been trained to guess what the next word in a text is (having read the ones before). This kind of task is called self-supervised learning: we do not need to give labels to our model, just feed it lots and lots of texts. It has a process to automatically get labels from the data, and this task isn’t trivial: to properly guess the next word in a sentence, the model will have to develop an understanding of the English (or other) language. Self-supervised learning can also be used in other domains; for instance, see “Self-Supervised Learning and Computer Vision” for an introduction to vision applications. Self-supervised learning is not usually used for the model that is trained directly, but instead is used for pretraining a model used for transfer learning.\n\njargon: Self-supervised learning: Training a model using labels that are embedded in the independent variable, rather than requiring external labels. For instance, training a model to predict the next word in a text.\n\nThe language model we used in Chapter 1 to classify IMDb reviews was pretrained on Wikipedia. We got great results by directly fine-tuning this language model to a movie review classifier, but with one extra step, we can do even better. The Wikipedia English is slightly different from the IMDb English, so instead of jumping directly to the classifier, we could fine-tune our pretrained language model to the IMDb corpus and then use that as the base for our classifier.\nEven if our language model knows the basics of the language we are using in the task (e.g., our pretrained model is in English), it helps to get used to the style of the corpus we are targeting. It may be more informal language, or more technical, with new words to learn or different ways of composing sentences. In the case of the IMDb dataset, there will be lots of names of movie directors and actors, and often a less formal style of language than that seen in Wikipedia.\nWe already saw that with fastai, we can download a pretrained English language model and use it to get state-of-the-art results for NLP classification. (We expect pretrained models in many more languages to be available soon—they might well be available by the time you are reading this book, in fact.) So, why are we learning how to train a language model in detail?\nOne reason, of course, is that it is helpful to understand the foundations of the models that you are using. But there is another very practical reason, which is that you get even better results if you fine-tune the (sequence-based) language model prior to fine-tuning the classification model. For instance, for the IMDb sentiment analysis task, the dataset includes 50,000 additional movie reviews that do not have any positive or negative labels attached. Since there are 25,000 labeled reviews in the training set and 25,000 in the validation set, that makes 100,000 movie reviews altogether. 
We can use all of these reviews to fine-tune the pretrained language model, which was trained only on Wikipedia articles; this will result in a language model that is particularly good at predicting the next word of a movie review.\n\nThis is just a preview of this chapter. The rest of this chapter is not available here, but you can read the source notebook which has the same content (but with less nice formatting).",
+ "crumbs": [
+ "The Details",
+ "10*NLP Deep Dive: RNNs*"
+ ]
+ },
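The three stages described above (a Wikipedia-pretrained language model, fine-tuning it on the IMDb text, then training the classifier on top of that encoder) can be compressed into a short fastai sketch. Treat this as an outline under assumptions rather than the chapter's actual code: the epoch counts, the encoder filename, and the exact DataLoaders arguments are placeholders.

from fastai.text.all import *

path = untar_data(URLs.IMDB)

# Fine-tune the Wikipedia-pretrained AWD_LSTM language model on IMDb text (no labels needed).
dls_lm = TextDataLoaders.from_folder(path, is_lm=True, valid_pct=0.1)
learn_lm = language_model_learner(dls_lm, AWD_LSTM, metrics=accuracy)
learn_lm.fine_tune(1)                     # placeholder epoch count
learn_lm.save_encoder('imdb_encoder')     # keep everything except the next-word prediction head

# Train the sentiment classifier on top of the fine-tuned encoder.
dls_clas = TextDataLoaders.from_folder(path, valid='test', text_vocab=dls_lm.vocab)
learn_clas = text_classifier_learner(dls_clas, AWD_LSTM, metrics=accuracy)
learn_clas.load_encoder('imdb_encoder')
learn_clas.fine_tune(1)                   # placeholder epoch count
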
+ {
+ "objectID": "book11.html",
+ "href": "book11.html",
+ "title": "11 Data Munging with fastai’s Mid-Level API",
+ "section": "",
+ "text": "11.1 Going Deeper into fastai’s Layered API\nThe fastai library is built on a layered API. In the very top layer there are applications that allow us to train a model in five lines of codes, as we saw in Chapter 1. In the case of creating DataLoaders for a text classifier, for instance, we used the line:\nThe factory method TextDataLoaders.from_folder is very convenient when your data is arranged the exact same way as the IMDb dataset, but in practice, that often won’t be the case. The data block API offers more flexibility.\nThis is just a preview of this chapter. The rest of this chapter is not available here, but you read the source notebook which has the same content (but with less nice formatting).",
+ "crumbs": [
+ "From the Foundations",
+ "11*Data Munging with fastai's Mid-Level API*"
+ ]
+ },
+ {
+ "objectID": "book11.html#going-deeper-into-fastais-layered-api",
+ "href": "book11.html#going-deeper-into-fastais-layered-api",
+ "title": "11 Data Munging with fastai’s Mid-Level API",
+ "section": "",
+ "text": "from fastai.text.all import *\n\ndls = TextDataLoaders.from_folder(untar_data(URLs.IMDB), valid='test')",
+ "crumbs": [
+ "From the Foundations",
+ "11*Data Munging with fastai's Mid-Level API*"
+ ]
+ },
+ {
+ "objectID": "book12.html",
+ "href": "book12.html",
+ "title": "12 A Language Model from Scratch",
+ "section": "",
+ "text": "12.1 The Data\nWhenever we start working on a new problem, we always first try to think of the simplest dataset we can that will allow us to try out methods quickly and easily, and interpret the results. When we started working on language modeling a few years ago we didn’t find any datasets that would allow for quick prototyping, so we made one. We call it Human Numbers, and it simply contains the first 10,000 numbers written out in English.\nThis is just a preview of this chapter. The rest of this chapter is not available here, but you read the source notebook which has the same content (but with less nice formatting).",
+ "crumbs": [
+ "From the Foundations",
+ "12*A Language Model from Scratch*"
+ ]
+ },
+ {
+ "objectID": "book12.html#the-data",
+ "href": "book12.html#the-data",
+ "title": "12 A Language Model from Scratch",
+ "section": "",
+ "text": "j: One of the most common practical mistakes I see even amongst highly experienced practitioners is failing to use appropriate datasets at appropriate times during the analysis process. In particular, most people tend to start with datasets that are too big and too complicated.",
+ "crumbs": [
+ "From the Foundations",
+ "12*A Language Model from Scratch*"
+ ]
+ },
+ {
+ "objectID": "book15.html",
+ "href": "book15.html",
+ "title": "15 Application Architectures Deep Dive",
+ "section": "",
+ "text": "15.1 Computer Vision\nFor computer vision application we use the functions vision_learner and unet_learner to build our models, depending on the task. In this section we’ll explore how to build the Learner objects we used in Parts 1 and 2 of this book.",
+ "crumbs": [
+ "From the Foundations",
+ "15*Application Architectures Deep Dive*"
+ ]
+ },
+ {
+ "objectID": "book15.html#computer-vision",
+ "href": "book15.html#computer-vision",
+ "title": "15 Application Architectures Deep Dive",
+ "section": "",
+ "text": "vision_learner\nLet’s take a look at what happens when we use the vision_learner function. We begin by passing this function an architecture to use for the body of the network. Most of the time we use a ResNet, which you already know how to create, so we don’t need to delve into that any further. Pretrained weights are downloaded as required and loaded into the ResNet.\nThen, for transfer learning, the network needs to be cut. This refers to slicing off the final layer, which is only responsible for ImageNet-specific categorization. In fact, we do not slice off only this layer, but everything from the adaptive average pooling layer onwards. The reason for this will become clear in just a moment. Since different architectures might use different types of pooling layers, or even completely different kinds of heads, we don’t just search for the adaptive pooling layer to decide where to cut the pretrained model. Instead, we have a dictionary of information that is used for each model to determine where its body ends, and its head starts.\n\nThis is just a preview of this chapter. The rest of this chapter is not available here, but you read the source notebook which has the same content (but with less nice formatting).",
+ "crumbs": [
+ "From the Foundations",
+ "15*Application Architectures Deep Dive*"
+ ]
+ },
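The cut described above is plain model surgery, so a hand-rolled illustration with torchvision's ResNet-34 shows the idea; this is not fastai's actual implementation (which, as noted, consults a per-architecture dictionary and builds a fancier head), and the class count and head layout are arbitrary choices for the sketch.

import torch
import torch.nn as nn
from torchvision.models import resnet34

pretrained = resnet34(weights='DEFAULT')   # downloads ImageNet weights

# "Cut" the model: drop everything from the adaptive average pooling layer onwards,
# i.e., the last two children (avgpool and the ImageNet-specific final linear layer).
body = nn.Sequential(*list(pretrained.children())[:-2])

# Attach a new head for, say, a 10-class problem (fastai's real head is more elaborate).
head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(512, 10))
model = nn.Sequential(body, head)

print(model(torch.randn(1, 3, 224, 224)).shape)   # torch.Size([1, 10])
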
+ {
+ "objectID": "book18.html",
+ "href": "book18.html",
+ "title": "18 CNN Interpretation with CAM",
+ "section": "",
+ "text": "18.1 CAM and Hooks\nThe class activation map (CAM) was introduced by Bolei Zhou et al. in “Learning Deep Features for Discriminative Localization”. It uses the output of the last convolutional layer (just before the average pooling layer) together with the predictions to give us a heatmap visualization of why the model made its decision. This is a useful tool for interpretation.\nMore precisely, at each position of our final convolutional layer, we have as many filters as in the last linear layer. We can therefore compute the dot product of those activations with the final weights to get, for each location on our feature map, the score of the feature that was used to make a decision.\nWe’re going to need a way to get access to the activations inside the model while it’s training. In PyTorch this can be done with a hook. Hooks are PyTorch’s equivalent of fastai’s callbacks. However, rather than allowing you to inject code into the training loop like a fastai Learner callback, hooks allow you to inject code into the forward and backward calculations themselves. We can attach a hook to any layer of the model, and it will be executed when we compute the outputs (forward hook) or during backpropagation (backward hook). A forward hook is a function that takes three things—a module, its input, and its output—and it can perform any behavior you want. (fastai also provides a handy HookCallback that we won’t cover here, but take a look at the fastai docs; it makes working with hooks a little easier.)\nThis is just a preview of this chapter. The rest of this chapter is not available here, but you read the source notebook which has the same content (but with less nice formatting).",
+ "crumbs": [
+ "From the Foundations",
+ "18*CNN Interpretation with CAM*"
+ ]
+ },
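The hook mechanism described above is standard PyTorch, so a bare-bones version is easy to show. This sketch (the model and layer choice are just for illustration, and it is not fastai's HookCallback) stashes the activations of a ResNet's last convolutional block during a forward pass.

import torch
from torchvision.models import resnet34

model = resnet34(weights=None).eval()
stored = {}

# A forward hook receives three things: the module, its input (a tuple), and its output.
def hook_fn(module, inp, outp):
    stored['acts'] = outp.detach()

# Attach the hook to the last convolutional block, just before the average pooling layer.
handle = model.layer4.register_forward_hook(hook_fn)

with torch.no_grad():
    _ = model(torch.randn(1, 3, 224, 224))

print(stored['acts'].shape)   # torch.Size([1, 512, 7, 7])
handle.remove()               # remove hooks once you no longer need them

For CAM you would then take the dot product of these stored activations with the weights of the final linear layer to get a per-location score, as described in the text above.
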
+ {
+ "objectID": "book19.html",
+ "href": "book19.html",
+ "title": "19 A fastai Learner from Scratch",
+ "section": "",
+ "text": "This final chapter (other than the conclusion and the online chapters) is going to look a bit different. It contains far more code and far less prose than the previous chapters. We will introduce new Python keywords and libraries without discussing them. This chapter is meant to be the start of a significant research project for you. You see, we are going to implement many of the key pieces of the fastai and PyTorch APIs from scratch, building on nothing other than the components that we developed in Chapter 17! The key goal here is to end up with your own Learner class, and some callbacks—enough to be able to train a model on Imagenette, including examples of each of the key techniques we’ve studied. On the way to building Learner, we will create our own version of Module, Parameter, and parallel DataLoader so you have a very good idea of what those PyTorch classes do.\nThe end-of-chapter questionnaire is particularly important for this chapter. This is where we will be pointing you in the many interesting directions that you could take, using this chapter as your starting point. We suggest that you follow along with this chapter on your computer, and do lots of experiments, web searches, and whatever else you need to understand what’s going on. You’ve built up the skills and expertise to do this in the rest of this book, so we think you are going to do great!\nLet’s begin by gathering (manually) some data.\n\nThis is just a preview of this chapter. The rest of this chapter is not available here, but you read the source notebook which has the same content (but with less nice formatting).",
+ "crumbs": [
+ "From the Foundations",
+ "19*A fastai Learner from Scratch*"
+ ]
+ },
+ {
+ "objectID": "book20.html",
+ "href": "book20.html",
+ "title": "20 Concluding Thoughts",
+ "section": "",
+ "text": "Congratulations! You’ve made it! If you have worked through all of the notebooks to this point, then you have joined the small, but growing group of people that are able to harness the power of deep learning to solve real problems. You may not feel that way yet—in fact you probably don’t. We have seen again and again that students that complete the fast.ai courses dramatically underestimate how effective they are as deep learning practitioners. We’ve also seen that these people are often underestimated by others with a classic academic background. So if you are to rise above your own expectations and the expectations of others, what you do next, after closing this book, is even more important than what you’ve done to get to this point.\nThe most important thing is to keep the momentum going. In fact, as you know from your study of optimizers, momentum is something that can build upon itself! So think about what you can do now to maintain and accelerate your deep learning journey. Figure 20.1 can give you a few ideas.\n\n\n\n\n\n\nFigure 20.1: What to do next\n\n\n\n\nThis is just a preview of this chapter. The rest of this chapter is not available here, but you read the source notebook which has the same content (but with less nice formatting).",
+ "crumbs": [
+ "From the Foundations",
+ "20*Concluding Thoughts*"
+ ]
+ }
+]
\ No newline at end of file
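The chapter 18 preview above describes PyTorch forward hooks and the CAM dot product in prose. Below is a minimal sketch of that mechanism, not part of the patch itself: it assumes a recent torchvision ResNet-18 and a random input batch, so the names model, layer4, fc, and the fake image are illustrative assumptions rather than code taken from the book's notebooks.

import torch
from torchvision.models import resnet18   # assumes a recent torchvision

model = resnet18(weights=None).eval()      # untrained net, just to show the mechanics
x = torch.randn(1, 3, 224, 224)            # one fake RGB image

stored = {}
def hook_fn(module, inp, outp):
    # a forward hook receives the module, its input, and its output
    stored["acts"] = outp.detach()

handle = model.layer4.register_forward_hook(hook_fn)   # last conv block of ResNet-18
with torch.no_grad():
    preds = model(x)
handle.remove()                                         # always remove hooks when done

acts = stored["acts"][0]                                # (512, 7, 7) activations
w = model.fc.weight[preds[0].argmax()]                  # final linear weights of the top class
cam = torch.einsum("chw,c->hw", acts, w)                # per-location class score: the CAM
print(cam.shape)                                        # torch.Size([7, 7])

The register/remove bookkeeping shown here is exactly the kind of housekeeping the HookCallback mentioned in the preview is meant to handle for you.
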
diff --git a/site_libs/bootstrap/bootstrap-icons.css b/site_libs/bootstrap/bootstrap-icons.css
new file mode 100644
index 0000000..285e444
--- /dev/null
+++ b/site_libs/bootstrap/bootstrap-icons.css
@@ -0,0 +1,2078 @@
+/*!
+ * Bootstrap Icons v1.11.1 (https://icons.getbootstrap.com/)
+ * Copyright 2019-2023 The Bootstrap Authors
+ * Licensed under MIT (https://github.com/twbs/icons/blob/main/LICENSE)
+ */
+
+@font-face {
+ font-display: block;
+ font-family: "bootstrap-icons";
+ src:
+url("./bootstrap-icons.woff?2820a3852bdb9a5832199cc61cec4e65") format("woff");
+}
+
+.bi::before,
+[class^="bi-"]::before,
+[class*=" bi-"]::before {
+ display: inline-block;
+ font-family: bootstrap-icons !important;
+ font-style: normal;
+ font-weight: normal !important;
+ font-variant: normal;
+ text-transform: none;
+ line-height: 1;
+ vertical-align: -.125em;
+ -webkit-font-smoothing: antialiased;
+ -moz-osx-font-smoothing: grayscale;
+}
+
+.bi-123::before { content: "\f67f"; }
+.bi-alarm-fill::before { content: "\f101"; }
+.bi-alarm::before { content: "\f102"; }
+.bi-align-bottom::before { content: "\f103"; }
+.bi-align-center::before { content: "\f104"; }
+.bi-align-end::before { content: "\f105"; }
+.bi-align-middle::before { content: "\f106"; }
+.bi-align-start::before { content: "\f107"; }
+.bi-align-top::before { content: "\f108"; }
+.bi-alt::before { content: "\f109"; }
+.bi-app-indicator::before { content: "\f10a"; }
+.bi-app::before { content: "\f10b"; }
+.bi-archive-fill::before { content: "\f10c"; }
+.bi-archive::before { content: "\f10d"; }
+.bi-arrow-90deg-down::before { content: "\f10e"; }
+.bi-arrow-90deg-left::before { content: "\f10f"; }
+.bi-arrow-90deg-right::before { content: "\f110"; }
+.bi-arrow-90deg-up::before { content: "\f111"; }
+.bi-arrow-bar-down::before { content: "\f112"; }
+.bi-arrow-bar-left::before { content: "\f113"; }
+.bi-arrow-bar-right::before { content: "\f114"; }
+.bi-arrow-bar-up::before { content: "\f115"; }
+.bi-arrow-clockwise::before { content: "\f116"; }
+.bi-arrow-counterclockwise::before { content: "\f117"; }
+.bi-arrow-down-circle-fill::before { content: "\f118"; }
+.bi-arrow-down-circle::before { content: "\f119"; }
+.bi-arrow-down-left-circle-fill::before { content: "\f11a"; }
+.bi-arrow-down-left-circle::before { content: "\f11b"; }
+.bi-arrow-down-left-square-fill::before { content: "\f11c"; }
+.bi-arrow-down-left-square::before { content: "\f11d"; }
+.bi-arrow-down-left::before { content: "\f11e"; }
+.bi-arrow-down-right-circle-fill::before { content: "\f11f"; }
+.bi-arrow-down-right-circle::before { content: "\f120"; }
+.bi-arrow-down-right-square-fill::before { content: "\f121"; }
+.bi-arrow-down-right-square::before { content: "\f122"; }
+.bi-arrow-down-right::before { content: "\f123"; }
+.bi-arrow-down-short::before { content: "\f124"; }
+.bi-arrow-down-square-fill::before { content: "\f125"; }
+.bi-arrow-down-square::before { content: "\f126"; }
+.bi-arrow-down-up::before { content: "\f127"; }
+.bi-arrow-down::before { content: "\f128"; }
+.bi-arrow-left-circle-fill::before { content: "\f129"; }
+.bi-arrow-left-circle::before { content: "\f12a"; }
+.bi-arrow-left-right::before { content: "\f12b"; }
+.bi-arrow-left-short::before { content: "\f12c"; }
+.bi-arrow-left-square-fill::before { content: "\f12d"; }
+.bi-arrow-left-square::before { content: "\f12e"; }
+.bi-arrow-left::before { content: "\f12f"; }
+.bi-arrow-repeat::before { content: "\f130"; }
+.bi-arrow-return-left::before { content: "\f131"; }
+.bi-arrow-return-right::before { content: "\f132"; }
+.bi-arrow-right-circle-fill::before { content: "\f133"; }
+.bi-arrow-right-circle::before { content: "\f134"; }
+.bi-arrow-right-short::before { content: "\f135"; }
+.bi-arrow-right-square-fill::before { content: "\f136"; }
+.bi-arrow-right-square::before { content: "\f137"; }
+.bi-arrow-right::before { content: "\f138"; }
+.bi-arrow-up-circle-fill::before { content: "\f139"; }
+.bi-arrow-up-circle::before { content: "\f13a"; }
+.bi-arrow-up-left-circle-fill::before { content: "\f13b"; }
+.bi-arrow-up-left-circle::before { content: "\f13c"; }
+.bi-arrow-up-left-square-fill::before { content: "\f13d"; }
+.bi-arrow-up-left-square::before { content: "\f13e"; }
+.bi-arrow-up-left::before { content: "\f13f"; }
+.bi-arrow-up-right-circle-fill::before { content: "\f140"; }
+.bi-arrow-up-right-circle::before { content: "\f141"; }
+.bi-arrow-up-right-square-fill::before { content: "\f142"; }
+.bi-arrow-up-right-square::before { content: "\f143"; }
+.bi-arrow-up-right::before { content: "\f144"; }
+.bi-arrow-up-short::before { content: "\f145"; }
+.bi-arrow-up-square-fill::before { content: "\f146"; }
+.bi-arrow-up-square::before { content: "\f147"; }
+.bi-arrow-up::before { content: "\f148"; }
+.bi-arrows-angle-contract::before { content: "\f149"; }
+.bi-arrows-angle-expand::before { content: "\f14a"; }
+.bi-arrows-collapse::before { content: "\f14b"; }
+.bi-arrows-expand::before { content: "\f14c"; }
+.bi-arrows-fullscreen::before { content: "\f14d"; }
+.bi-arrows-move::before { content: "\f14e"; }
+.bi-aspect-ratio-fill::before { content: "\f14f"; }
+.bi-aspect-ratio::before { content: "\f150"; }
+.bi-asterisk::before { content: "\f151"; }
+.bi-at::before { content: "\f152"; }
+.bi-award-fill::before { content: "\f153"; }
+.bi-award::before { content: "\f154"; }
+.bi-back::before { content: "\f155"; }
+.bi-backspace-fill::before { content: "\f156"; }
+.bi-backspace-reverse-fill::before { content: "\f157"; }
+.bi-backspace-reverse::before { content: "\f158"; }
+.bi-backspace::before { content: "\f159"; }
+.bi-badge-3d-fill::before { content: "\f15a"; }
+.bi-badge-3d::before { content: "\f15b"; }
+.bi-badge-4k-fill::before { content: "\f15c"; }
+.bi-badge-4k::before { content: "\f15d"; }
+.bi-badge-8k-fill::before { content: "\f15e"; }
+.bi-badge-8k::before { content: "\f15f"; }
+.bi-badge-ad-fill::before { content: "\f160"; }
+.bi-badge-ad::before { content: "\f161"; }
+.bi-badge-ar-fill::before { content: "\f162"; }
+.bi-badge-ar::before { content: "\f163"; }
+.bi-badge-cc-fill::before { content: "\f164"; }
+.bi-badge-cc::before { content: "\f165"; }
+.bi-badge-hd-fill::before { content: "\f166"; }
+.bi-badge-hd::before { content: "\f167"; }
+.bi-badge-tm-fill::before { content: "\f168"; }
+.bi-badge-tm::before { content: "\f169"; }
+.bi-badge-vo-fill::before { content: "\f16a"; }
+.bi-badge-vo::before { content: "\f16b"; }
+.bi-badge-vr-fill::before { content: "\f16c"; }
+.bi-badge-vr::before { content: "\f16d"; }
+.bi-badge-wc-fill::before { content: "\f16e"; }
+.bi-badge-wc::before { content: "\f16f"; }
+.bi-bag-check-fill::before { content: "\f170"; }
+.bi-bag-check::before { content: "\f171"; }
+.bi-bag-dash-fill::before { content: "\f172"; }
+.bi-bag-dash::before { content: "\f173"; }
+.bi-bag-fill::before { content: "\f174"; }
+.bi-bag-plus-fill::before { content: "\f175"; }
+.bi-bag-plus::before { content: "\f176"; }
+.bi-bag-x-fill::before { content: "\f177"; }
+.bi-bag-x::before { content: "\f178"; }
+.bi-bag::before { content: "\f179"; }
+.bi-bar-chart-fill::before { content: "\f17a"; }
+.bi-bar-chart-line-fill::before { content: "\f17b"; }
+.bi-bar-chart-line::before { content: "\f17c"; }
+.bi-bar-chart-steps::before { content: "\f17d"; }
+.bi-bar-chart::before { content: "\f17e"; }
+.bi-basket-fill::before { content: "\f17f"; }
+.bi-basket::before { content: "\f180"; }
+.bi-basket2-fill::before { content: "\f181"; }
+.bi-basket2::before { content: "\f182"; }
+.bi-basket3-fill::before { content: "\f183"; }
+.bi-basket3::before { content: "\f184"; }
+.bi-battery-charging::before { content: "\f185"; }
+.bi-battery-full::before { content: "\f186"; }
+.bi-battery-half::before { content: "\f187"; }
+.bi-battery::before { content: "\f188"; }
+.bi-bell-fill::before { content: "\f189"; }
+.bi-bell::before { content: "\f18a"; }
+.bi-bezier::before { content: "\f18b"; }
+.bi-bezier2::before { content: "\f18c"; }
+.bi-bicycle::before { content: "\f18d"; }
+.bi-binoculars-fill::before { content: "\f18e"; }
+.bi-binoculars::before { content: "\f18f"; }
+.bi-blockquote-left::before { content: "\f190"; }
+.bi-blockquote-right::before { content: "\f191"; }
+.bi-book-fill::before { content: "\f192"; }
+.bi-book-half::before { content: "\f193"; }
+.bi-book::before { content: "\f194"; }
+.bi-bookmark-check-fill::before { content: "\f195"; }
+.bi-bookmark-check::before { content: "\f196"; }
+.bi-bookmark-dash-fill::before { content: "\f197"; }
+.bi-bookmark-dash::before { content: "\f198"; }
+.bi-bookmark-fill::before { content: "\f199"; }
+.bi-bookmark-heart-fill::before { content: "\f19a"; }
+.bi-bookmark-heart::before { content: "\f19b"; }
+.bi-bookmark-plus-fill::before { content: "\f19c"; }
+.bi-bookmark-plus::before { content: "\f19d"; }
+.bi-bookmark-star-fill::before { content: "\f19e"; }
+.bi-bookmark-star::before { content: "\f19f"; }
+.bi-bookmark-x-fill::before { content: "\f1a0"; }
+.bi-bookmark-x::before { content: "\f1a1"; }
+.bi-bookmark::before { content: "\f1a2"; }
+.bi-bookmarks-fill::before { content: "\f1a3"; }
+.bi-bookmarks::before { content: "\f1a4"; }
+.bi-bookshelf::before { content: "\f1a5"; }
+.bi-bootstrap-fill::before { content: "\f1a6"; }
+.bi-bootstrap-reboot::before { content: "\f1a7"; }
+.bi-bootstrap::before { content: "\f1a8"; }
+.bi-border-all::before { content: "\f1a9"; }
+.bi-border-bottom::before { content: "\f1aa"; }
+.bi-border-center::before { content: "\f1ab"; }
+.bi-border-inner::before { content: "\f1ac"; }
+.bi-border-left::before { content: "\f1ad"; }
+.bi-border-middle::before { content: "\f1ae"; }
+.bi-border-outer::before { content: "\f1af"; }
+.bi-border-right::before { content: "\f1b0"; }
+.bi-border-style::before { content: "\f1b1"; }
+.bi-border-top::before { content: "\f1b2"; }
+.bi-border-width::before { content: "\f1b3"; }
+.bi-border::before { content: "\f1b4"; }
+.bi-bounding-box-circles::before { content: "\f1b5"; }
+.bi-bounding-box::before { content: "\f1b6"; }
+.bi-box-arrow-down-left::before { content: "\f1b7"; }
+.bi-box-arrow-down-right::before { content: "\f1b8"; }
+.bi-box-arrow-down::before { content: "\f1b9"; }
+.bi-box-arrow-in-down-left::before { content: "\f1ba"; }
+.bi-box-arrow-in-down-right::before { content: "\f1bb"; }
+.bi-box-arrow-in-down::before { content: "\f1bc"; }
+.bi-box-arrow-in-left::before { content: "\f1bd"; }
+.bi-box-arrow-in-right::before { content: "\f1be"; }
+.bi-box-arrow-in-up-left::before { content: "\f1bf"; }
+.bi-box-arrow-in-up-right::before { content: "\f1c0"; }
+.bi-box-arrow-in-up::before { content: "\f1c1"; }
+.bi-box-arrow-left::before { content: "\f1c2"; }
+.bi-box-arrow-right::before { content: "\f1c3"; }
+.bi-box-arrow-up-left::before { content: "\f1c4"; }
+.bi-box-arrow-up-right::before { content: "\f1c5"; }
+.bi-box-arrow-up::before { content: "\f1c6"; }
+.bi-box-seam::before { content: "\f1c7"; }
+.bi-box::before { content: "\f1c8"; }
+.bi-braces::before { content: "\f1c9"; }
+.bi-bricks::before { content: "\f1ca"; }
+.bi-briefcase-fill::before { content: "\f1cb"; }
+.bi-briefcase::before { content: "\f1cc"; }
+.bi-brightness-alt-high-fill::before { content: "\f1cd"; }
+.bi-brightness-alt-high::before { content: "\f1ce"; }
+.bi-brightness-alt-low-fill::before { content: "\f1cf"; }
+.bi-brightness-alt-low::before { content: "\f1d0"; }
+.bi-brightness-high-fill::before { content: "\f1d1"; }
+.bi-brightness-high::before { content: "\f1d2"; }
+.bi-brightness-low-fill::before { content: "\f1d3"; }
+.bi-brightness-low::before { content: "\f1d4"; }
+.bi-broadcast-pin::before { content: "\f1d5"; }
+.bi-broadcast::before { content: "\f1d6"; }
+.bi-brush-fill::before { content: "\f1d7"; }
+.bi-brush::before { content: "\f1d8"; }
+.bi-bucket-fill::before { content: "\f1d9"; }
+.bi-bucket::before { content: "\f1da"; }
+.bi-bug-fill::before { content: "\f1db"; }
+.bi-bug::before { content: "\f1dc"; }
+.bi-building::before { content: "\f1dd"; }
+.bi-bullseye::before { content: "\f1de"; }
+.bi-calculator-fill::before { content: "\f1df"; }
+.bi-calculator::before { content: "\f1e0"; }
+.bi-calendar-check-fill::before { content: "\f1e1"; }
+.bi-calendar-check::before { content: "\f1e2"; }
+.bi-calendar-date-fill::before { content: "\f1e3"; }
+.bi-calendar-date::before { content: "\f1e4"; }
+.bi-calendar-day-fill::before { content: "\f1e5"; }
+.bi-calendar-day::before { content: "\f1e6"; }
+.bi-calendar-event-fill::before { content: "\f1e7"; }
+.bi-calendar-event::before { content: "\f1e8"; }
+.bi-calendar-fill::before { content: "\f1e9"; }
+.bi-calendar-minus-fill::before { content: "\f1ea"; }
+.bi-calendar-minus::before { content: "\f1eb"; }
+.bi-calendar-month-fill::before { content: "\f1ec"; }
+.bi-calendar-month::before { content: "\f1ed"; }
+.bi-calendar-plus-fill::before { content: "\f1ee"; }
+.bi-calendar-plus::before { content: "\f1ef"; }
+.bi-calendar-range-fill::before { content: "\f1f0"; }
+.bi-calendar-range::before { content: "\f1f1"; }
+.bi-calendar-week-fill::before { content: "\f1f2"; }
+.bi-calendar-week::before { content: "\f1f3"; }
+.bi-calendar-x-fill::before { content: "\f1f4"; }
+.bi-calendar-x::before { content: "\f1f5"; }
+.bi-calendar::before { content: "\f1f6"; }
+.bi-calendar2-check-fill::before { content: "\f1f7"; }
+.bi-calendar2-check::before { content: "\f1f8"; }
+.bi-calendar2-date-fill::before { content: "\f1f9"; }
+.bi-calendar2-date::before { content: "\f1fa"; }
+.bi-calendar2-day-fill::before { content: "\f1fb"; }
+.bi-calendar2-day::before { content: "\f1fc"; }
+.bi-calendar2-event-fill::before { content: "\f1fd"; }
+.bi-calendar2-event::before { content: "\f1fe"; }
+.bi-calendar2-fill::before { content: "\f1ff"; }
+.bi-calendar2-minus-fill::before { content: "\f200"; }
+.bi-calendar2-minus::before { content: "\f201"; }
+.bi-calendar2-month-fill::before { content: "\f202"; }
+.bi-calendar2-month::before { content: "\f203"; }
+.bi-calendar2-plus-fill::before { content: "\f204"; }
+.bi-calendar2-plus::before { content: "\f205"; }
+.bi-calendar2-range-fill::before { content: "\f206"; }
+.bi-calendar2-range::before { content: "\f207"; }
+.bi-calendar2-week-fill::before { content: "\f208"; }
+.bi-calendar2-week::before { content: "\f209"; }
+.bi-calendar2-x-fill::before { content: "\f20a"; }
+.bi-calendar2-x::before { content: "\f20b"; }
+.bi-calendar2::before { content: "\f20c"; }
+.bi-calendar3-event-fill::before { content: "\f20d"; }
+.bi-calendar3-event::before { content: "\f20e"; }
+.bi-calendar3-fill::before { content: "\f20f"; }
+.bi-calendar3-range-fill::before { content: "\f210"; }
+.bi-calendar3-range::before { content: "\f211"; }
+.bi-calendar3-week-fill::before { content: "\f212"; }
+.bi-calendar3-week::before { content: "\f213"; }
+.bi-calendar3::before { content: "\f214"; }
+.bi-calendar4-event::before { content: "\f215"; }
+.bi-calendar4-range::before { content: "\f216"; }
+.bi-calendar4-week::before { content: "\f217"; }
+.bi-calendar4::before { content: "\f218"; }
+.bi-camera-fill::before { content: "\f219"; }
+.bi-camera-reels-fill::before { content: "\f21a"; }
+.bi-camera-reels::before { content: "\f21b"; }
+.bi-camera-video-fill::before { content: "\f21c"; }
+.bi-camera-video-off-fill::before { content: "\f21d"; }
+.bi-camera-video-off::before { content: "\f21e"; }
+.bi-camera-video::before { content: "\f21f"; }
+.bi-camera::before { content: "\f220"; }
+.bi-camera2::before { content: "\f221"; }
+.bi-capslock-fill::before { content: "\f222"; }
+.bi-capslock::before { content: "\f223"; }
+.bi-card-checklist::before { content: "\f224"; }
+.bi-card-heading::before { content: "\f225"; }
+.bi-card-image::before { content: "\f226"; }
+.bi-card-list::before { content: "\f227"; }
+.bi-card-text::before { content: "\f228"; }
+.bi-caret-down-fill::before { content: "\f229"; }
+.bi-caret-down-square-fill::before { content: "\f22a"; }
+.bi-caret-down-square::before { content: "\f22b"; }
+.bi-caret-down::before { content: "\f22c"; }
+.bi-caret-left-fill::before { content: "\f22d"; }
+.bi-caret-left-square-fill::before { content: "\f22e"; }
+.bi-caret-left-square::before { content: "\f22f"; }
+.bi-caret-left::before { content: "\f230"; }
+.bi-caret-right-fill::before { content: "\f231"; }
+.bi-caret-right-square-fill::before { content: "\f232"; }
+.bi-caret-right-square::before { content: "\f233"; }
+.bi-caret-right::before { content: "\f234"; }
+.bi-caret-up-fill::before { content: "\f235"; }
+.bi-caret-up-square-fill::before { content: "\f236"; }
+.bi-caret-up-square::before { content: "\f237"; }
+.bi-caret-up::before { content: "\f238"; }
+.bi-cart-check-fill::before { content: "\f239"; }
+.bi-cart-check::before { content: "\f23a"; }
+.bi-cart-dash-fill::before { content: "\f23b"; }
+.bi-cart-dash::before { content: "\f23c"; }
+.bi-cart-fill::before { content: "\f23d"; }
+.bi-cart-plus-fill::before { content: "\f23e"; }
+.bi-cart-plus::before { content: "\f23f"; }
+.bi-cart-x-fill::before { content: "\f240"; }
+.bi-cart-x::before { content: "\f241"; }
+.bi-cart::before { content: "\f242"; }
+.bi-cart2::before { content: "\f243"; }
+.bi-cart3::before { content: "\f244"; }
+.bi-cart4::before { content: "\f245"; }
+.bi-cash-stack::before { content: "\f246"; }
+.bi-cash::before { content: "\f247"; }
+.bi-cast::before { content: "\f248"; }
+.bi-chat-dots-fill::before { content: "\f249"; }
+.bi-chat-dots::before { content: "\f24a"; }
+.bi-chat-fill::before { content: "\f24b"; }
+.bi-chat-left-dots-fill::before { content: "\f24c"; }
+.bi-chat-left-dots::before { content: "\f24d"; }
+.bi-chat-left-fill::before { content: "\f24e"; }
+.bi-chat-left-quote-fill::before { content: "\f24f"; }
+.bi-chat-left-quote::before { content: "\f250"; }
+.bi-chat-left-text-fill::before { content: "\f251"; }
+.bi-chat-left-text::before { content: "\f252"; }
+.bi-chat-left::before { content: "\f253"; }
+.bi-chat-quote-fill::before { content: "\f254"; }
+.bi-chat-quote::before { content: "\f255"; }
+.bi-chat-right-dots-fill::before { content: "\f256"; }
+.bi-chat-right-dots::before { content: "\f257"; }
+.bi-chat-right-fill::before { content: "\f258"; }
+.bi-chat-right-quote-fill::before { content: "\f259"; }
+.bi-chat-right-quote::before { content: "\f25a"; }
+.bi-chat-right-text-fill::before { content: "\f25b"; }
+.bi-chat-right-text::before { content: "\f25c"; }
+.bi-chat-right::before { content: "\f25d"; }
+.bi-chat-square-dots-fill::before { content: "\f25e"; }
+.bi-chat-square-dots::before { content: "\f25f"; }
+.bi-chat-square-fill::before { content: "\f260"; }
+.bi-chat-square-quote-fill::before { content: "\f261"; }
+.bi-chat-square-quote::before { content: "\f262"; }
+.bi-chat-square-text-fill::before { content: "\f263"; }
+.bi-chat-square-text::before { content: "\f264"; }
+.bi-chat-square::before { content: "\f265"; }
+.bi-chat-text-fill::before { content: "\f266"; }
+.bi-chat-text::before { content: "\f267"; }
+.bi-chat::before { content: "\f268"; }
+.bi-check-all::before { content: "\f269"; }
+.bi-check-circle-fill::before { content: "\f26a"; }
+.bi-check-circle::before { content: "\f26b"; }
+.bi-check-square-fill::before { content: "\f26c"; }
+.bi-check-square::before { content: "\f26d"; }
+.bi-check::before { content: "\f26e"; }
+.bi-check2-all::before { content: "\f26f"; }
+.bi-check2-circle::before { content: "\f270"; }
+.bi-check2-square::before { content: "\f271"; }
+.bi-check2::before { content: "\f272"; }
+.bi-chevron-bar-contract::before { content: "\f273"; }
+.bi-chevron-bar-down::before { content: "\f274"; }
+.bi-chevron-bar-expand::before { content: "\f275"; }
+.bi-chevron-bar-left::before { content: "\f276"; }
+.bi-chevron-bar-right::before { content: "\f277"; }
+.bi-chevron-bar-up::before { content: "\f278"; }
+.bi-chevron-compact-down::before { content: "\f279"; }
+.bi-chevron-compact-left::before { content: "\f27a"; }
+.bi-chevron-compact-right::before { content: "\f27b"; }
+.bi-chevron-compact-up::before { content: "\f27c"; }
+.bi-chevron-contract::before { content: "\f27d"; }
+.bi-chevron-double-down::before { content: "\f27e"; }
+.bi-chevron-double-left::before { content: "\f27f"; }
+.bi-chevron-double-right::before { content: "\f280"; }
+.bi-chevron-double-up::before { content: "\f281"; }
+.bi-chevron-down::before { content: "\f282"; }
+.bi-chevron-expand::before { content: "\f283"; }
+.bi-chevron-left::before { content: "\f284"; }
+.bi-chevron-right::before { content: "\f285"; }
+.bi-chevron-up::before { content: "\f286"; }
+.bi-circle-fill::before { content: "\f287"; }
+.bi-circle-half::before { content: "\f288"; }
+.bi-circle-square::before { content: "\f289"; }
+.bi-circle::before { content: "\f28a"; }
+.bi-clipboard-check::before { content: "\f28b"; }
+.bi-clipboard-data::before { content: "\f28c"; }
+.bi-clipboard-minus::before { content: "\f28d"; }
+.bi-clipboard-plus::before { content: "\f28e"; }
+.bi-clipboard-x::before { content: "\f28f"; }
+.bi-clipboard::before { content: "\f290"; }
+.bi-clock-fill::before { content: "\f291"; }
+.bi-clock-history::before { content: "\f292"; }
+.bi-clock::before { content: "\f293"; }
+.bi-cloud-arrow-down-fill::before { content: "\f294"; }
+.bi-cloud-arrow-down::before { content: "\f295"; }
+.bi-cloud-arrow-up-fill::before { content: "\f296"; }
+.bi-cloud-arrow-up::before { content: "\f297"; }
+.bi-cloud-check-fill::before { content: "\f298"; }
+.bi-cloud-check::before { content: "\f299"; }
+.bi-cloud-download-fill::before { content: "\f29a"; }
+.bi-cloud-download::before { content: "\f29b"; }
+.bi-cloud-drizzle-fill::before { content: "\f29c"; }
+.bi-cloud-drizzle::before { content: "\f29d"; }
+.bi-cloud-fill::before { content: "\f29e"; }
+.bi-cloud-fog-fill::before { content: "\f29f"; }
+.bi-cloud-fog::before { content: "\f2a0"; }
+.bi-cloud-fog2-fill::before { content: "\f2a1"; }
+.bi-cloud-fog2::before { content: "\f2a2"; }
+.bi-cloud-hail-fill::before { content: "\f2a3"; }
+.bi-cloud-hail::before { content: "\f2a4"; }
+.bi-cloud-haze-fill::before { content: "\f2a6"; }
+.bi-cloud-haze::before { content: "\f2a7"; }
+.bi-cloud-haze2-fill::before { content: "\f2a8"; }
+.bi-cloud-lightning-fill::before { content: "\f2a9"; }
+.bi-cloud-lightning-rain-fill::before { content: "\f2aa"; }
+.bi-cloud-lightning-rain::before { content: "\f2ab"; }
+.bi-cloud-lightning::before { content: "\f2ac"; }
+.bi-cloud-minus-fill::before { content: "\f2ad"; }
+.bi-cloud-minus::before { content: "\f2ae"; }
+.bi-cloud-moon-fill::before { content: "\f2af"; }
+.bi-cloud-moon::before { content: "\f2b0"; }
+.bi-cloud-plus-fill::before { content: "\f2b1"; }
+.bi-cloud-plus::before { content: "\f2b2"; }
+.bi-cloud-rain-fill::before { content: "\f2b3"; }
+.bi-cloud-rain-heavy-fill::before { content: "\f2b4"; }
+.bi-cloud-rain-heavy::before { content: "\f2b5"; }
+.bi-cloud-rain::before { content: "\f2b6"; }
+.bi-cloud-slash-fill::before { content: "\f2b7"; }
+.bi-cloud-slash::before { content: "\f2b8"; }
+.bi-cloud-sleet-fill::before { content: "\f2b9"; }
+.bi-cloud-sleet::before { content: "\f2ba"; }
+.bi-cloud-snow-fill::before { content: "\f2bb"; }
+.bi-cloud-snow::before { content: "\f2bc"; }
+.bi-cloud-sun-fill::before { content: "\f2bd"; }
+.bi-cloud-sun::before { content: "\f2be"; }
+.bi-cloud-upload-fill::before { content: "\f2bf"; }
+.bi-cloud-upload::before { content: "\f2c0"; }
+.bi-cloud::before { content: "\f2c1"; }
+.bi-clouds-fill::before { content: "\f2c2"; }
+.bi-clouds::before { content: "\f2c3"; }
+.bi-cloudy-fill::before { content: "\f2c4"; }
+.bi-cloudy::before { content: "\f2c5"; }
+.bi-code-slash::before { content: "\f2c6"; }
+.bi-code-square::before { content: "\f2c7"; }
+.bi-code::before { content: "\f2c8"; }
+.bi-collection-fill::before { content: "\f2c9"; }
+.bi-collection-play-fill::before { content: "\f2ca"; }
+.bi-collection-play::before { content: "\f2cb"; }
+.bi-collection::before { content: "\f2cc"; }
+.bi-columns-gap::before { content: "\f2cd"; }
+.bi-columns::before { content: "\f2ce"; }
+.bi-command::before { content: "\f2cf"; }
+.bi-compass-fill::before { content: "\f2d0"; }
+.bi-compass::before { content: "\f2d1"; }
+.bi-cone-striped::before { content: "\f2d2"; }
+.bi-cone::before { content: "\f2d3"; }
+.bi-controller::before { content: "\f2d4"; }
+.bi-cpu-fill::before { content: "\f2d5"; }
+.bi-cpu::before { content: "\f2d6"; }
+.bi-credit-card-2-back-fill::before { content: "\f2d7"; }
+.bi-credit-card-2-back::before { content: "\f2d8"; }
+.bi-credit-card-2-front-fill::before { content: "\f2d9"; }
+.bi-credit-card-2-front::before { content: "\f2da"; }
+.bi-credit-card-fill::before { content: "\f2db"; }
+.bi-credit-card::before { content: "\f2dc"; }
+.bi-crop::before { content: "\f2dd"; }
+.bi-cup-fill::before { content: "\f2de"; }
+.bi-cup-straw::before { content: "\f2df"; }
+.bi-cup::before { content: "\f2e0"; }
+.bi-cursor-fill::before { content: "\f2e1"; }
+.bi-cursor-text::before { content: "\f2e2"; }
+.bi-cursor::before { content: "\f2e3"; }
+.bi-dash-circle-dotted::before { content: "\f2e4"; }
+.bi-dash-circle-fill::before { content: "\f2e5"; }
+.bi-dash-circle::before { content: "\f2e6"; }
+.bi-dash-square-dotted::before { content: "\f2e7"; }
+.bi-dash-square-fill::before { content: "\f2e8"; }
+.bi-dash-square::before { content: "\f2e9"; }
+.bi-dash::before { content: "\f2ea"; }
+.bi-diagram-2-fill::before { content: "\f2eb"; }
+.bi-diagram-2::before { content: "\f2ec"; }
+.bi-diagram-3-fill::before { content: "\f2ed"; }
+.bi-diagram-3::before { content: "\f2ee"; }
+.bi-diamond-fill::before { content: "\f2ef"; }
+.bi-diamond-half::before { content: "\f2f0"; }
+.bi-diamond::before { content: "\f2f1"; }
+.bi-dice-1-fill::before { content: "\f2f2"; }
+.bi-dice-1::before { content: "\f2f3"; }
+.bi-dice-2-fill::before { content: "\f2f4"; }
+.bi-dice-2::before { content: "\f2f5"; }
+.bi-dice-3-fill::before { content: "\f2f6"; }
+.bi-dice-3::before { content: "\f2f7"; }
+.bi-dice-4-fill::before { content: "\f2f8"; }
+.bi-dice-4::before { content: "\f2f9"; }
+.bi-dice-5-fill::before { content: "\f2fa"; }
+.bi-dice-5::before { content: "\f2fb"; }
+.bi-dice-6-fill::before { content: "\f2fc"; }
+.bi-dice-6::before { content: "\f2fd"; }
+.bi-disc-fill::before { content: "\f2fe"; }
+.bi-disc::before { content: "\f2ff"; }
+.bi-discord::before { content: "\f300"; }
+.bi-display-fill::before { content: "\f301"; }
+.bi-display::before { content: "\f302"; }
+.bi-distribute-horizontal::before { content: "\f303"; }
+.bi-distribute-vertical::before { content: "\f304"; }
+.bi-door-closed-fill::before { content: "\f305"; }
+.bi-door-closed::before { content: "\f306"; }
+.bi-door-open-fill::before { content: "\f307"; }
+.bi-door-open::before { content: "\f308"; }
+.bi-dot::before { content: "\f309"; }
+.bi-download::before { content: "\f30a"; }
+.bi-droplet-fill::before { content: "\f30b"; }
+.bi-droplet-half::before { content: "\f30c"; }
+.bi-droplet::before { content: "\f30d"; }
+.bi-earbuds::before { content: "\f30e"; }
+.bi-easel-fill::before { content: "\f30f"; }
+.bi-easel::before { content: "\f310"; }
+.bi-egg-fill::before { content: "\f311"; }
+.bi-egg-fried::before { content: "\f312"; }
+.bi-egg::before { content: "\f313"; }
+.bi-eject-fill::before { content: "\f314"; }
+.bi-eject::before { content: "\f315"; }
+.bi-emoji-angry-fill::before { content: "\f316"; }
+.bi-emoji-angry::before { content: "\f317"; }
+.bi-emoji-dizzy-fill::before { content: "\f318"; }
+.bi-emoji-dizzy::before { content: "\f319"; }
+.bi-emoji-expressionless-fill::before { content: "\f31a"; }
+.bi-emoji-expressionless::before { content: "\f31b"; }
+.bi-emoji-frown-fill::before { content: "\f31c"; }
+.bi-emoji-frown::before { content: "\f31d"; }
+.bi-emoji-heart-eyes-fill::before { content: "\f31e"; }
+.bi-emoji-heart-eyes::before { content: "\f31f"; }
+.bi-emoji-laughing-fill::before { content: "\f320"; }
+.bi-emoji-laughing::before { content: "\f321"; }
+.bi-emoji-neutral-fill::before { content: "\f322"; }
+.bi-emoji-neutral::before { content: "\f323"; }
+.bi-emoji-smile-fill::before { content: "\f324"; }
+.bi-emoji-smile-upside-down-fill::before { content: "\f325"; }
+.bi-emoji-smile-upside-down::before { content: "\f326"; }
+.bi-emoji-smile::before { content: "\f327"; }
+.bi-emoji-sunglasses-fill::before { content: "\f328"; }
+.bi-emoji-sunglasses::before { content: "\f329"; }
+.bi-emoji-wink-fill::before { content: "\f32a"; }
+.bi-emoji-wink::before { content: "\f32b"; }
+.bi-envelope-fill::before { content: "\f32c"; }
+.bi-envelope-open-fill::before { content: "\f32d"; }
+.bi-envelope-open::before { content: "\f32e"; }
+.bi-envelope::before { content: "\f32f"; }
+.bi-eraser-fill::before { content: "\f330"; }
+.bi-eraser::before { content: "\f331"; }
+.bi-exclamation-circle-fill::before { content: "\f332"; }
+.bi-exclamation-circle::before { content: "\f333"; }
+.bi-exclamation-diamond-fill::before { content: "\f334"; }
+.bi-exclamation-diamond::before { content: "\f335"; }
+.bi-exclamation-octagon-fill::before { content: "\f336"; }
+.bi-exclamation-octagon::before { content: "\f337"; }
+.bi-exclamation-square-fill::before { content: "\f338"; }
+.bi-exclamation-square::before { content: "\f339"; }
+.bi-exclamation-triangle-fill::before { content: "\f33a"; }
+.bi-exclamation-triangle::before { content: "\f33b"; }
+.bi-exclamation::before { content: "\f33c"; }
+.bi-exclude::before { content: "\f33d"; }
+.bi-eye-fill::before { content: "\f33e"; }
+.bi-eye-slash-fill::before { content: "\f33f"; }
+.bi-eye-slash::before { content: "\f340"; }
+.bi-eye::before { content: "\f341"; }
+.bi-eyedropper::before { content: "\f342"; }
+.bi-eyeglasses::before { content: "\f343"; }
+.bi-facebook::before { content: "\f344"; }
+.bi-file-arrow-down-fill::before { content: "\f345"; }
+.bi-file-arrow-down::before { content: "\f346"; }
+.bi-file-arrow-up-fill::before { content: "\f347"; }
+.bi-file-arrow-up::before { content: "\f348"; }
+.bi-file-bar-graph-fill::before { content: "\f349"; }
+.bi-file-bar-graph::before { content: "\f34a"; }
+.bi-file-binary-fill::before { content: "\f34b"; }
+.bi-file-binary::before { content: "\f34c"; }
+.bi-file-break-fill::before { content: "\f34d"; }
+.bi-file-break::before { content: "\f34e"; }
+.bi-file-check-fill::before { content: "\f34f"; }
+.bi-file-check::before { content: "\f350"; }
+.bi-file-code-fill::before { content: "\f351"; }
+.bi-file-code::before { content: "\f352"; }
+.bi-file-diff-fill::before { content: "\f353"; }
+.bi-file-diff::before { content: "\f354"; }
+.bi-file-earmark-arrow-down-fill::before { content: "\f355"; }
+.bi-file-earmark-arrow-down::before { content: "\f356"; }
+.bi-file-earmark-arrow-up-fill::before { content: "\f357"; }
+.bi-file-earmark-arrow-up::before { content: "\f358"; }
+.bi-file-earmark-bar-graph-fill::before { content: "\f359"; }
+.bi-file-earmark-bar-graph::before { content: "\f35a"; }
+.bi-file-earmark-binary-fill::before { content: "\f35b"; }
+.bi-file-earmark-binary::before { content: "\f35c"; }
+.bi-file-earmark-break-fill::before { content: "\f35d"; }
+.bi-file-earmark-break::before { content: "\f35e"; }
+.bi-file-earmark-check-fill::before { content: "\f35f"; }
+.bi-file-earmark-check::before { content: "\f360"; }
+.bi-file-earmark-code-fill::before { content: "\f361"; }
+.bi-file-earmark-code::before { content: "\f362"; }
+.bi-file-earmark-diff-fill::before { content: "\f363"; }
+.bi-file-earmark-diff::before { content: "\f364"; }
+.bi-file-earmark-easel-fill::before { content: "\f365"; }
+.bi-file-earmark-easel::before { content: "\f366"; }
+.bi-file-earmark-excel-fill::before { content: "\f367"; }
+.bi-file-earmark-excel::before { content: "\f368"; }
+.bi-file-earmark-fill::before { content: "\f369"; }
+.bi-file-earmark-font-fill::before { content: "\f36a"; }
+.bi-file-earmark-font::before { content: "\f36b"; }
+.bi-file-earmark-image-fill::before { content: "\f36c"; }
+.bi-file-earmark-image::before { content: "\f36d"; }
+.bi-file-earmark-lock-fill::before { content: "\f36e"; }
+.bi-file-earmark-lock::before { content: "\f36f"; }
+.bi-file-earmark-lock2-fill::before { content: "\f370"; }
+.bi-file-earmark-lock2::before { content: "\f371"; }
+.bi-file-earmark-medical-fill::before { content: "\f372"; }
+.bi-file-earmark-medical::before { content: "\f373"; }
+.bi-file-earmark-minus-fill::before { content: "\f374"; }
+.bi-file-earmark-minus::before { content: "\f375"; }
+.bi-file-earmark-music-fill::before { content: "\f376"; }
+.bi-file-earmark-music::before { content: "\f377"; }
+.bi-file-earmark-person-fill::before { content: "\f378"; }
+.bi-file-earmark-person::before { content: "\f379"; }
+.bi-file-earmark-play-fill::before { content: "\f37a"; }
+.bi-file-earmark-play::before { content: "\f37b"; }
+.bi-file-earmark-plus-fill::before { content: "\f37c"; }
+.bi-file-earmark-plus::before { content: "\f37d"; }
+.bi-file-earmark-post-fill::before { content: "\f37e"; }
+.bi-file-earmark-post::before { content: "\f37f"; }
+.bi-file-earmark-ppt-fill::before { content: "\f380"; }
+.bi-file-earmark-ppt::before { content: "\f381"; }
+.bi-file-earmark-richtext-fill::before { content: "\f382"; }
+.bi-file-earmark-richtext::before { content: "\f383"; }
+.bi-file-earmark-ruled-fill::before { content: "\f384"; }
+.bi-file-earmark-ruled::before { content: "\f385"; }
+.bi-file-earmark-slides-fill::before { content: "\f386"; }
+.bi-file-earmark-slides::before { content: "\f387"; }
+.bi-file-earmark-spreadsheet-fill::before { content: "\f388"; }
+.bi-file-earmark-spreadsheet::before { content: "\f389"; }
+.bi-file-earmark-text-fill::before { content: "\f38a"; }
+.bi-file-earmark-text::before { content: "\f38b"; }
+.bi-file-earmark-word-fill::before { content: "\f38c"; }
+.bi-file-earmark-word::before { content: "\f38d"; }
+.bi-file-earmark-x-fill::before { content: "\f38e"; }
+.bi-file-earmark-x::before { content: "\f38f"; }
+.bi-file-earmark-zip-fill::before { content: "\f390"; }
+.bi-file-earmark-zip::before { content: "\f391"; }
+.bi-file-earmark::before { content: "\f392"; }
+.bi-file-easel-fill::before { content: "\f393"; }
+.bi-file-easel::before { content: "\f394"; }
+.bi-file-excel-fill::before { content: "\f395"; }
+.bi-file-excel::before { content: "\f396"; }
+.bi-file-fill::before { content: "\f397"; }
+.bi-file-font-fill::before { content: "\f398"; }
+.bi-file-font::before { content: "\f399"; }
+.bi-file-image-fill::before { content: "\f39a"; }
+.bi-file-image::before { content: "\f39b"; }
+.bi-file-lock-fill::before { content: "\f39c"; }
+.bi-file-lock::before { content: "\f39d"; }
+.bi-file-lock2-fill::before { content: "\f39e"; }
+.bi-file-lock2::before { content: "\f39f"; }
+.bi-file-medical-fill::before { content: "\f3a0"; }
+.bi-file-medical::before { content: "\f3a1"; }
+.bi-file-minus-fill::before { content: "\f3a2"; }
+.bi-file-minus::before { content: "\f3a3"; }
+.bi-file-music-fill::before { content: "\f3a4"; }
+.bi-file-music::before { content: "\f3a5"; }
+.bi-file-person-fill::before { content: "\f3a6"; }
+.bi-file-person::before { content: "\f3a7"; }
+.bi-file-play-fill::before { content: "\f3a8"; }
+.bi-file-play::before { content: "\f3a9"; }
+.bi-file-plus-fill::before { content: "\f3aa"; }
+.bi-file-plus::before { content: "\f3ab"; }
+.bi-file-post-fill::before { content: "\f3ac"; }
+.bi-file-post::before { content: "\f3ad"; }
+.bi-file-ppt-fill::before { content: "\f3ae"; }
+.bi-file-ppt::before { content: "\f3af"; }
+.bi-file-richtext-fill::before { content: "\f3b0"; }
+.bi-file-richtext::before { content: "\f3b1"; }
+.bi-file-ruled-fill::before { content: "\f3b2"; }
+.bi-file-ruled::before { content: "\f3b3"; }
+.bi-file-slides-fill::before { content: "\f3b4"; }
+.bi-file-slides::before { content: "\f3b5"; }
+.bi-file-spreadsheet-fill::before { content: "\f3b6"; }
+.bi-file-spreadsheet::before { content: "\f3b7"; }
+.bi-file-text-fill::before { content: "\f3b8"; }
+.bi-file-text::before { content: "\f3b9"; }
+.bi-file-word-fill::before { content: "\f3ba"; }
+.bi-file-word::before { content: "\f3bb"; }
+.bi-file-x-fill::before { content: "\f3bc"; }
+.bi-file-x::before { content: "\f3bd"; }
+.bi-file-zip-fill::before { content: "\f3be"; }
+.bi-file-zip::before { content: "\f3bf"; }
+.bi-file::before { content: "\f3c0"; }
+.bi-files-alt::before { content: "\f3c1"; }
+.bi-files::before { content: "\f3c2"; }
+.bi-film::before { content: "\f3c3"; }
+.bi-filter-circle-fill::before { content: "\f3c4"; }
+.bi-filter-circle::before { content: "\f3c5"; }
+.bi-filter-left::before { content: "\f3c6"; }
+.bi-filter-right::before { content: "\f3c7"; }
+.bi-filter-square-fill::before { content: "\f3c8"; }
+.bi-filter-square::before { content: "\f3c9"; }
+.bi-filter::before { content: "\f3ca"; }
+.bi-flag-fill::before { content: "\f3cb"; }
+.bi-flag::before { content: "\f3cc"; }
+.bi-flower1::before { content: "\f3cd"; }
+.bi-flower2::before { content: "\f3ce"; }
+.bi-flower3::before { content: "\f3cf"; }
+.bi-folder-check::before { content: "\f3d0"; }
+.bi-folder-fill::before { content: "\f3d1"; }
+.bi-folder-minus::before { content: "\f3d2"; }
+.bi-folder-plus::before { content: "\f3d3"; }
+.bi-folder-symlink-fill::before { content: "\f3d4"; }
+.bi-folder-symlink::before { content: "\f3d5"; }
+.bi-folder-x::before { content: "\f3d6"; }
+.bi-folder::before { content: "\f3d7"; }
+.bi-folder2-open::before { content: "\f3d8"; }
+.bi-folder2::before { content: "\f3d9"; }
+.bi-fonts::before { content: "\f3da"; }
+.bi-forward-fill::before { content: "\f3db"; }
+.bi-forward::before { content: "\f3dc"; }
+.bi-front::before { content: "\f3dd"; }
+.bi-fullscreen-exit::before { content: "\f3de"; }
+.bi-fullscreen::before { content: "\f3df"; }
+.bi-funnel-fill::before { content: "\f3e0"; }
+.bi-funnel::before { content: "\f3e1"; }
+.bi-gear-fill::before { content: "\f3e2"; }
+.bi-gear-wide-connected::before { content: "\f3e3"; }
+.bi-gear-wide::before { content: "\f3e4"; }
+.bi-gear::before { content: "\f3e5"; }
+.bi-gem::before { content: "\f3e6"; }
+.bi-geo-alt-fill::before { content: "\f3e7"; }
+.bi-geo-alt::before { content: "\f3e8"; }
+.bi-geo-fill::before { content: "\f3e9"; }
+.bi-geo::before { content: "\f3ea"; }
+.bi-gift-fill::before { content: "\f3eb"; }
+.bi-gift::before { content: "\f3ec"; }
+.bi-github::before { content: "\f3ed"; }
+.bi-globe::before { content: "\f3ee"; }
+.bi-globe2::before { content: "\f3ef"; }
+.bi-google::before { content: "\f3f0"; }
+.bi-graph-down::before { content: "\f3f1"; }
+.bi-graph-up::before { content: "\f3f2"; }
+.bi-grid-1x2-fill::before { content: "\f3f3"; }
+.bi-grid-1x2::before { content: "\f3f4"; }
+.bi-grid-3x2-gap-fill::before { content: "\f3f5"; }
+.bi-grid-3x2-gap::before { content: "\f3f6"; }
+.bi-grid-3x2::before { content: "\f3f7"; }
+.bi-grid-3x3-gap-fill::before { content: "\f3f8"; }
+.bi-grid-3x3-gap::before { content: "\f3f9"; }
+.bi-grid-3x3::before { content: "\f3fa"; }
+.bi-grid-fill::before { content: "\f3fb"; }
+.bi-grid::before { content: "\f3fc"; }
+.bi-grip-horizontal::before { content: "\f3fd"; }
+.bi-grip-vertical::before { content: "\f3fe"; }
+.bi-hammer::before { content: "\f3ff"; }
+.bi-hand-index-fill::before { content: "\f400"; }
+.bi-hand-index-thumb-fill::before { content: "\f401"; }
+.bi-hand-index-thumb::before { content: "\f402"; }
+.bi-hand-index::before { content: "\f403"; }
+.bi-hand-thumbs-down-fill::before { content: "\f404"; }
+.bi-hand-thumbs-down::before { content: "\f405"; }
+.bi-hand-thumbs-up-fill::before { content: "\f406"; }
+.bi-hand-thumbs-up::before { content: "\f407"; }
+.bi-handbag-fill::before { content: "\f408"; }
+.bi-handbag::before { content: "\f409"; }
+.bi-hash::before { content: "\f40a"; }
+.bi-hdd-fill::before { content: "\f40b"; }
+.bi-hdd-network-fill::before { content: "\f40c"; }
+.bi-hdd-network::before { content: "\f40d"; }
+.bi-hdd-rack-fill::before { content: "\f40e"; }
+.bi-hdd-rack::before { content: "\f40f"; }
+.bi-hdd-stack-fill::before { content: "\f410"; }
+.bi-hdd-stack::before { content: "\f411"; }
+.bi-hdd::before { content: "\f412"; }
+.bi-headphones::before { content: "\f413"; }
+.bi-headset::before { content: "\f414"; }
+.bi-heart-fill::before { content: "\f415"; }
+.bi-heart-half::before { content: "\f416"; }
+.bi-heart::before { content: "\f417"; }
+.bi-heptagon-fill::before { content: "\f418"; }
+.bi-heptagon-half::before { content: "\f419"; }
+.bi-heptagon::before { content: "\f41a"; }
+.bi-hexagon-fill::before { content: "\f41b"; }
+.bi-hexagon-half::before { content: "\f41c"; }
+.bi-hexagon::before { content: "\f41d"; }
+.bi-hourglass-bottom::before { content: "\f41e"; }
+.bi-hourglass-split::before { content: "\f41f"; }
+.bi-hourglass-top::before { content: "\f420"; }
+.bi-hourglass::before { content: "\f421"; }
+.bi-house-door-fill::before { content: "\f422"; }
+.bi-house-door::before { content: "\f423"; }
+.bi-house-fill::before { content: "\f424"; }
+.bi-house::before { content: "\f425"; }
+.bi-hr::before { content: "\f426"; }
+.bi-hurricane::before { content: "\f427"; }
+.bi-image-alt::before { content: "\f428"; }
+.bi-image-fill::before { content: "\f429"; }
+.bi-image::before { content: "\f42a"; }
+.bi-images::before { content: "\f42b"; }
+.bi-inbox-fill::before { content: "\f42c"; }
+.bi-inbox::before { content: "\f42d"; }
+.bi-inboxes-fill::before { content: "\f42e"; }
+.bi-inboxes::before { content: "\f42f"; }
+.bi-info-circle-fill::before { content: "\f430"; }
+.bi-info-circle::before { content: "\f431"; }
+.bi-info-square-fill::before { content: "\f432"; }
+.bi-info-square::before { content: "\f433"; }
+.bi-info::before { content: "\f434"; }
+.bi-input-cursor-text::before { content: "\f435"; }
+.bi-input-cursor::before { content: "\f436"; }
+.bi-instagram::before { content: "\f437"; }
+.bi-intersect::before { content: "\f438"; }
+.bi-journal-album::before { content: "\f439"; }
+.bi-journal-arrow-down::before { content: "\f43a"; }
+.bi-journal-arrow-up::before { content: "\f43b"; }
+.bi-journal-bookmark-fill::before { content: "\f43c"; }
+.bi-journal-bookmark::before { content: "\f43d"; }
+.bi-journal-check::before { content: "\f43e"; }
+.bi-journal-code::before { content: "\f43f"; }
+.bi-journal-medical::before { content: "\f440"; }
+.bi-journal-minus::before { content: "\f441"; }
+.bi-journal-plus::before { content: "\f442"; }
+.bi-journal-richtext::before { content: "\f443"; }
+.bi-journal-text::before { content: "\f444"; }
+.bi-journal-x::before { content: "\f445"; }
+.bi-journal::before { content: "\f446"; }
+.bi-journals::before { content: "\f447"; }
+.bi-joystick::before { content: "\f448"; }
+.bi-justify-left::before { content: "\f449"; }
+.bi-justify-right::before { content: "\f44a"; }
+.bi-justify::before { content: "\f44b"; }
+.bi-kanban-fill::before { content: "\f44c"; }
+.bi-kanban::before { content: "\f44d"; }
+.bi-key-fill::before { content: "\f44e"; }
+.bi-key::before { content: "\f44f"; }
+.bi-keyboard-fill::before { content: "\f450"; }
+.bi-keyboard::before { content: "\f451"; }
+.bi-ladder::before { content: "\f452"; }
+.bi-lamp-fill::before { content: "\f453"; }
+.bi-lamp::before { content: "\f454"; }
+.bi-laptop-fill::before { content: "\f455"; }
+.bi-laptop::before { content: "\f456"; }
+.bi-layer-backward::before { content: "\f457"; }
+.bi-layer-forward::before { content: "\f458"; }
+.bi-layers-fill::before { content: "\f459"; }
+.bi-layers-half::before { content: "\f45a"; }
+.bi-layers::before { content: "\f45b"; }
+.bi-layout-sidebar-inset-reverse::before { content: "\f45c"; }
+.bi-layout-sidebar-inset::before { content: "\f45d"; }
+.bi-layout-sidebar-reverse::before { content: "\f45e"; }
+.bi-layout-sidebar::before { content: "\f45f"; }
+.bi-layout-split::before { content: "\f460"; }
+.bi-layout-text-sidebar-reverse::before { content: "\f461"; }
+.bi-layout-text-sidebar::before { content: "\f462"; }
+.bi-layout-text-window-reverse::before { content: "\f463"; }
+.bi-layout-text-window::before { content: "\f464"; }
+.bi-layout-three-columns::before { content: "\f465"; }
+.bi-layout-wtf::before { content: "\f466"; }
+.bi-life-preserver::before { content: "\f467"; }
+.bi-lightbulb-fill::before { content: "\f468"; }
+.bi-lightbulb-off-fill::before { content: "\f469"; }
+.bi-lightbulb-off::before { content: "\f46a"; }
+.bi-lightbulb::before { content: "\f46b"; }
+.bi-lightning-charge-fill::before { content: "\f46c"; }
+.bi-lightning-charge::before { content: "\f46d"; }
+.bi-lightning-fill::before { content: "\f46e"; }
+.bi-lightning::before { content: "\f46f"; }
+.bi-link-45deg::before { content: "\f470"; }
+.bi-link::before { content: "\f471"; }
+.bi-linkedin::before { content: "\f472"; }
+.bi-list-check::before { content: "\f473"; }
+.bi-list-nested::before { content: "\f474"; }
+.bi-list-ol::before { content: "\f475"; }
+.bi-list-stars::before { content: "\f476"; }
+.bi-list-task::before { content: "\f477"; }
+.bi-list-ul::before { content: "\f478"; }
+.bi-list::before { content: "\f479"; }
+.bi-lock-fill::before { content: "\f47a"; }
+.bi-lock::before { content: "\f47b"; }
+.bi-mailbox::before { content: "\f47c"; }
+.bi-mailbox2::before { content: "\f47d"; }
+.bi-map-fill::before { content: "\f47e"; }
+.bi-map::before { content: "\f47f"; }
+.bi-markdown-fill::before { content: "\f480"; }
+.bi-markdown::before { content: "\f481"; }
+.bi-mask::before { content: "\f482"; }
+.bi-megaphone-fill::before { content: "\f483"; }
+.bi-megaphone::before { content: "\f484"; }
+.bi-menu-app-fill::before { content: "\f485"; }
+.bi-menu-app::before { content: "\f486"; }
+.bi-menu-button-fill::before { content: "\f487"; }
+.bi-menu-button-wide-fill::before { content: "\f488"; }
+.bi-menu-button-wide::before { content: "\f489"; }
+.bi-menu-button::before { content: "\f48a"; }
+.bi-menu-down::before { content: "\f48b"; }
+.bi-menu-up::before { content: "\f48c"; }
+.bi-mic-fill::before { content: "\f48d"; }
+.bi-mic-mute-fill::before { content: "\f48e"; }
+.bi-mic-mute::before { content: "\f48f"; }
+.bi-mic::before { content: "\f490"; }
+.bi-minecart-loaded::before { content: "\f491"; }
+.bi-minecart::before { content: "\f492"; }
+.bi-moisture::before { content: "\f493"; }
+.bi-moon-fill::before { content: "\f494"; }
+.bi-moon-stars-fill::before { content: "\f495"; }
+.bi-moon-stars::before { content: "\f496"; }
+.bi-moon::before { content: "\f497"; }
+.bi-mouse-fill::before { content: "\f498"; }
+.bi-mouse::before { content: "\f499"; }
+.bi-mouse2-fill::before { content: "\f49a"; }
+.bi-mouse2::before { content: "\f49b"; }
+.bi-mouse3-fill::before { content: "\f49c"; }
+.bi-mouse3::before { content: "\f49d"; }
+.bi-music-note-beamed::before { content: "\f49e"; }
+.bi-music-note-list::before { content: "\f49f"; }
+.bi-music-note::before { content: "\f4a0"; }
+.bi-music-player-fill::before { content: "\f4a1"; }
+.bi-music-player::before { content: "\f4a2"; }
+.bi-newspaper::before { content: "\f4a3"; }
+.bi-node-minus-fill::before { content: "\f4a4"; }
+.bi-node-minus::before { content: "\f4a5"; }
+.bi-node-plus-fill::before { content: "\f4a6"; }
+.bi-node-plus::before { content: "\f4a7"; }
+.bi-nut-fill::before { content: "\f4a8"; }
+.bi-nut::before { content: "\f4a9"; }
+.bi-octagon-fill::before { content: "\f4aa"; }
+.bi-octagon-half::before { content: "\f4ab"; }
+.bi-octagon::before { content: "\f4ac"; }
+.bi-option::before { content: "\f4ad"; }
+.bi-outlet::before { content: "\f4ae"; }
+.bi-paint-bucket::before { content: "\f4af"; }
+.bi-palette-fill::before { content: "\f4b0"; }
+.bi-palette::before { content: "\f4b1"; }
+.bi-palette2::before { content: "\f4b2"; }
+.bi-paperclip::before { content: "\f4b3"; }
+.bi-paragraph::before { content: "\f4b4"; }
+.bi-patch-check-fill::before { content: "\f4b5"; }
+.bi-patch-check::before { content: "\f4b6"; }
+.bi-patch-exclamation-fill::before { content: "\f4b7"; }
+.bi-patch-exclamation::before { content: "\f4b8"; }
+.bi-patch-minus-fill::before { content: "\f4b9"; }
+.bi-patch-minus::before { content: "\f4ba"; }
+.bi-patch-plus-fill::before { content: "\f4bb"; }
+.bi-patch-plus::before { content: "\f4bc"; }
+.bi-patch-question-fill::before { content: "\f4bd"; }
+.bi-patch-question::before { content: "\f4be"; }
+.bi-pause-btn-fill::before { content: "\f4bf"; }
+.bi-pause-btn::before { content: "\f4c0"; }
+.bi-pause-circle-fill::before { content: "\f4c1"; }
+.bi-pause-circle::before { content: "\f4c2"; }
+.bi-pause-fill::before { content: "\f4c3"; }
+.bi-pause::before { content: "\f4c4"; }
+.bi-peace-fill::before { content: "\f4c5"; }
+.bi-peace::before { content: "\f4c6"; }
+.bi-pen-fill::before { content: "\f4c7"; }
+.bi-pen::before { content: "\f4c8"; }
+.bi-pencil-fill::before { content: "\f4c9"; }
+.bi-pencil-square::before { content: "\f4ca"; }
+.bi-pencil::before { content: "\f4cb"; }
+.bi-pentagon-fill::before { content: "\f4cc"; }
+.bi-pentagon-half::before { content: "\f4cd"; }
+.bi-pentagon::before { content: "\f4ce"; }
+.bi-people-fill::before { content: "\f4cf"; }
+.bi-people::before { content: "\f4d0"; }
+.bi-percent::before { content: "\f4d1"; }
+.bi-person-badge-fill::before { content: "\f4d2"; }
+.bi-person-badge::before { content: "\f4d3"; }
+.bi-person-bounding-box::before { content: "\f4d4"; }
+.bi-person-check-fill::before { content: "\f4d5"; }
+.bi-person-check::before { content: "\f4d6"; }
+.bi-person-circle::before { content: "\f4d7"; }
+.bi-person-dash-fill::before { content: "\f4d8"; }
+.bi-person-dash::before { content: "\f4d9"; }
+.bi-person-fill::before { content: "\f4da"; }
+.bi-person-lines-fill::before { content: "\f4db"; }
+.bi-person-plus-fill::before { content: "\f4dc"; }
+.bi-person-plus::before { content: "\f4dd"; }
+.bi-person-square::before { content: "\f4de"; }
+.bi-person-x-fill::before { content: "\f4df"; }
+.bi-person-x::before { content: "\f4e0"; }
+.bi-person::before { content: "\f4e1"; }
+.bi-phone-fill::before { content: "\f4e2"; }
+.bi-phone-landscape-fill::before { content: "\f4e3"; }
+.bi-phone-landscape::before { content: "\f4e4"; }
+.bi-phone-vibrate-fill::before { content: "\f4e5"; }
+.bi-phone-vibrate::before { content: "\f4e6"; }
+.bi-phone::before { content: "\f4e7"; }
+.bi-pie-chart-fill::before { content: "\f4e8"; }
+.bi-pie-chart::before { content: "\f4e9"; }
+.bi-pin-angle-fill::before { content: "\f4ea"; }
+.bi-pin-angle::before { content: "\f4eb"; }
+.bi-pin-fill::before { content: "\f4ec"; }
+.bi-pin::before { content: "\f4ed"; }
+.bi-pip-fill::before { content: "\f4ee"; }
+.bi-pip::before { content: "\f4ef"; }
+.bi-play-btn-fill::before { content: "\f4f0"; }
+.bi-play-btn::before { content: "\f4f1"; }
+.bi-play-circle-fill::before { content: "\f4f2"; }
+.bi-play-circle::before { content: "\f4f3"; }
+.bi-play-fill::before { content: "\f4f4"; }
+.bi-play::before { content: "\f4f5"; }
+.bi-plug-fill::before { content: "\f4f6"; }
+.bi-plug::before { content: "\f4f7"; }
+.bi-plus-circle-dotted::before { content: "\f4f8"; }
+.bi-plus-circle-fill::before { content: "\f4f9"; }
+.bi-plus-circle::before { content: "\f4fa"; }
+.bi-plus-square-dotted::before { content: "\f4fb"; }
+.bi-plus-square-fill::before { content: "\f4fc"; }
+.bi-plus-square::before { content: "\f4fd"; }
+.bi-plus::before { content: "\f4fe"; }
+.bi-power::before { content: "\f4ff"; }
+.bi-printer-fill::before { content: "\f500"; }
+.bi-printer::before { content: "\f501"; }
+.bi-puzzle-fill::before { content: "\f502"; }
+.bi-puzzle::before { content: "\f503"; }
+.bi-question-circle-fill::before { content: "\f504"; }
+.bi-question-circle::before { content: "\f505"; }
+.bi-question-diamond-fill::before { content: "\f506"; }
+.bi-question-diamond::before { content: "\f507"; }
+.bi-question-octagon-fill::before { content: "\f508"; }
+.bi-question-octagon::before { content: "\f509"; }
+.bi-question-square-fill::before { content: "\f50a"; }
+.bi-question-square::before { content: "\f50b"; }
+.bi-question::before { content: "\f50c"; }
var(--bs-table-striped-bg)}.table-striped-columns>:not(caption)>tr>:nth-child(even){--bs-table-color-type: var(--bs-table-striped-color);--bs-table-bg-type: var(--bs-table-striped-bg)}.table-active{--bs-table-color-state: var(--bs-table-active-color);--bs-table-bg-state: var(--bs-table-active-bg)}.table-hover>tbody>tr:hover>*{--bs-table-color-state: var(--bs-table-hover-color);--bs-table-bg-state: var(--bs-table-hover-bg)}.table-primary{--bs-table-color: #000;--bs-table-bg: #d4e6f9;--bs-table-border-color: #bfcfe0;--bs-table-striped-bg: #c9dbed;--bs-table-striped-color: #000;--bs-table-active-bg: #bfcfe0;--bs-table-active-color: #000;--bs-table-hover-bg: #c4d5e6;--bs-table-hover-color: #000;color:var(--bs-table-color);border-color:var(--bs-table-border-color)}.table-secondary{--bs-table-color: #000;--bs-table-bg: #d6d8d9;--bs-table-border-color: #c1c2c3;--bs-table-striped-bg: #cbcdce;--bs-table-striped-color: #000;--bs-table-active-bg: #c1c2c3;--bs-table-active-color: #000;--bs-table-hover-bg: #c6c8c9;--bs-table-hover-color: #000;color:var(--bs-table-color);border-color:var(--bs-table-border-color)}.table-success{--bs-table-color: #000;--bs-table-bg: #d9f0d1;--bs-table-border-color: #c3d8bc;--bs-table-striped-bg: #cee4c7;--bs-table-striped-color: #000;--bs-table-active-bg: #c3d8bc;--bs-table-active-color: #000;--bs-table-hover-bg: #c9dec1;--bs-table-hover-color: #000;color:var(--bs-table-color);border-color:var(--bs-table-border-color)}.table-info{--bs-table-color: #000;--bs-table-bg: #ebddf1;--bs-table-border-color: #d4c7d9;--bs-table-striped-bg: #dfd2e5;--bs-table-striped-color: #000;--bs-table-active-bg: #d4c7d9;--bs-table-active-color: #000;--bs-table-hover-bg: #d9ccdf;--bs-table-hover-color: #000;color:var(--bs-table-color);border-color:var(--bs-table-border-color)}.table-warning{--bs-table-color: #000;--bs-table-bg: #ffe3d1;--bs-table-border-color: #e6ccbc;--bs-table-striped-bg: #f2d8c7;--bs-table-striped-color: #000;--bs-table-active-bg: #e6ccbc;--bs-table-active-color: #000;--bs-table-hover-bg: #ecd2c1;--bs-table-hover-color: #000;color:var(--bs-table-color);border-color:var(--bs-table-border-color)}.table-danger{--bs-table-color: #000;--bs-table-bg: #ffccd7;--bs-table-border-color: #e6b8c2;--bs-table-striped-bg: #f2c2cc;--bs-table-striped-color: #000;--bs-table-active-bg: #e6b8c2;--bs-table-active-color: #000;--bs-table-hover-bg: #ecbdc7;--bs-table-hover-color: #000;color:var(--bs-table-color);border-color:var(--bs-table-border-color)}.table-light{--bs-table-color: #000;--bs-table-bg: #f8f9fa;--bs-table-border-color: #dfe0e1;--bs-table-striped-bg: #ecedee;--bs-table-striped-color: #000;--bs-table-active-bg: #dfe0e1;--bs-table-active-color: #000;--bs-table-hover-bg: #e5e6e7;--bs-table-hover-color: #000;color:var(--bs-table-color);border-color:var(--bs-table-border-color)}.table-dark{--bs-table-color: #fff;--bs-table-bg: #343a40;--bs-table-border-color: #484e53;--bs-table-striped-bg: #3e444a;--bs-table-striped-color: #fff;--bs-table-active-bg: #484e53;--bs-table-active-color: #fff;--bs-table-hover-bg: #43494e;--bs-table-hover-color: #fff;color:var(--bs-table-color);border-color:var(--bs-table-border-color)}.table-responsive{overflow-x:auto;-webkit-overflow-scrolling:touch}@media(max-width: 575.98px){.table-responsive-sm{overflow-x:auto;-webkit-overflow-scrolling:touch}}@media(max-width: 767.98px){.table-responsive-md{overflow-x:auto;-webkit-overflow-scrolling:touch}}@media(max-width: 991.98px){.table-responsive-lg{overflow-x:auto;-webkit-overflow-scrolling:touch}}@media(max-width: 
1199.98px){.table-responsive-xl{overflow-x:auto;-webkit-overflow-scrolling:touch}}@media(max-width: 1399.98px){.table-responsive-xxl{overflow-x:auto;-webkit-overflow-scrolling:touch}}.form-label,.shiny-input-container .control-label{margin-bottom:.5rem}.col-form-label{padding-top:calc(0.375rem + 1px);padding-bottom:calc(0.375rem + 1px);margin-bottom:0;font-size:inherit;line-height:1.5}.col-form-label-lg{padding-top:calc(0.5rem + 1px);padding-bottom:calc(0.5rem + 1px);font-size:1.25rem}.col-form-label-sm{padding-top:calc(0.25rem + 1px);padding-bottom:calc(0.25rem + 1px);font-size:0.875rem}.form-text{margin-top:.25rem;font-size:0.875em;color:rgba(52,58,64,.75)}.form-control{display:block;width:100%;padding:.375rem .75rem;font-size:1rem;font-weight:400;line-height:1.5;color:#343a40;appearance:none;-webkit-appearance:none;-moz-appearance:none;-ms-appearance:none;-o-appearance:none;background-color:#fff;background-clip:padding-box;border:1px solid #dee2e6;border-radius:0;transition:border-color .15s ease-in-out,box-shadow .15s ease-in-out}@media(prefers-reduced-motion: reduce){.form-control{transition:none}}.form-control[type=file]{overflow:hidden}.form-control[type=file]:not(:disabled):not([readonly]){cursor:pointer}.form-control:focus{color:#343a40;background-color:#fff;border-color:#93c0f1;outline:0;box-shadow:0 0 0 .25rem rgba(39,128,227,.25)}.form-control::-webkit-date-and-time-value{min-width:85px;height:1.5em;margin:0}.form-control::-webkit-datetime-edit{display:block;padding:0}.form-control::placeholder{color:rgba(52,58,64,.75);opacity:1}.form-control:disabled{background-color:#e9ecef;opacity:1}.form-control::file-selector-button{padding:.375rem .75rem;margin:-0.375rem -0.75rem;margin-inline-end:.75rem;color:#343a40;background-color:#f8f9fa;pointer-events:none;border-color:inherit;border-style:solid;border-width:0;border-inline-end-width:1px;border-radius:0;transition:color .15s ease-in-out,background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out}@media(prefers-reduced-motion: reduce){.form-control::file-selector-button{transition:none}}.form-control:hover:not(:disabled):not([readonly])::file-selector-button{background-color:#e9ecef}.form-control-plaintext{display:block;width:100%;padding:.375rem 0;margin-bottom:0;line-height:1.5;color:#343a40;background-color:rgba(0,0,0,0);border:solid rgba(0,0,0,0);border-width:1px 0}.form-control-plaintext:focus{outline:0}.form-control-plaintext.form-control-sm,.form-control-plaintext.form-control-lg{padding-right:0;padding-left:0}.form-control-sm{min-height:calc(1.5em + 0.5rem + calc(1px * 2));padding:.25rem .5rem;font-size:0.875rem}.form-control-sm::file-selector-button{padding:.25rem .5rem;margin:-0.25rem -0.5rem;margin-inline-end:.5rem}.form-control-lg{min-height:calc(1.5em + 1rem + calc(1px * 2));padding:.5rem 1rem;font-size:1.25rem}.form-control-lg::file-selector-button{padding:.5rem 1rem;margin:-0.5rem -1rem;margin-inline-end:1rem}textarea.form-control{min-height:calc(1.5em + 0.75rem + calc(1px * 2))}textarea.form-control-sm{min-height:calc(1.5em + 0.5rem + calc(1px * 2))}textarea.form-control-lg{min-height:calc(1.5em + 1rem + calc(1px * 2))}.form-control-color{width:3rem;height:calc(1.5em + 0.75rem + calc(1px * 2));padding:.375rem}.form-control-color:not(:disabled):not([readonly]){cursor:pointer}.form-control-color::-moz-color-swatch{border:0 !important}.form-control-color::-webkit-color-swatch{border:0 !important}.form-control-color.form-control-sm{height:calc(1.5em + 0.5rem + calc(1px * 
2))}.form-control-color.form-control-lg{height:calc(1.5em + 1rem + calc(1px * 2))}.form-select{--bs-form-select-bg-img: url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 16 16'%3e%3cpath fill='none' stroke='%23343a40' stroke-linecap='round' stroke-linejoin='round' stroke-width='2' d='m2 5 6 6 6-6'/%3e%3c/svg%3e");display:block;width:100%;padding:.375rem 2.25rem .375rem .75rem;font-size:1rem;font-weight:400;line-height:1.5;color:#343a40;appearance:none;-webkit-appearance:none;-moz-appearance:none;-ms-appearance:none;-o-appearance:none;background-color:#fff;background-image:var(--bs-form-select-bg-img),var(--bs-form-select-bg-icon, none);background-repeat:no-repeat;background-position:right .75rem center;background-size:16px 12px;border:1px solid #dee2e6;border-radius:0;transition:border-color .15s ease-in-out,box-shadow .15s ease-in-out}@media(prefers-reduced-motion: reduce){.form-select{transition:none}}.form-select:focus{border-color:#93c0f1;outline:0;box-shadow:0 0 0 .25rem rgba(39,128,227,.25)}.form-select[multiple],.form-select[size]:not([size="1"]){padding-right:.75rem;background-image:none}.form-select:disabled{background-color:#e9ecef}.form-select:-moz-focusring{color:rgba(0,0,0,0);text-shadow:0 0 0 #343a40}.form-select-sm{padding-top:.25rem;padding-bottom:.25rem;padding-left:.5rem;font-size:0.875rem}.form-select-lg{padding-top:.5rem;padding-bottom:.5rem;padding-left:1rem;font-size:1.25rem}[data-bs-theme=dark] .form-select{--bs-form-select-bg-img: url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 16 16'%3e%3cpath fill='none' stroke='%23dee2e6' stroke-linecap='round' stroke-linejoin='round' stroke-width='2' d='m2 5 6 6 6-6'/%3e%3c/svg%3e")}.form-check,.shiny-input-container .checkbox,.shiny-input-container .radio{display:block;min-height:1.5rem;padding-left:0;margin-bottom:.125rem}.form-check .form-check-input,.form-check .shiny-input-container .checkbox input,.form-check .shiny-input-container .radio input,.shiny-input-container .checkbox .form-check-input,.shiny-input-container .checkbox .shiny-input-container .checkbox input,.shiny-input-container .checkbox .shiny-input-container .radio input,.shiny-input-container .radio .form-check-input,.shiny-input-container .radio .shiny-input-container .checkbox input,.shiny-input-container .radio .shiny-input-container .radio input{float:left;margin-left:0}.form-check-reverse{padding-right:0;padding-left:0;text-align:right}.form-check-reverse .form-check-input{float:right;margin-right:0;margin-left:0}.form-check-input,.shiny-input-container .checkbox input,.shiny-input-container .checkbox-inline input,.shiny-input-container .radio input,.shiny-input-container .radio-inline input{--bs-form-check-bg: #fff;width:1em;height:1em;margin-top:.25em;vertical-align:top;appearance:none;-webkit-appearance:none;-moz-appearance:none;-ms-appearance:none;-o-appearance:none;background-color:var(--bs-form-check-bg);background-image:var(--bs-form-check-bg-image);background-repeat:no-repeat;background-position:center;background-size:contain;border:1px solid #dee2e6;print-color-adjust:exact}.form-check-input[type=radio],.shiny-input-container .checkbox input[type=radio],.shiny-input-container .checkbox-inline input[type=radio],.shiny-input-container .radio input[type=radio],.shiny-input-container .radio-inline input[type=radio]{border-radius:50%}.form-check-input:active,.shiny-input-container .checkbox input:active,.shiny-input-container .checkbox-inline input:active,.shiny-input-container .radio 
input:active,.shiny-input-container .radio-inline input:active{filter:brightness(90%)}.form-check-input:focus,.shiny-input-container .checkbox input:focus,.shiny-input-container .checkbox-inline input:focus,.shiny-input-container .radio input:focus,.shiny-input-container .radio-inline input:focus{border-color:#93c0f1;outline:0;box-shadow:0 0 0 .25rem rgba(39,128,227,.25)}.form-check-input:checked,.shiny-input-container .checkbox input:checked,.shiny-input-container .checkbox-inline input:checked,.shiny-input-container .radio input:checked,.shiny-input-container .radio-inline input:checked{background-color:#2780e3;border-color:#2780e3}.form-check-input:checked[type=checkbox],.shiny-input-container .checkbox input:checked[type=checkbox],.shiny-input-container .checkbox-inline input:checked[type=checkbox],.shiny-input-container .radio input:checked[type=checkbox],.shiny-input-container .radio-inline input:checked[type=checkbox]{--bs-form-check-bg-image: url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 20 20'%3e%3cpath fill='none' stroke='%23fff' stroke-linecap='round' stroke-linejoin='round' stroke-width='3' d='m6 10 3 3 6-6'/%3e%3c/svg%3e")}.form-check-input:checked[type=radio],.shiny-input-container .checkbox input:checked[type=radio],.shiny-input-container .checkbox-inline input:checked[type=radio],.shiny-input-container .radio input:checked[type=radio],.shiny-input-container .radio-inline input:checked[type=radio]{--bs-form-check-bg-image: url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='-4 -4 8 8'%3e%3ccircle r='2' fill='%23fff'/%3e%3c/svg%3e")}.form-check-input[type=checkbox]:indeterminate,.shiny-input-container .checkbox input[type=checkbox]:indeterminate,.shiny-input-container .checkbox-inline input[type=checkbox]:indeterminate,.shiny-input-container .radio input[type=checkbox]:indeterminate,.shiny-input-container .radio-inline input[type=checkbox]:indeterminate{background-color:#2780e3;border-color:#2780e3;--bs-form-check-bg-image: url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 20 20'%3e%3cpath fill='none' stroke='%23fff' stroke-linecap='round' stroke-linejoin='round' stroke-width='3' d='M6 10h8'/%3e%3c/svg%3e")}.form-check-input:disabled,.shiny-input-container .checkbox input:disabled,.shiny-input-container .checkbox-inline input:disabled,.shiny-input-container .radio input:disabled,.shiny-input-container .radio-inline input:disabled{pointer-events:none;filter:none;opacity:.5}.form-check-input[disabled]~.form-check-label,.form-check-input[disabled]~span,.form-check-input:disabled~.form-check-label,.form-check-input:disabled~span,.shiny-input-container .checkbox input[disabled]~.form-check-label,.shiny-input-container .checkbox input[disabled]~span,.shiny-input-container .checkbox input:disabled~.form-check-label,.shiny-input-container .checkbox input:disabled~span,.shiny-input-container .checkbox-inline input[disabled]~.form-check-label,.shiny-input-container .checkbox-inline input[disabled]~span,.shiny-input-container .checkbox-inline input:disabled~.form-check-label,.shiny-input-container .checkbox-inline input:disabled~span,.shiny-input-container .radio input[disabled]~.form-check-label,.shiny-input-container .radio input[disabled]~span,.shiny-input-container .radio input:disabled~.form-check-label,.shiny-input-container .radio input:disabled~span,.shiny-input-container .radio-inline input[disabled]~.form-check-label,.shiny-input-container .radio-inline 
input[disabled]~span,.shiny-input-container .radio-inline input:disabled~.form-check-label,.shiny-input-container .radio-inline input:disabled~span{cursor:default;opacity:.5}.form-check-label,.shiny-input-container .checkbox label,.shiny-input-container .checkbox-inline label,.shiny-input-container .radio label,.shiny-input-container .radio-inline label{cursor:pointer}.form-switch{padding-left:2.5em}.form-switch .form-check-input{--bs-form-switch-bg: url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='-4 -4 8 8'%3e%3ccircle r='3' fill='rgba%280, 0, 0, 0.25%29'/%3e%3c/svg%3e");width:2em;margin-left:-2.5em;background-image:var(--bs-form-switch-bg);background-position:left center;transition:background-position .15s ease-in-out}@media(prefers-reduced-motion: reduce){.form-switch .form-check-input{transition:none}}.form-switch .form-check-input:focus{--bs-form-switch-bg: url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='-4 -4 8 8'%3e%3ccircle r='3' fill='%2393c0f1'/%3e%3c/svg%3e")}.form-switch .form-check-input:checked{background-position:right center;--bs-form-switch-bg: url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='-4 -4 8 8'%3e%3ccircle r='3' fill='%23fff'/%3e%3c/svg%3e")}.form-switch.form-check-reverse{padding-right:2.5em;padding-left:0}.form-switch.form-check-reverse .form-check-input{margin-right:-2.5em;margin-left:0}.form-check-inline{display:inline-block;margin-right:1rem}.btn-check{position:absolute;clip:rect(0, 0, 0, 0);pointer-events:none}.btn-check[disabled]+.btn,.btn-check:disabled+.btn{pointer-events:none;filter:none;opacity:.65}[data-bs-theme=dark] .form-switch .form-check-input:not(:checked):not(:focus){--bs-form-switch-bg: url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='-4 -4 8 8'%3e%3ccircle r='3' fill='rgba%28255, 255, 255, 0.25%29'/%3e%3c/svg%3e")}.form-range{width:100%;height:1.5rem;padding:0;appearance:none;-webkit-appearance:none;-moz-appearance:none;-ms-appearance:none;-o-appearance:none;background-color:rgba(0,0,0,0)}.form-range:focus{outline:0}.form-range:focus::-webkit-slider-thumb{box-shadow:0 0 0 1px #fff,0 0 0 .25rem rgba(39,128,227,.25)}.form-range:focus::-moz-range-thumb{box-shadow:0 0 0 1px #fff,0 0 0 .25rem rgba(39,128,227,.25)}.form-range::-moz-focus-outer{border:0}.form-range::-webkit-slider-thumb{width:1rem;height:1rem;margin-top:-0.25rem;appearance:none;-webkit-appearance:none;-moz-appearance:none;-ms-appearance:none;-o-appearance:none;background-color:#2780e3;border:0;transition:background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out}@media(prefers-reduced-motion: reduce){.form-range::-webkit-slider-thumb{transition:none}}.form-range::-webkit-slider-thumb:active{background-color:#bed9f7}.form-range::-webkit-slider-runnable-track{width:100%;height:.5rem;color:rgba(0,0,0,0);cursor:pointer;background-color:#f8f9fa;border-color:rgba(0,0,0,0)}.form-range::-moz-range-thumb{width:1rem;height:1rem;appearance:none;-webkit-appearance:none;-moz-appearance:none;-ms-appearance:none;-o-appearance:none;background-color:#2780e3;border:0;transition:background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out}@media(prefers-reduced-motion: 
reduce){.form-range::-moz-range-thumb{transition:none}}.form-range::-moz-range-thumb:active{background-color:#bed9f7}.form-range::-moz-range-track{width:100%;height:.5rem;color:rgba(0,0,0,0);cursor:pointer;background-color:#f8f9fa;border-color:rgba(0,0,0,0)}.form-range:disabled{pointer-events:none}.form-range:disabled::-webkit-slider-thumb{background-color:rgba(52,58,64,.75)}.form-range:disabled::-moz-range-thumb{background-color:rgba(52,58,64,.75)}.form-floating{position:relative}.form-floating>.form-control,.form-floating>.form-control-plaintext,.form-floating>.form-select{height:calc(3.5rem + calc(1px * 2));min-height:calc(3.5rem + calc(1px * 2));line-height:1.25}.form-floating>label{position:absolute;top:0;left:0;z-index:2;height:100%;padding:1rem .75rem;overflow:hidden;text-align:start;text-overflow:ellipsis;white-space:nowrap;pointer-events:none;border:1px solid rgba(0,0,0,0);transform-origin:0 0;transition:opacity .1s ease-in-out,transform .1s ease-in-out}@media(prefers-reduced-motion: reduce){.form-floating>label{transition:none}}.form-floating>.form-control,.form-floating>.form-control-plaintext{padding:1rem .75rem}.form-floating>.form-control::placeholder,.form-floating>.form-control-plaintext::placeholder{color:rgba(0,0,0,0)}.form-floating>.form-control:focus,.form-floating>.form-control:not(:placeholder-shown),.form-floating>.form-control-plaintext:focus,.form-floating>.form-control-plaintext:not(:placeholder-shown){padding-top:1.625rem;padding-bottom:.625rem}.form-floating>.form-control:-webkit-autofill,.form-floating>.form-control-plaintext:-webkit-autofill{padding-top:1.625rem;padding-bottom:.625rem}.form-floating>.form-select{padding-top:1.625rem;padding-bottom:.625rem}.form-floating>.form-control:focus~label,.form-floating>.form-control:not(:placeholder-shown)~label,.form-floating>.form-control-plaintext~label,.form-floating>.form-select~label{color:rgba(var(--bs-body-color-rgb), 0.65);transform:scale(0.85) translateY(-0.5rem) translateX(0.15rem)}.form-floating>.form-control:focus~label::after,.form-floating>.form-control:not(:placeholder-shown)~label::after,.form-floating>.form-control-plaintext~label::after,.form-floating>.form-select~label::after{position:absolute;inset:1rem .375rem;z-index:-1;height:1.5em;content:"";background-color:#fff}.form-floating>.form-control:-webkit-autofill~label{color:rgba(var(--bs-body-color-rgb), 0.65);transform:scale(0.85) translateY(-0.5rem) translateX(0.15rem)}.form-floating>.form-control-plaintext~label{border-width:1px 0}.form-floating>:disabled~label,.form-floating>.form-control:disabled~label{color:#6c757d}.form-floating>:disabled~label::after,.form-floating>.form-control:disabled~label::after{background-color:#e9ecef}.input-group{position:relative;display:flex;display:-webkit-flex;flex-wrap:wrap;-webkit-flex-wrap:wrap;align-items:stretch;-webkit-align-items:stretch;width:100%}.input-group>.form-control,.input-group>.form-select,.input-group>.form-floating{position:relative;flex:1 1 auto;-webkit-flex:1 1 auto;width:1%;min-width:0}.input-group>.form-control:focus,.input-group>.form-select:focus,.input-group>.form-floating:focus-within{z-index:5}.input-group .btn{position:relative;z-index:2}.input-group .btn:focus{z-index:5}.input-group-text{display:flex;display:-webkit-flex;align-items:center;-webkit-align-items:center;padding:.375rem .75rem;font-size:1rem;font-weight:400;line-height:1.5;color:#343a40;text-align:center;white-space:nowrap;background-color:#f8f9fa;border:1px solid 
#dee2e6}.input-group-lg>.form-control,.input-group-lg>.form-select,.input-group-lg>.input-group-text,.input-group-lg>.btn{padding:.5rem 1rem;font-size:1.25rem}.input-group-sm>.form-control,.input-group-sm>.form-select,.input-group-sm>.input-group-text,.input-group-sm>.btn{padding:.25rem .5rem;font-size:0.875rem}.input-group-lg>.form-select,.input-group-sm>.form-select{padding-right:3rem}.input-group>:not(:first-child):not(.dropdown-menu):not(.valid-tooltip):not(.valid-feedback):not(.invalid-tooltip):not(.invalid-feedback){margin-left:calc(1px*-1)}.valid-feedback{display:none;width:100%;margin-top:.25rem;font-size:0.875em;color:#3fb618}.valid-tooltip{position:absolute;top:100%;z-index:5;display:none;max-width:100%;padding:.25rem .5rem;margin-top:.1rem;font-size:0.875rem;color:#fff;background-color:#3fb618}.was-validated :valid~.valid-feedback,.was-validated :valid~.valid-tooltip,.is-valid~.valid-feedback,.is-valid~.valid-tooltip{display:block}.was-validated .form-control:valid,.form-control.is-valid{border-color:#3fb618;padding-right:calc(1.5em + 0.75rem);background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 8 8'%3e%3cpath fill='%233fb618' d='M2.3 6.73.6 4.53c-.4-1.04.46-1.4 1.1-.8l1.1 1.4 3.4-3.8c.6-.63 1.6-.27 1.2.7l-4 4.6c-.43.5-.8.4-1.1.1z'/%3e%3c/svg%3e");background-repeat:no-repeat;background-position:right calc(0.375em + 0.1875rem) center;background-size:calc(0.75em + 0.375rem) calc(0.75em + 0.375rem)}.was-validated .form-control:valid:focus,.form-control.is-valid:focus{border-color:#3fb618;box-shadow:0 0 0 .25rem rgba(63,182,24,.25)}.was-validated textarea.form-control:valid,textarea.form-control.is-valid{padding-right:calc(1.5em + 0.75rem);background-position:top calc(0.375em + 0.1875rem) right calc(0.375em + 0.1875rem)}.was-validated .form-select:valid,.form-select.is-valid{border-color:#3fb618}.was-validated .form-select:valid:not([multiple]):not([size]),.was-validated .form-select:valid:not([multiple])[size="1"],.form-select.is-valid:not([multiple]):not([size]),.form-select.is-valid:not([multiple])[size="1"]{--bs-form-select-bg-icon: url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 8 8'%3e%3cpath fill='%233fb618' d='M2.3 6.73.6 4.53c-.4-1.04.46-1.4 1.1-.8l1.1 1.4 3.4-3.8c.6-.63 1.6-.27 1.2.7l-4 4.6c-.43.5-.8.4-1.1.1z'/%3e%3c/svg%3e");padding-right:4.125rem;background-position:right .75rem center,center right 2.25rem;background-size:16px 12px,calc(0.75em + 0.375rem) calc(0.75em + 0.375rem)}.was-validated .form-select:valid:focus,.form-select.is-valid:focus{border-color:#3fb618;box-shadow:0 0 0 .25rem rgba(63,182,24,.25)}.was-validated .form-control-color:valid,.form-control-color.is-valid{width:calc(3rem + calc(1.5em + 0.75rem))}.was-validated .form-check-input:valid,.form-check-input.is-valid{border-color:#3fb618}.was-validated .form-check-input:valid:checked,.form-check-input.is-valid:checked{background-color:#3fb618}.was-validated .form-check-input:valid:focus,.form-check-input.is-valid:focus{box-shadow:0 0 0 .25rem rgba(63,182,24,.25)}.was-validated .form-check-input:valid~.form-check-label,.form-check-input.is-valid~.form-check-label{color:#3fb618}.form-check-inline .form-check-input~.valid-feedback{margin-left:.5em}.was-validated .input-group>.form-control:not(:focus):valid,.input-group>.form-control:not(:focus).is-valid,.was-validated .input-group>.form-select:not(:focus):valid,.input-group>.form-select:not(:focus).is-valid,.was-validated 
.input-group>.form-floating:not(:focus-within):valid,.input-group>.form-floating:not(:focus-within).is-valid{z-index:3}.invalid-feedback{display:none;width:100%;margin-top:.25rem;font-size:0.875em;color:#ff0039}.invalid-tooltip{position:absolute;top:100%;z-index:5;display:none;max-width:100%;padding:.25rem .5rem;margin-top:.1rem;font-size:0.875rem;color:#fff;background-color:#ff0039}.was-validated :invalid~.invalid-feedback,.was-validated :invalid~.invalid-tooltip,.is-invalid~.invalid-feedback,.is-invalid~.invalid-tooltip{display:block}.was-validated .form-control:invalid,.form-control.is-invalid{border-color:#ff0039;padding-right:calc(1.5em + 0.75rem);background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 12 12' width='12' height='12' fill='none' stroke='%23ff0039'%3e%3ccircle cx='6' cy='6' r='4.5'/%3e%3cpath stroke-linejoin='round' d='M5.8 3.6h.4L6 6.5z'/%3e%3ccircle cx='6' cy='8.2' r='.6' fill='%23ff0039' stroke='none'/%3e%3c/svg%3e");background-repeat:no-repeat;background-position:right calc(0.375em + 0.1875rem) center;background-size:calc(0.75em + 0.375rem) calc(0.75em + 0.375rem)}.was-validated .form-control:invalid:focus,.form-control.is-invalid:focus{border-color:#ff0039;box-shadow:0 0 0 .25rem rgba(255,0,57,.25)}.was-validated textarea.form-control:invalid,textarea.form-control.is-invalid{padding-right:calc(1.5em + 0.75rem);background-position:top calc(0.375em + 0.1875rem) right calc(0.375em + 0.1875rem)}.was-validated .form-select:invalid,.form-select.is-invalid{border-color:#ff0039}.was-validated .form-select:invalid:not([multiple]):not([size]),.was-validated .form-select:invalid:not([multiple])[size="1"],.form-select.is-invalid:not([multiple]):not([size]),.form-select.is-invalid:not([multiple])[size="1"]{--bs-form-select-bg-icon: url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 12 12' width='12' height='12' fill='none' stroke='%23ff0039'%3e%3ccircle cx='6' cy='6' r='4.5'/%3e%3cpath stroke-linejoin='round' d='M5.8 3.6h.4L6 6.5z'/%3e%3ccircle cx='6' cy='8.2' r='.6' fill='%23ff0039' stroke='none'/%3e%3c/svg%3e");padding-right:4.125rem;background-position:right .75rem center,center right 2.25rem;background-size:16px 12px,calc(0.75em + 0.375rem) calc(0.75em + 0.375rem)}.was-validated .form-select:invalid:focus,.form-select.is-invalid:focus{border-color:#ff0039;box-shadow:0 0 0 .25rem rgba(255,0,57,.25)}.was-validated .form-control-color:invalid,.form-control-color.is-invalid{width:calc(3rem + calc(1.5em + 0.75rem))}.was-validated .form-check-input:invalid,.form-check-input.is-invalid{border-color:#ff0039}.was-validated .form-check-input:invalid:checked,.form-check-input.is-invalid:checked{background-color:#ff0039}.was-validated .form-check-input:invalid:focus,.form-check-input.is-invalid:focus{box-shadow:0 0 0 .25rem rgba(255,0,57,.25)}.was-validated .form-check-input:invalid~.form-check-label,.form-check-input.is-invalid~.form-check-label{color:#ff0039}.form-check-inline .form-check-input~.invalid-feedback{margin-left:.5em}.was-validated .input-group>.form-control:not(:focus):invalid,.input-group>.form-control:not(:focus).is-invalid,.was-validated .input-group>.form-select:not(:focus):invalid,.input-group>.form-select:not(:focus).is-invalid,.was-validated .input-group>.form-floating:not(:focus-within):invalid,.input-group>.form-floating:not(:focus-within).is-invalid{z-index:4}.btn{--bs-btn-padding-x: 0.75rem;--bs-btn-padding-y: 0.375rem;--bs-btn-font-family: ;--bs-btn-font-size:1rem;--bs-btn-font-weight: 
400;--bs-btn-line-height: 1.5;--bs-btn-color: #343a40;--bs-btn-bg: transparent;--bs-btn-border-width: 1px;--bs-btn-border-color: transparent;--bs-btn-border-radius: 0.25rem;--bs-btn-hover-border-color: transparent;--bs-btn-box-shadow: inset 0 1px 0 rgba(255, 255, 255, 0.15), 0 1px 1px rgba(0, 0, 0, 0.075);--bs-btn-disabled-opacity: 0.65;--bs-btn-focus-box-shadow: 0 0 0 0.25rem rgba(var(--bs-btn-focus-shadow-rgb), .5);display:inline-block;padding:var(--bs-btn-padding-y) var(--bs-btn-padding-x);font-family:var(--bs-btn-font-family);font-size:var(--bs-btn-font-size);font-weight:var(--bs-btn-font-weight);line-height:var(--bs-btn-line-height);color:var(--bs-btn-color);text-align:center;text-decoration:none;-webkit-text-decoration:none;-moz-text-decoration:none;-ms-text-decoration:none;-o-text-decoration:none;vertical-align:middle;cursor:pointer;user-select:none;-webkit-user-select:none;-moz-user-select:none;-ms-user-select:none;-o-user-select:none;border:var(--bs-btn-border-width) solid var(--bs-btn-border-color);background-color:var(--bs-btn-bg);transition:color .15s ease-in-out,background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out}@media(prefers-reduced-motion: reduce){.btn{transition:none}}.btn:hover{color:var(--bs-btn-hover-color);background-color:var(--bs-btn-hover-bg);border-color:var(--bs-btn-hover-border-color)}.btn-check+.btn:hover{color:var(--bs-btn-color);background-color:var(--bs-btn-bg);border-color:var(--bs-btn-border-color)}.btn:focus-visible{color:var(--bs-btn-hover-color);background-color:var(--bs-btn-hover-bg);border-color:var(--bs-btn-hover-border-color);outline:0;box-shadow:var(--bs-btn-focus-box-shadow)}.btn-check:focus-visible+.btn{border-color:var(--bs-btn-hover-border-color);outline:0;box-shadow:var(--bs-btn-focus-box-shadow)}.btn-check:checked+.btn,:not(.btn-check)+.btn:active,.btn:first-child:active,.btn.active,.btn.show{color:var(--bs-btn-active-color);background-color:var(--bs-btn-active-bg);border-color:var(--bs-btn-active-border-color)}.btn-check:checked+.btn:focus-visible,:not(.btn-check)+.btn:active:focus-visible,.btn:first-child:active:focus-visible,.btn.active:focus-visible,.btn.show:focus-visible{box-shadow:var(--bs-btn-focus-box-shadow)}.btn:disabled,.btn.disabled,fieldset:disabled .btn{color:var(--bs-btn-disabled-color);pointer-events:none;background-color:var(--bs-btn-disabled-bg);border-color:var(--bs-btn-disabled-border-color);opacity:var(--bs-btn-disabled-opacity)}.btn-default{--bs-btn-color: #fff;--bs-btn-bg: #343a40;--bs-btn-border-color: #343a40;--bs-btn-hover-color: #fff;--bs-btn-hover-bg: #2c3136;--bs-btn-hover-border-color: #2a2e33;--bs-btn-focus-shadow-rgb: 82, 88, 93;--bs-btn-active-color: #fff;--bs-btn-active-bg: #2a2e33;--bs-btn-active-border-color: #272c30;--bs-btn-active-shadow: inset 0 3px 5px rgba(0, 0, 0, 0.125);--bs-btn-disabled-color: #fff;--bs-btn-disabled-bg: #343a40;--bs-btn-disabled-border-color: #343a40}.btn-primary{--bs-btn-color: #fff;--bs-btn-bg: #2780e3;--bs-btn-border-color: #2780e3;--bs-btn-hover-color: #fff;--bs-btn-hover-bg: #216dc1;--bs-btn-hover-border-color: #1f66b6;--bs-btn-focus-shadow-rgb: 71, 147, 231;--bs-btn-active-color: #fff;--bs-btn-active-bg: #1f66b6;--bs-btn-active-border-color: #1d60aa;--bs-btn-active-shadow: inset 0 3px 5px rgba(0, 0, 0, 0.125);--bs-btn-disabled-color: #fff;--bs-btn-disabled-bg: #2780e3;--bs-btn-disabled-border-color: #2780e3}.btn-secondary{--bs-btn-color: #fff;--bs-btn-bg: #343a40;--bs-btn-border-color: #343a40;--bs-btn-hover-color: 
#fff;--bs-btn-hover-bg: #2c3136;--bs-btn-hover-border-color: #2a2e33;--bs-btn-focus-shadow-rgb: 82, 88, 93;--bs-btn-active-color: #fff;--bs-btn-active-bg: #2a2e33;--bs-btn-active-border-color: #272c30;--bs-btn-active-shadow: inset 0 3px 5px rgba(0, 0, 0, 0.125);--bs-btn-disabled-color: #fff;--bs-btn-disabled-bg: #343a40;--bs-btn-disabled-border-color: #343a40}.btn-success{--bs-btn-color: #fff;--bs-btn-bg: #3fb618;--bs-btn-border-color: #3fb618;--bs-btn-hover-color: #fff;--bs-btn-hover-bg: #369b14;--bs-btn-hover-border-color: #329213;--bs-btn-focus-shadow-rgb: 92, 193, 59;--bs-btn-active-color: #fff;--bs-btn-active-bg: #329213;--bs-btn-active-border-color: #2f8912;--bs-btn-active-shadow: inset 0 3px 5px rgba(0, 0, 0, 0.125);--bs-btn-disabled-color: #fff;--bs-btn-disabled-bg: #3fb618;--bs-btn-disabled-border-color: #3fb618}.btn-info{--bs-btn-color: #fff;--bs-btn-bg: #9954bb;--bs-btn-border-color: #9954bb;--bs-btn-hover-color: #fff;--bs-btn-hover-bg: #82479f;--bs-btn-hover-border-color: #7a4396;--bs-btn-focus-shadow-rgb: 168, 110, 197;--bs-btn-active-color: #fff;--bs-btn-active-bg: #7a4396;--bs-btn-active-border-color: #733f8c;--bs-btn-active-shadow: inset 0 3px 5px rgba(0, 0, 0, 0.125);--bs-btn-disabled-color: #fff;--bs-btn-disabled-bg: #9954bb;--bs-btn-disabled-border-color: #9954bb}.btn-warning{--bs-btn-color: #fff;--bs-btn-bg: #ff7518;--bs-btn-border-color: #ff7518;--bs-btn-hover-color: #fff;--bs-btn-hover-bg: #d96314;--bs-btn-hover-border-color: #cc5e13;--bs-btn-focus-shadow-rgb: 255, 138, 59;--bs-btn-active-color: #fff;--bs-btn-active-bg: #cc5e13;--bs-btn-active-border-color: #bf5812;--bs-btn-active-shadow: inset 0 3px 5px rgba(0, 0, 0, 0.125);--bs-btn-disabled-color: #fff;--bs-btn-disabled-bg: #ff7518;--bs-btn-disabled-border-color: #ff7518}.btn-danger{--bs-btn-color: #fff;--bs-btn-bg: #ff0039;--bs-btn-border-color: #ff0039;--bs-btn-hover-color: #fff;--bs-btn-hover-bg: #d90030;--bs-btn-hover-border-color: #cc002e;--bs-btn-focus-shadow-rgb: 255, 38, 87;--bs-btn-active-color: #fff;--bs-btn-active-bg: #cc002e;--bs-btn-active-border-color: #bf002b;--bs-btn-active-shadow: inset 0 3px 5px rgba(0, 0, 0, 0.125);--bs-btn-disabled-color: #fff;--bs-btn-disabled-bg: #ff0039;--bs-btn-disabled-border-color: #ff0039}.btn-light{--bs-btn-color: #000;--bs-btn-bg: #f8f9fa;--bs-btn-border-color: #f8f9fa;--bs-btn-hover-color: #000;--bs-btn-hover-bg: #d3d4d5;--bs-btn-hover-border-color: #c6c7c8;--bs-btn-focus-shadow-rgb: 211, 212, 213;--bs-btn-active-color: #000;--bs-btn-active-bg: #c6c7c8;--bs-btn-active-border-color: #babbbc;--bs-btn-active-shadow: inset 0 3px 5px rgba(0, 0, 0, 0.125);--bs-btn-disabled-color: #000;--bs-btn-disabled-bg: #f8f9fa;--bs-btn-disabled-border-color: #f8f9fa}.btn-dark{--bs-btn-color: #fff;--bs-btn-bg: #343a40;--bs-btn-border-color: #343a40;--bs-btn-hover-color: #fff;--bs-btn-hover-bg: #52585d;--bs-btn-hover-border-color: #484e53;--bs-btn-focus-shadow-rgb: 82, 88, 93;--bs-btn-active-color: #fff;--bs-btn-active-bg: #5d6166;--bs-btn-active-border-color: #484e53;--bs-btn-active-shadow: inset 0 3px 5px rgba(0, 0, 0, 0.125);--bs-btn-disabled-color: #fff;--bs-btn-disabled-bg: #343a40;--bs-btn-disabled-border-color: #343a40}.btn-outline-default{--bs-btn-color: #343a40;--bs-btn-border-color: #343a40;--bs-btn-hover-color: #fff;--bs-btn-hover-bg: #343a40;--bs-btn-hover-border-color: #343a40;--bs-btn-focus-shadow-rgb: 52, 58, 64;--bs-btn-active-color: #fff;--bs-btn-active-bg: #343a40;--bs-btn-active-border-color: #343a40;--bs-btn-active-shadow: inset 0 3px 5px rgba(0, 0, 0, 
0.125);--bs-btn-disabled-color: #343a40;--bs-btn-disabled-bg: transparent;--bs-btn-disabled-border-color: #343a40;--bs-btn-bg: transparent;--bs-gradient: none}.btn-outline-primary{--bs-btn-color: #2780e3;--bs-btn-border-color: #2780e3;--bs-btn-hover-color: #fff;--bs-btn-hover-bg: #2780e3;--bs-btn-hover-border-color: #2780e3;--bs-btn-focus-shadow-rgb: 39, 128, 227;--bs-btn-active-color: #fff;--bs-btn-active-bg: #2780e3;--bs-btn-active-border-color: #2780e3;--bs-btn-active-shadow: inset 0 3px 5px rgba(0, 0, 0, 0.125);--bs-btn-disabled-color: #2780e3;--bs-btn-disabled-bg: transparent;--bs-btn-disabled-border-color: #2780e3;--bs-btn-bg: transparent;--bs-gradient: none}.btn-outline-secondary{--bs-btn-color: #343a40;--bs-btn-border-color: #343a40;--bs-btn-hover-color: #fff;--bs-btn-hover-bg: #343a40;--bs-btn-hover-border-color: #343a40;--bs-btn-focus-shadow-rgb: 52, 58, 64;--bs-btn-active-color: #fff;--bs-btn-active-bg: #343a40;--bs-btn-active-border-color: #343a40;--bs-btn-active-shadow: inset 0 3px 5px rgba(0, 0, 0, 0.125);--bs-btn-disabled-color: #343a40;--bs-btn-disabled-bg: transparent;--bs-btn-disabled-border-color: #343a40;--bs-btn-bg: transparent;--bs-gradient: none}.btn-outline-success{--bs-btn-color: #3fb618;--bs-btn-border-color: #3fb618;--bs-btn-hover-color: #fff;--bs-btn-hover-bg: #3fb618;--bs-btn-hover-border-color: #3fb618;--bs-btn-focus-shadow-rgb: 63, 182, 24;--bs-btn-active-color: #fff;--bs-btn-active-bg: #3fb618;--bs-btn-active-border-color: #3fb618;--bs-btn-active-shadow: inset 0 3px 5px rgba(0, 0, 0, 0.125);--bs-btn-disabled-color: #3fb618;--bs-btn-disabled-bg: transparent;--bs-btn-disabled-border-color: #3fb618;--bs-btn-bg: transparent;--bs-gradient: none}.btn-outline-info{--bs-btn-color: #9954bb;--bs-btn-border-color: #9954bb;--bs-btn-hover-color: #fff;--bs-btn-hover-bg: #9954bb;--bs-btn-hover-border-color: #9954bb;--bs-btn-focus-shadow-rgb: 153, 84, 187;--bs-btn-active-color: #fff;--bs-btn-active-bg: #9954bb;--bs-btn-active-border-color: #9954bb;--bs-btn-active-shadow: inset 0 3px 5px rgba(0, 0, 0, 0.125);--bs-btn-disabled-color: #9954bb;--bs-btn-disabled-bg: transparent;--bs-btn-disabled-border-color: #9954bb;--bs-btn-bg: transparent;--bs-gradient: none}.btn-outline-warning{--bs-btn-color: #ff7518;--bs-btn-border-color: #ff7518;--bs-btn-hover-color: #fff;--bs-btn-hover-bg: #ff7518;--bs-btn-hover-border-color: #ff7518;--bs-btn-focus-shadow-rgb: 255, 117, 24;--bs-btn-active-color: #fff;--bs-btn-active-bg: #ff7518;--bs-btn-active-border-color: #ff7518;--bs-btn-active-shadow: inset 0 3px 5px rgba(0, 0, 0, 0.125);--bs-btn-disabled-color: #ff7518;--bs-btn-disabled-bg: transparent;--bs-btn-disabled-border-color: #ff7518;--bs-btn-bg: transparent;--bs-gradient: none}.btn-outline-danger{--bs-btn-color: #ff0039;--bs-btn-border-color: #ff0039;--bs-btn-hover-color: #fff;--bs-btn-hover-bg: #ff0039;--bs-btn-hover-border-color: #ff0039;--bs-btn-focus-shadow-rgb: 255, 0, 57;--bs-btn-active-color: #fff;--bs-btn-active-bg: #ff0039;--bs-btn-active-border-color: #ff0039;--bs-btn-active-shadow: inset 0 3px 5px rgba(0, 0, 0, 0.125);--bs-btn-disabled-color: #ff0039;--bs-btn-disabled-bg: transparent;--bs-btn-disabled-border-color: #ff0039;--bs-btn-bg: transparent;--bs-gradient: none}.btn-outline-light{--bs-btn-color: #f8f9fa;--bs-btn-border-color: #f8f9fa;--bs-btn-hover-color: #000;--bs-btn-hover-bg: #f8f9fa;--bs-btn-hover-border-color: #f8f9fa;--bs-btn-focus-shadow-rgb: 248, 249, 250;--bs-btn-active-color: #000;--bs-btn-active-bg: #f8f9fa;--bs-btn-active-border-color: 
#f8f9fa;--bs-btn-active-shadow: inset 0 3px 5px rgba(0, 0, 0, 0.125);--bs-btn-disabled-color: #f8f9fa;--bs-btn-disabled-bg: transparent;--bs-btn-disabled-border-color: #f8f9fa;--bs-btn-bg: transparent;--bs-gradient: none}.btn-outline-dark{--bs-btn-color: #343a40;--bs-btn-border-color: #343a40;--bs-btn-hover-color: #fff;--bs-btn-hover-bg: #343a40;--bs-btn-hover-border-color: #343a40;--bs-btn-focus-shadow-rgb: 52, 58, 64;--bs-btn-active-color: #fff;--bs-btn-active-bg: #343a40;--bs-btn-active-border-color: #343a40;--bs-btn-active-shadow: inset 0 3px 5px rgba(0, 0, 0, 0.125);--bs-btn-disabled-color: #343a40;--bs-btn-disabled-bg: transparent;--bs-btn-disabled-border-color: #343a40;--bs-btn-bg: transparent;--bs-gradient: none}.btn-link{--bs-btn-font-weight: 400;--bs-btn-color: #2761e3;--bs-btn-bg: transparent;--bs-btn-border-color: transparent;--bs-btn-hover-color: #1f4eb6;--bs-btn-hover-border-color: transparent;--bs-btn-active-color: #1f4eb6;--bs-btn-active-border-color: transparent;--bs-btn-disabled-color: #6c757d;--bs-btn-disabled-border-color: transparent;--bs-btn-box-shadow: 0 0 0 #000;--bs-btn-focus-shadow-rgb: 71, 121, 231;text-decoration:underline;-webkit-text-decoration:underline;-moz-text-decoration:underline;-ms-text-decoration:underline;-o-text-decoration:underline}.btn-link:focus-visible{color:var(--bs-btn-color)}.btn-link:hover{color:var(--bs-btn-hover-color)}.btn-lg,.btn-group-lg>.btn{--bs-btn-padding-y: 0.5rem;--bs-btn-padding-x: 1rem;--bs-btn-font-size:1.25rem;--bs-btn-border-radius: 0.5rem}.btn-sm,.btn-group-sm>.btn{--bs-btn-padding-y: 0.25rem;--bs-btn-padding-x: 0.5rem;--bs-btn-font-size:0.875rem;--bs-btn-border-radius: 0.2em}.fade{transition:opacity .15s linear}@media(prefers-reduced-motion: reduce){.fade{transition:none}}.fade:not(.show){opacity:0}.collapse:not(.show){display:none}.collapsing{height:0;overflow:hidden;transition:height .2s ease}@media(prefers-reduced-motion: reduce){.collapsing{transition:none}}.collapsing.collapse-horizontal{width:0;height:auto;transition:width .35s ease}@media(prefers-reduced-motion: reduce){.collapsing.collapse-horizontal{transition:none}}.dropup,.dropend,.dropdown,.dropstart,.dropup-center,.dropdown-center{position:relative}.dropdown-toggle{white-space:nowrap}.dropdown-toggle::after{display:inline-block;margin-left:.255em;vertical-align:.255em;content:"";border-top:.3em solid;border-right:.3em solid rgba(0,0,0,0);border-bottom:0;border-left:.3em solid rgba(0,0,0,0)}.dropdown-toggle:empty::after{margin-left:0}.dropdown-menu{--bs-dropdown-zindex: 1000;--bs-dropdown-min-width: 10rem;--bs-dropdown-padding-x: 0;--bs-dropdown-padding-y: 0.5rem;--bs-dropdown-spacer: 0.125rem;--bs-dropdown-font-size:1rem;--bs-dropdown-color: #343a40;--bs-dropdown-bg: #fff;--bs-dropdown-border-color: rgba(0, 0, 0, 0.175);--bs-dropdown-border-radius: 0.25rem;--bs-dropdown-border-width: 1px;--bs-dropdown-inner-border-radius: calc(0.25rem - 1px);--bs-dropdown-divider-bg: rgba(0, 0, 0, 0.175);--bs-dropdown-divider-margin-y: 0.5rem;--bs-dropdown-box-shadow: 0 0.5rem 1rem rgba(0, 0, 0, 0.15);--bs-dropdown-link-color: #343a40;--bs-dropdown-link-hover-color: #343a40;--bs-dropdown-link-hover-bg: #f8f9fa;--bs-dropdown-link-active-color: #fff;--bs-dropdown-link-active-bg: #2780e3;--bs-dropdown-link-disabled-color: rgba(52, 58, 64, 0.5);--bs-dropdown-item-padding-x: 1rem;--bs-dropdown-item-padding-y: 0.25rem;--bs-dropdown-header-color: #6c757d;--bs-dropdown-header-padding-x: 1rem;--bs-dropdown-header-padding-y: 
0.5rem;position:absolute;z-index:var(--bs-dropdown-zindex);display:none;min-width:var(--bs-dropdown-min-width);padding:var(--bs-dropdown-padding-y) var(--bs-dropdown-padding-x);margin:0;font-size:var(--bs-dropdown-font-size);color:var(--bs-dropdown-color);text-align:left;list-style:none;background-color:var(--bs-dropdown-bg);background-clip:padding-box;border:var(--bs-dropdown-border-width) solid var(--bs-dropdown-border-color)}.dropdown-menu[data-bs-popper]{top:100%;left:0;margin-top:var(--bs-dropdown-spacer)}.dropdown-menu-start{--bs-position: start}.dropdown-menu-start[data-bs-popper]{right:auto;left:0}.dropdown-menu-end{--bs-position: end}.dropdown-menu-end[data-bs-popper]{right:0;left:auto}@media(min-width: 576px){.dropdown-menu-sm-start{--bs-position: start}.dropdown-menu-sm-start[data-bs-popper]{right:auto;left:0}.dropdown-menu-sm-end{--bs-position: end}.dropdown-menu-sm-end[data-bs-popper]{right:0;left:auto}}@media(min-width: 768px){.dropdown-menu-md-start{--bs-position: start}.dropdown-menu-md-start[data-bs-popper]{right:auto;left:0}.dropdown-menu-md-end{--bs-position: end}.dropdown-menu-md-end[data-bs-popper]{right:0;left:auto}}@media(min-width: 992px){.dropdown-menu-lg-start{--bs-position: start}.dropdown-menu-lg-start[data-bs-popper]{right:auto;left:0}.dropdown-menu-lg-end{--bs-position: end}.dropdown-menu-lg-end[data-bs-popper]{right:0;left:auto}}@media(min-width: 1200px){.dropdown-menu-xl-start{--bs-position: start}.dropdown-menu-xl-start[data-bs-popper]{right:auto;left:0}.dropdown-menu-xl-end{--bs-position: end}.dropdown-menu-xl-end[data-bs-popper]{right:0;left:auto}}@media(min-width: 1400px){.dropdown-menu-xxl-start{--bs-position: start}.dropdown-menu-xxl-start[data-bs-popper]{right:auto;left:0}.dropdown-menu-xxl-end{--bs-position: end}.dropdown-menu-xxl-end[data-bs-popper]{right:0;left:auto}}.dropup .dropdown-menu[data-bs-popper]{top:auto;bottom:100%;margin-top:0;margin-bottom:var(--bs-dropdown-spacer)}.dropup .dropdown-toggle::after{display:inline-block;margin-left:.255em;vertical-align:.255em;content:"";border-top:0;border-right:.3em solid rgba(0,0,0,0);border-bottom:.3em solid;border-left:.3em solid rgba(0,0,0,0)}.dropup .dropdown-toggle:empty::after{margin-left:0}.dropend .dropdown-menu[data-bs-popper]{top:0;right:auto;left:100%;margin-top:0;margin-left:var(--bs-dropdown-spacer)}.dropend .dropdown-toggle::after{display:inline-block;margin-left:.255em;vertical-align:.255em;content:"";border-top:.3em solid rgba(0,0,0,0);border-right:0;border-bottom:.3em solid rgba(0,0,0,0);border-left:.3em solid}.dropend .dropdown-toggle:empty::after{margin-left:0}.dropend .dropdown-toggle::after{vertical-align:0}.dropstart .dropdown-menu[data-bs-popper]{top:0;right:100%;left:auto;margin-top:0;margin-right:var(--bs-dropdown-spacer)}.dropstart .dropdown-toggle::after{display:inline-block;margin-left:.255em;vertical-align:.255em;content:""}.dropstart .dropdown-toggle::after{display:none}.dropstart .dropdown-toggle::before{display:inline-block;margin-right:.255em;vertical-align:.255em;content:"";border-top:.3em solid rgba(0,0,0,0);border-right:.3em solid;border-bottom:.3em solid rgba(0,0,0,0)}.dropstart .dropdown-toggle:empty::after{margin-left:0}.dropstart .dropdown-toggle::before{vertical-align:0}.dropdown-divider{height:0;margin:var(--bs-dropdown-divider-margin-y) 0;overflow:hidden;border-top:1px solid var(--bs-dropdown-divider-bg);opacity:1}.dropdown-item{display:block;width:100%;padding:var(--bs-dropdown-item-padding-y) 
var(--bs-offcanvas-border-color);transform:translateY(-100%)}.offcanvas-lg.offcanvas-bottom{right:0;left:0;height:var(--bs-offcanvas-height);max-height:100%;border-top:var(--bs-offcanvas-border-width) solid var(--bs-offcanvas-border-color);transform:translateY(100%)}.offcanvas-lg.showing,.offcanvas-lg.show:not(.hiding){transform:none}.offcanvas-lg.showing,.offcanvas-lg.hiding,.offcanvas-lg.show{visibility:visible}}@media(min-width: 992px){.offcanvas-lg{--bs-offcanvas-height: auto;--bs-offcanvas-border-width: 0;background-color:rgba(0,0,0,0) !important}.offcanvas-lg .offcanvas-header{display:none}.offcanvas-lg .offcanvas-body{display:flex;display:-webkit-flex;flex-grow:0;-webkit-flex-grow:0;padding:0;overflow-y:visible;background-color:rgba(0,0,0,0) !important}}@media(max-width: 1199.98px){.offcanvas-xl{position:fixed;bottom:0;z-index:var(--bs-offcanvas-zindex);display:flex;display:-webkit-flex;flex-direction:column;-webkit-flex-direction:column;max-width:100%;color:var(--bs-offcanvas-color);visibility:hidden;background-color:var(--bs-offcanvas-bg);background-clip:padding-box;outline:0;transition:var(--bs-offcanvas-transition)}}@media(max-width: 1199.98px)and (prefers-reduced-motion: reduce){.offcanvas-xl{transition:none}}@media(max-width: 1199.98px){.offcanvas-xl.offcanvas-start{top:0;left:0;width:var(--bs-offcanvas-width);border-right:var(--bs-offcanvas-border-width) solid var(--bs-offcanvas-border-color);transform:translateX(-100%)}.offcanvas-xl.offcanvas-end{top:0;right:0;width:var(--bs-offcanvas-width);border-left:var(--bs-offcanvas-border-width) solid var(--bs-offcanvas-border-color);transform:translateX(100%)}.offcanvas-xl.offcanvas-top{top:0;right:0;left:0;height:var(--bs-offcanvas-height);max-height:100%;border-bottom:var(--bs-offcanvas-border-width) solid var(--bs-offcanvas-border-color);transform:translateY(-100%)}.offcanvas-xl.offcanvas-bottom{right:0;left:0;height:var(--bs-offcanvas-height);max-height:100%;border-top:var(--bs-offcanvas-border-width) solid var(--bs-offcanvas-border-color);transform:translateY(100%)}.offcanvas-xl.showing,.offcanvas-xl.show:not(.hiding){transform:none}.offcanvas-xl.showing,.offcanvas-xl.hiding,.offcanvas-xl.show{visibility:visible}}@media(min-width: 1200px){.offcanvas-xl{--bs-offcanvas-height: auto;--bs-offcanvas-border-width: 0;background-color:rgba(0,0,0,0) !important}.offcanvas-xl .offcanvas-header{display:none}.offcanvas-xl .offcanvas-body{display:flex;display:-webkit-flex;flex-grow:0;-webkit-flex-grow:0;padding:0;overflow-y:visible;background-color:rgba(0,0,0,0) !important}}@media(max-width: 1399.98px){.offcanvas-xxl{position:fixed;bottom:0;z-index:var(--bs-offcanvas-zindex);display:flex;display:-webkit-flex;flex-direction:column;-webkit-flex-direction:column;max-width:100%;color:var(--bs-offcanvas-color);visibility:hidden;background-color:var(--bs-offcanvas-bg);background-clip:padding-box;outline:0;transition:var(--bs-offcanvas-transition)}}@media(max-width: 1399.98px)and (prefers-reduced-motion: reduce){.offcanvas-xxl{transition:none}}@media(max-width: 1399.98px){.offcanvas-xxl.offcanvas-start{top:0;left:0;width:var(--bs-offcanvas-width);border-right:var(--bs-offcanvas-border-width) solid var(--bs-offcanvas-border-color);transform:translateX(-100%)}.offcanvas-xxl.offcanvas-end{top:0;right:0;width:var(--bs-offcanvas-width);border-left:var(--bs-offcanvas-border-width) solid 
var(--bs-offcanvas-border-color);transform:translateX(100%)}.offcanvas-xxl.offcanvas-top{top:0;right:0;left:0;height:var(--bs-offcanvas-height);max-height:100%;border-bottom:var(--bs-offcanvas-border-width) solid var(--bs-offcanvas-border-color);transform:translateY(-100%)}.offcanvas-xxl.offcanvas-bottom{right:0;left:0;height:var(--bs-offcanvas-height);max-height:100%;border-top:var(--bs-offcanvas-border-width) solid var(--bs-offcanvas-border-color);transform:translateY(100%)}.offcanvas-xxl.showing,.offcanvas-xxl.show:not(.hiding){transform:none}.offcanvas-xxl.showing,.offcanvas-xxl.hiding,.offcanvas-xxl.show{visibility:visible}}@media(min-width: 1400px){.offcanvas-xxl{--bs-offcanvas-height: auto;--bs-offcanvas-border-width: 0;background-color:rgba(0,0,0,0) !important}.offcanvas-xxl .offcanvas-header{display:none}.offcanvas-xxl .offcanvas-body{display:flex;display:-webkit-flex;flex-grow:0;-webkit-flex-grow:0;padding:0;overflow-y:visible;background-color:rgba(0,0,0,0) !important}}.offcanvas{position:fixed;bottom:0;z-index:var(--bs-offcanvas-zindex);display:flex;display:-webkit-flex;flex-direction:column;-webkit-flex-direction:column;max-width:100%;color:var(--bs-offcanvas-color);visibility:hidden;background-color:var(--bs-offcanvas-bg);background-clip:padding-box;outline:0;transition:var(--bs-offcanvas-transition)}@media(prefers-reduced-motion: reduce){.offcanvas{transition:none}}.offcanvas.offcanvas-start{top:0;left:0;width:var(--bs-offcanvas-width);border-right:var(--bs-offcanvas-border-width) solid var(--bs-offcanvas-border-color);transform:translateX(-100%)}.offcanvas.offcanvas-end{top:0;right:0;width:var(--bs-offcanvas-width);border-left:var(--bs-offcanvas-border-width) solid var(--bs-offcanvas-border-color);transform:translateX(100%)}.offcanvas.offcanvas-top{top:0;right:0;left:0;height:var(--bs-offcanvas-height);max-height:100%;border-bottom:var(--bs-offcanvas-border-width) solid var(--bs-offcanvas-border-color);transform:translateY(-100%)}.offcanvas.offcanvas-bottom{right:0;left:0;height:var(--bs-offcanvas-height);max-height:100%;border-top:var(--bs-offcanvas-border-width) solid var(--bs-offcanvas-border-color);transform:translateY(100%)}.offcanvas.showing,.offcanvas.show:not(.hiding){transform:none}.offcanvas.showing,.offcanvas.hiding,.offcanvas.show{visibility:visible}.offcanvas-backdrop{position:fixed;top:0;left:0;z-index:1040;width:100vw;height:100vh;background-color:#000}.offcanvas-backdrop.fade{opacity:0}.offcanvas-backdrop.show{opacity:.5}.offcanvas-header{display:flex;display:-webkit-flex;align-items:center;-webkit-align-items:center;justify-content:space-between;-webkit-justify-content:space-between;padding:var(--bs-offcanvas-padding-y) var(--bs-offcanvas-padding-x)}.offcanvas-header .btn-close{padding:calc(var(--bs-offcanvas-padding-y)*.5) calc(var(--bs-offcanvas-padding-x)*.5);margin-top:calc(-0.5*var(--bs-offcanvas-padding-y));margin-right:calc(-0.5*var(--bs-offcanvas-padding-x));margin-bottom:calc(-0.5*var(--bs-offcanvas-padding-y))}.offcanvas-title{margin-bottom:0;line-height:var(--bs-offcanvas-title-line-height)}.offcanvas-body{flex-grow:1;-webkit-flex-grow:1;padding:var(--bs-offcanvas-padding-y) var(--bs-offcanvas-padding-x);overflow-y:auto}.placeholder{display:inline-block;min-height:1em;vertical-align:middle;cursor:wait;background-color:currentcolor;opacity:.5}.placeholder.btn::before{display:inline-block;content:""}.placeholder-xs{min-height:.6em}.placeholder-sm{min-height:.8em}.placeholder-lg{min-height:1.2em}.placeholder-glow 
.placeholder{animation:placeholder-glow 2s ease-in-out infinite}@keyframes placeholder-glow{50%{opacity:.2}}.placeholder-wave{mask-image:linear-gradient(130deg, #000 55%, rgba(0, 0, 0, 0.8) 75%, #000 95%);-webkit-mask-image:linear-gradient(130deg, #000 55%, rgba(0, 0, 0, 0.8) 75%, #000 95%);mask-size:200% 100%;-webkit-mask-size:200% 100%;animation:placeholder-wave 2s linear infinite}@keyframes placeholder-wave{100%{mask-position:-200% 0%;-webkit-mask-position:-200% 0%}}.clearfix::after{display:block;clear:both;content:""}.text-bg-default{color:#fff !important;background-color:RGBA(var(--bs-default-rgb), var(--bs-bg-opacity, 1)) !important}.text-bg-primary{color:#fff !important;background-color:RGBA(var(--bs-primary-rgb), var(--bs-bg-opacity, 1)) !important}.text-bg-secondary{color:#fff !important;background-color:RGBA(var(--bs-secondary-rgb), var(--bs-bg-opacity, 1)) !important}.text-bg-success{color:#fff !important;background-color:RGBA(var(--bs-success-rgb), var(--bs-bg-opacity, 1)) !important}.text-bg-info{color:#fff !important;background-color:RGBA(var(--bs-info-rgb), var(--bs-bg-opacity, 1)) !important}.text-bg-warning{color:#fff !important;background-color:RGBA(var(--bs-warning-rgb), var(--bs-bg-opacity, 1)) !important}.text-bg-danger{color:#fff !important;background-color:RGBA(var(--bs-danger-rgb), var(--bs-bg-opacity, 1)) !important}.text-bg-light{color:#000 !important;background-color:RGBA(var(--bs-light-rgb), var(--bs-bg-opacity, 1)) !important}.text-bg-dark{color:#fff !important;background-color:RGBA(var(--bs-dark-rgb), var(--bs-bg-opacity, 1)) !important}.link-default{color:RGBA(var(--bs-default-rgb), var(--bs-link-opacity, 1)) !important;text-decoration-color:RGBA(var(--bs-default-rgb), var(--bs-link-underline-opacity, 1)) !important}.link-default:hover,.link-default:focus{color:RGBA(42, 46, 51, var(--bs-link-opacity, 1)) !important;text-decoration-color:RGBA(42, 46, 51, var(--bs-link-underline-opacity, 1)) !important}.link-primary{color:RGBA(var(--bs-primary-rgb), var(--bs-link-opacity, 1)) !important;text-decoration-color:RGBA(var(--bs-primary-rgb), var(--bs-link-underline-opacity, 1)) !important}.link-primary:hover,.link-primary:focus{color:RGBA(31, 102, 182, var(--bs-link-opacity, 1)) !important;text-decoration-color:RGBA(31, 102, 182, var(--bs-link-underline-opacity, 1)) !important}.link-secondary{color:RGBA(var(--bs-secondary-rgb), var(--bs-link-opacity, 1)) !important;text-decoration-color:RGBA(var(--bs-secondary-rgb), var(--bs-link-underline-opacity, 1)) !important}.link-secondary:hover,.link-secondary:focus{color:RGBA(42, 46, 51, var(--bs-link-opacity, 1)) !important;text-decoration-color:RGBA(42, 46, 51, var(--bs-link-underline-opacity, 1)) !important}.link-success{color:RGBA(var(--bs-success-rgb), var(--bs-link-opacity, 1)) !important;text-decoration-color:RGBA(var(--bs-success-rgb), var(--bs-link-underline-opacity, 1)) !important}.link-success:hover,.link-success:focus{color:RGBA(50, 146, 19, var(--bs-link-opacity, 1)) !important;text-decoration-color:RGBA(50, 146, 19, var(--bs-link-underline-opacity, 1)) !important}.link-info{color:RGBA(var(--bs-info-rgb), var(--bs-link-opacity, 1)) !important;text-decoration-color:RGBA(var(--bs-info-rgb), var(--bs-link-underline-opacity, 1)) !important}.link-info:hover,.link-info:focus{color:RGBA(122, 67, 150, var(--bs-link-opacity, 1)) !important;text-decoration-color:RGBA(122, 67, 150, var(--bs-link-underline-opacity, 1)) !important}.link-warning{color:RGBA(var(--bs-warning-rgb), var(--bs-link-opacity, 1)) 
!important;text-decoration-color:RGBA(var(--bs-warning-rgb), var(--bs-link-underline-opacity, 1)) !important}.link-warning:hover,.link-warning:focus{color:RGBA(204, 94, 19, var(--bs-link-opacity, 1)) !important;text-decoration-color:RGBA(204, 94, 19, var(--bs-link-underline-opacity, 1)) !important}.link-danger{color:RGBA(var(--bs-danger-rgb), var(--bs-link-opacity, 1)) !important;text-decoration-color:RGBA(var(--bs-danger-rgb), var(--bs-link-underline-opacity, 1)) !important}.link-danger:hover,.link-danger:focus{color:RGBA(204, 0, 46, var(--bs-link-opacity, 1)) !important;text-decoration-color:RGBA(204, 0, 46, var(--bs-link-underline-opacity, 1)) !important}.link-light{color:RGBA(var(--bs-light-rgb), var(--bs-link-opacity, 1)) !important;text-decoration-color:RGBA(var(--bs-light-rgb), var(--bs-link-underline-opacity, 1)) !important}.link-light:hover,.link-light:focus{color:RGBA(249, 250, 251, var(--bs-link-opacity, 1)) !important;text-decoration-color:RGBA(249, 250, 251, var(--bs-link-underline-opacity, 1)) !important}.link-dark{color:RGBA(var(--bs-dark-rgb), var(--bs-link-opacity, 1)) !important;text-decoration-color:RGBA(var(--bs-dark-rgb), var(--bs-link-underline-opacity, 1)) !important}.link-dark:hover,.link-dark:focus{color:RGBA(42, 46, 51, var(--bs-link-opacity, 1)) !important;text-decoration-color:RGBA(42, 46, 51, var(--bs-link-underline-opacity, 1)) !important}.link-body-emphasis{color:RGBA(var(--bs-emphasis-color-rgb), var(--bs-link-opacity, 1)) !important;text-decoration-color:RGBA(var(--bs-emphasis-color-rgb), var(--bs-link-underline-opacity, 1)) !important}.link-body-emphasis:hover,.link-body-emphasis:focus{color:RGBA(var(--bs-emphasis-color-rgb), var(--bs-link-opacity, 0.75)) !important;text-decoration-color:RGBA(var(--bs-emphasis-color-rgb), var(--bs-link-underline-opacity, 0.75)) !important}.focus-ring:focus{outline:0;box-shadow:var(--bs-focus-ring-x, 0) var(--bs-focus-ring-y, 0) var(--bs-focus-ring-blur, 0) var(--bs-focus-ring-width) var(--bs-focus-ring-color)}.icon-link{display:inline-flex;gap:.375rem;align-items:center;-webkit-align-items:center;text-decoration-color:rgba(var(--bs-link-color-rgb), var(--bs-link-opacity, 0.5));text-underline-offset:.25em;backface-visibility:hidden;-webkit-backface-visibility:hidden;-moz-backface-visibility:hidden;-ms-backface-visibility:hidden;-o-backface-visibility:hidden}.icon-link>.bi{flex-shrink:0;-webkit-flex-shrink:0;width:1em;height:1em;fill:currentcolor;transition:.2s ease-in-out transform}@media(prefers-reduced-motion: reduce){.icon-link>.bi{transition:none}}.icon-link-hover:hover>.bi,.icon-link-hover:focus-visible>.bi{transform:var(--bs-icon-link-transform, translate3d(0.25em, 0, 0))}.ratio{position:relative;width:100%}.ratio::before{display:block;padding-top:var(--bs-aspect-ratio);content:""}.ratio>*{position:absolute;top:0;left:0;width:100%;height:100%}.ratio-1x1{--bs-aspect-ratio: 100%}.ratio-4x3{--bs-aspect-ratio: 75%}.ratio-16x9{--bs-aspect-ratio: 56.25%}.ratio-21x9{--bs-aspect-ratio: 42.8571428571%}.fixed-top{position:fixed;top:0;right:0;left:0;z-index:1030}.fixed-bottom{position:fixed;right:0;bottom:0;left:0;z-index:1030}.sticky-top{position:sticky;top:0;z-index:1020}.sticky-bottom{position:sticky;bottom:0;z-index:1020}@media(min-width: 576px){.sticky-sm-top{position:sticky;top:0;z-index:1020}.sticky-sm-bottom{position:sticky;bottom:0;z-index:1020}}@media(min-width: 768px){.sticky-md-top{position:sticky;top:0;z-index:1020}.sticky-md-bottom{position:sticky;bottom:0;z-index:1020}}@media(min-width: 
992px){.sticky-lg-top{position:sticky;top:0;z-index:1020}.sticky-lg-bottom{position:sticky;bottom:0;z-index:1020}}@media(min-width: 1200px){.sticky-xl-top{position:sticky;top:0;z-index:1020}.sticky-xl-bottom{position:sticky;bottom:0;z-index:1020}}@media(min-width: 1400px){.sticky-xxl-top{position:sticky;top:0;z-index:1020}.sticky-xxl-bottom{position:sticky;bottom:0;z-index:1020}}.hstack{display:flex;display:-webkit-flex;flex-direction:row;-webkit-flex-direction:row;align-items:center;-webkit-align-items:center;align-self:stretch;-webkit-align-self:stretch}.vstack{display:flex;display:-webkit-flex;flex:1 1 auto;-webkit-flex:1 1 auto;flex-direction:column;-webkit-flex-direction:column;align-self:stretch;-webkit-align-self:stretch}.visually-hidden,.visually-hidden-focusable:not(:focus):not(:focus-within){width:1px !important;height:1px !important;padding:0 !important;margin:-1px !important;overflow:hidden !important;clip:rect(0, 0, 0, 0) !important;white-space:nowrap !important;border:0 !important}.visually-hidden:not(caption),.visually-hidden-focusable:not(:focus):not(:focus-within):not(caption){position:absolute !important}.stretched-link::after{position:absolute;top:0;right:0;bottom:0;left:0;z-index:1;content:""}.text-truncate{overflow:hidden;text-overflow:ellipsis;white-space:nowrap}.vr{display:inline-block;align-self:stretch;-webkit-align-self:stretch;width:1px;min-height:1em;background-color:currentcolor;opacity:.25}.align-baseline{vertical-align:baseline !important}.align-top{vertical-align:top !important}.align-middle{vertical-align:middle !important}.align-bottom{vertical-align:bottom !important}.align-text-bottom{vertical-align:text-bottom !important}.align-text-top{vertical-align:text-top !important}.float-start{float:left !important}.float-end{float:right !important}.float-none{float:none !important}.object-fit-contain{object-fit:contain !important}.object-fit-cover{object-fit:cover !important}.object-fit-fill{object-fit:fill !important}.object-fit-scale{object-fit:scale-down !important}.object-fit-none{object-fit:none !important}.opacity-0{opacity:0 !important}.opacity-25{opacity:.25 !important}.opacity-50{opacity:.5 !important}.opacity-75{opacity:.75 !important}.opacity-100{opacity:1 !important}.overflow-auto{overflow:auto !important}.overflow-hidden{overflow:hidden !important}.overflow-visible{overflow:visible !important}.overflow-scroll{overflow:scroll !important}.overflow-x-auto{overflow-x:auto !important}.overflow-x-hidden{overflow-x:hidden !important}.overflow-x-visible{overflow-x:visible !important}.overflow-x-scroll{overflow-x:scroll !important}.overflow-y-auto{overflow-y:auto !important}.overflow-y-hidden{overflow-y:hidden !important}.overflow-y-visible{overflow-y:visible !important}.overflow-y-scroll{overflow-y:scroll !important}.d-inline{display:inline !important}.d-inline-block{display:inline-block !important}.d-block{display:block !important}.d-grid{display:grid !important}.d-inline-grid{display:inline-grid !important}.d-table{display:table !important}.d-table-row{display:table-row !important}.d-table-cell{display:table-cell !important}.d-flex{display:flex !important}.d-inline-flex{display:inline-flex !important}.d-none{display:none !important}.shadow{box-shadow:0 .5rem 1rem rgba(0,0,0,.15) !important}.shadow-sm{box-shadow:0 .125rem .25rem rgba(0,0,0,.075) !important}.shadow-lg{box-shadow:0 1rem 3rem rgba(0,0,0,.175) !important}.shadow-none{box-shadow:none !important}.focus-ring-default{--bs-focus-ring-color: rgba(var(--bs-default-rgb), 
var(--bs-focus-ring-opacity))}.focus-ring-primary{--bs-focus-ring-color: rgba(var(--bs-primary-rgb), var(--bs-focus-ring-opacity))}.focus-ring-secondary{--bs-focus-ring-color: rgba(var(--bs-secondary-rgb), var(--bs-focus-ring-opacity))}.focus-ring-success{--bs-focus-ring-color: rgba(var(--bs-success-rgb), var(--bs-focus-ring-opacity))}.focus-ring-info{--bs-focus-ring-color: rgba(var(--bs-info-rgb), var(--bs-focus-ring-opacity))}.focus-ring-warning{--bs-focus-ring-color: rgba(var(--bs-warning-rgb), var(--bs-focus-ring-opacity))}.focus-ring-danger{--bs-focus-ring-color: rgba(var(--bs-danger-rgb), var(--bs-focus-ring-opacity))}.focus-ring-light{--bs-focus-ring-color: rgba(var(--bs-light-rgb), var(--bs-focus-ring-opacity))}.focus-ring-dark{--bs-focus-ring-color: rgba(var(--bs-dark-rgb), var(--bs-focus-ring-opacity))}.position-static{position:static !important}.position-relative{position:relative !important}.position-absolute{position:absolute !important}.position-fixed{position:fixed !important}.position-sticky{position:sticky !important}.top-0{top:0 !important}.top-50{top:50% !important}.top-100{top:100% !important}.bottom-0{bottom:0 !important}.bottom-50{bottom:50% !important}.bottom-100{bottom:100% !important}.start-0{left:0 !important}.start-50{left:50% !important}.start-100{left:100% !important}.end-0{right:0 !important}.end-50{right:50% !important}.end-100{right:100% !important}.translate-middle{transform:translate(-50%, -50%) !important}.translate-middle-x{transform:translateX(-50%) !important}.translate-middle-y{transform:translateY(-50%) !important}.border{border:var(--bs-border-width) var(--bs-border-style) var(--bs-border-color) !important}.border-0{border:0 !important}.border-top{border-top:var(--bs-border-width) var(--bs-border-style) var(--bs-border-color) !important}.border-top-0{border-top:0 !important}.border-end{border-right:var(--bs-border-width) var(--bs-border-style) var(--bs-border-color) !important}.border-end-0{border-right:0 !important}.border-bottom{border-bottom:var(--bs-border-width) var(--bs-border-style) var(--bs-border-color) !important}.border-bottom-0{border-bottom:0 !important}.border-start{border-left:var(--bs-border-width) var(--bs-border-style) var(--bs-border-color) !important}.border-start-0{border-left:0 !important}.border-default{--bs-border-opacity: 1;border-color:rgba(var(--bs-default-rgb), var(--bs-border-opacity)) !important}.border-primary{--bs-border-opacity: 1;border-color:rgba(var(--bs-primary-rgb), var(--bs-border-opacity)) !important}.border-secondary{--bs-border-opacity: 1;border-color:rgba(var(--bs-secondary-rgb), var(--bs-border-opacity)) !important}.border-success{--bs-border-opacity: 1;border-color:rgba(var(--bs-success-rgb), var(--bs-border-opacity)) !important}.border-info{--bs-border-opacity: 1;border-color:rgba(var(--bs-info-rgb), var(--bs-border-opacity)) !important}.border-warning{--bs-border-opacity: 1;border-color:rgba(var(--bs-warning-rgb), var(--bs-border-opacity)) !important}.border-danger{--bs-border-opacity: 1;border-color:rgba(var(--bs-danger-rgb), var(--bs-border-opacity)) !important}.border-light{--bs-border-opacity: 1;border-color:rgba(var(--bs-light-rgb), var(--bs-border-opacity)) !important}.border-dark{--bs-border-opacity: 1;border-color:rgba(var(--bs-dark-rgb), var(--bs-border-opacity)) !important}.border-black{--bs-border-opacity: 1;border-color:rgba(var(--bs-black-rgb), var(--bs-border-opacity)) !important}.border-white{--bs-border-opacity: 1;border-color:rgba(var(--bs-white-rgb), var(--bs-border-opacity)) 
!important}.border-primary-subtle{border-color:var(--bs-primary-border-subtle) !important}.border-secondary-subtle{border-color:var(--bs-secondary-border-subtle) !important}.border-success-subtle{border-color:var(--bs-success-border-subtle) !important}.border-info-subtle{border-color:var(--bs-info-border-subtle) !important}.border-warning-subtle{border-color:var(--bs-warning-border-subtle) !important}.border-danger-subtle{border-color:var(--bs-danger-border-subtle) !important}.border-light-subtle{border-color:var(--bs-light-border-subtle) !important}.border-dark-subtle{border-color:var(--bs-dark-border-subtle) !important}.border-1{border-width:1px !important}.border-2{border-width:2px !important}.border-3{border-width:3px !important}.border-4{border-width:4px !important}.border-5{border-width:5px !important}.border-opacity-10{--bs-border-opacity: 0.1}.border-opacity-25{--bs-border-opacity: 0.25}.border-opacity-50{--bs-border-opacity: 0.5}.border-opacity-75{--bs-border-opacity: 0.75}.border-opacity-100{--bs-border-opacity: 1}.w-25{width:25% !important}.w-50{width:50% !important}.w-75{width:75% !important}.w-100{width:100% !important}.w-auto{width:auto !important}.mw-100{max-width:100% !important}.vw-100{width:100vw !important}.min-vw-100{min-width:100vw !important}.h-25{height:25% !important}.h-50{height:50% !important}.h-75{height:75% !important}.h-100{height:100% !important}.h-auto{height:auto !important}.mh-100{max-height:100% !important}.vh-100{height:100vh !important}.min-vh-100{min-height:100vh !important}.flex-fill{flex:1 1 auto !important}.flex-row{flex-direction:row !important}.flex-column{flex-direction:column !important}.flex-row-reverse{flex-direction:row-reverse !important}.flex-column-reverse{flex-direction:column-reverse !important}.flex-grow-0{flex-grow:0 !important}.flex-grow-1{flex-grow:1 !important}.flex-shrink-0{flex-shrink:0 !important}.flex-shrink-1{flex-shrink:1 !important}.flex-wrap{flex-wrap:wrap !important}.flex-nowrap{flex-wrap:nowrap !important}.flex-wrap-reverse{flex-wrap:wrap-reverse !important}.justify-content-start{justify-content:flex-start !important}.justify-content-end{justify-content:flex-end !important}.justify-content-center{justify-content:center !important}.justify-content-between{justify-content:space-between !important}.justify-content-around{justify-content:space-around !important}.justify-content-evenly{justify-content:space-evenly !important}.align-items-start{align-items:flex-start !important}.align-items-end{align-items:flex-end !important}.align-items-center{align-items:center !important}.align-items-baseline{align-items:baseline !important}.align-items-stretch{align-items:stretch !important}.align-content-start{align-content:flex-start !important}.align-content-end{align-content:flex-end !important}.align-content-center{align-content:center !important}.align-content-between{align-content:space-between !important}.align-content-around{align-content:space-around !important}.align-content-stretch{align-content:stretch !important}.align-self-auto{align-self:auto !important}.align-self-start{align-self:flex-start !important}.align-self-end{align-self:flex-end !important}.align-self-center{align-self:center !important}.align-self-baseline{align-self:baseline !important}.align-self-stretch{align-self:stretch !important}.order-first{order:-1 !important}.order-0{order:0 !important}.order-1{order:1 !important}.order-2{order:2 !important}.order-3{order:3 !important}.order-4{order:4 !important}.order-5{order:5 !important}.order-last{order:6 
!important}.m-0{margin:0 !important}.m-1{margin:.25rem !important}.m-2{margin:.5rem !important}.m-3{margin:1rem !important}.m-4{margin:1.5rem !important}.m-5{margin:3rem !important}.m-auto{margin:auto !important}.mx-0{margin-right:0 !important;margin-left:0 !important}.mx-1{margin-right:.25rem !important;margin-left:.25rem !important}.mx-2{margin-right:.5rem !important;margin-left:.5rem !important}.mx-3{margin-right:1rem !important;margin-left:1rem !important}.mx-4{margin-right:1.5rem !important;margin-left:1.5rem !important}.mx-5{margin-right:3rem !important;margin-left:3rem !important}.mx-auto{margin-right:auto !important;margin-left:auto !important}.my-0{margin-top:0 !important;margin-bottom:0 !important}.my-1{margin-top:.25rem !important;margin-bottom:.25rem !important}.my-2{margin-top:.5rem !important;margin-bottom:.5rem !important}.my-3{margin-top:1rem !important;margin-bottom:1rem !important}.my-4{margin-top:1.5rem !important;margin-bottom:1.5rem !important}.my-5{margin-top:3rem !important;margin-bottom:3rem !important}.my-auto{margin-top:auto !important;margin-bottom:auto !important}.mt-0{margin-top:0 !important}.mt-1{margin-top:.25rem !important}.mt-2{margin-top:.5rem !important}.mt-3{margin-top:1rem !important}.mt-4{margin-top:1.5rem !important}.mt-5{margin-top:3rem !important}.mt-auto{margin-top:auto !important}.me-0{margin-right:0 !important}.me-1{margin-right:.25rem !important}.me-2{margin-right:.5rem !important}.me-3{margin-right:1rem !important}.me-4{margin-right:1.5rem !important}.me-5{margin-right:3rem !important}.me-auto{margin-right:auto !important}.mb-0{margin-bottom:0 !important}.mb-1{margin-bottom:.25rem !important}.mb-2{margin-bottom:.5rem !important}.mb-3{margin-bottom:1rem !important}.mb-4{margin-bottom:1.5rem !important}.mb-5{margin-bottom:3rem !important}.mb-auto{margin-bottom:auto !important}.ms-0{margin-left:0 !important}.ms-1{margin-left:.25rem !important}.ms-2{margin-left:.5rem !important}.ms-3{margin-left:1rem !important}.ms-4{margin-left:1.5rem !important}.ms-5{margin-left:3rem !important}.ms-auto{margin-left:auto !important}.p-0{padding:0 !important}.p-1{padding:.25rem !important}.p-2{padding:.5rem !important}.p-3{padding:1rem !important}.p-4{padding:1.5rem !important}.p-5{padding:3rem !important}.px-0{padding-right:0 !important;padding-left:0 !important}.px-1{padding-right:.25rem !important;padding-left:.25rem !important}.px-2{padding-right:.5rem !important;padding-left:.5rem !important}.px-3{padding-right:1rem !important;padding-left:1rem !important}.px-4{padding-right:1.5rem !important;padding-left:1.5rem !important}.px-5{padding-right:3rem !important;padding-left:3rem !important}.py-0{padding-top:0 !important;padding-bottom:0 !important}.py-1{padding-top:.25rem !important;padding-bottom:.25rem !important}.py-2{padding-top:.5rem !important;padding-bottom:.5rem !important}.py-3{padding-top:1rem !important;padding-bottom:1rem !important}.py-4{padding-top:1.5rem !important;padding-bottom:1.5rem !important}.py-5{padding-top:3rem !important;padding-bottom:3rem !important}.pt-0{padding-top:0 !important}.pt-1{padding-top:.25rem !important}.pt-2{padding-top:.5rem !important}.pt-3{padding-top:1rem !important}.pt-4{padding-top:1.5rem !important}.pt-5{padding-top:3rem !important}.pe-0{padding-right:0 !important}.pe-1{padding-right:.25rem !important}.pe-2{padding-right:.5rem !important}.pe-3{padding-right:1rem !important}.pe-4{padding-right:1.5rem !important}.pe-5{padding-right:3rem !important}.pb-0{padding-bottom:0 !important}.pb-1{padding-bottom:.25rem 
!important}.pb-2{padding-bottom:.5rem !important}.pb-3{padding-bottom:1rem !important}.pb-4{padding-bottom:1.5rem !important}.pb-5{padding-bottom:3rem !important}.ps-0{padding-left:0 !important}.ps-1{padding-left:.25rem !important}.ps-2{padding-left:.5rem !important}.ps-3{padding-left:1rem !important}.ps-4{padding-left:1.5rem !important}.ps-5{padding-left:3rem !important}.gap-0{gap:0 !important}.gap-1{gap:.25rem !important}.gap-2{gap:.5rem !important}.gap-3{gap:1rem !important}.gap-4{gap:1.5rem !important}.gap-5{gap:3rem !important}.row-gap-0{row-gap:0 !important}.row-gap-1{row-gap:.25rem !important}.row-gap-2{row-gap:.5rem !important}.row-gap-3{row-gap:1rem !important}.row-gap-4{row-gap:1.5rem !important}.row-gap-5{row-gap:3rem !important}.column-gap-0{column-gap:0 !important}.column-gap-1{column-gap:.25rem !important}.column-gap-2{column-gap:.5rem !important}.column-gap-3{column-gap:1rem !important}.column-gap-4{column-gap:1.5rem !important}.column-gap-5{column-gap:3rem !important}.font-monospace{font-family:var(--bs-font-monospace) !important}.fs-1{font-size:calc(1.325rem + 0.9vw) !important}.fs-2{font-size:calc(1.29rem + 0.48vw) !important}.fs-3{font-size:calc(1.27rem + 0.24vw) !important}.fs-4{font-size:1.25rem !important}.fs-5{font-size:1.1rem !important}.fs-6{font-size:1rem !important}.fst-italic{font-style:italic !important}.fst-normal{font-style:normal !important}.fw-lighter{font-weight:lighter !important}.fw-light{font-weight:300 !important}.fw-normal{font-weight:400 !important}.fw-medium{font-weight:500 !important}.fw-semibold{font-weight:600 !important}.fw-bold{font-weight:700 !important}.fw-bolder{font-weight:bolder !important}.lh-1{line-height:1 !important}.lh-sm{line-height:1.25 !important}.lh-base{line-height:1.5 !important}.lh-lg{line-height:2 !important}.text-start{text-align:left !important}.text-end{text-align:right !important}.text-center{text-align:center !important}.text-decoration-none{text-decoration:none !important}.text-decoration-underline{text-decoration:underline !important}.text-decoration-line-through{text-decoration:line-through !important}.text-lowercase{text-transform:lowercase !important}.text-uppercase{text-transform:uppercase !important}.text-capitalize{text-transform:capitalize !important}.text-wrap{white-space:normal !important}.text-nowrap{white-space:nowrap !important}.text-break{word-wrap:break-word !important;word-break:break-word !important}.text-default{--bs-text-opacity: 1;color:rgba(var(--bs-default-rgb), var(--bs-text-opacity)) !important}.text-primary{--bs-text-opacity: 1;color:rgba(var(--bs-primary-rgb), var(--bs-text-opacity)) !important}.text-secondary{--bs-text-opacity: 1;color:rgba(var(--bs-secondary-rgb), var(--bs-text-opacity)) !important}.text-success{--bs-text-opacity: 1;color:rgba(var(--bs-success-rgb), var(--bs-text-opacity)) !important}.text-info{--bs-text-opacity: 1;color:rgba(var(--bs-info-rgb), var(--bs-text-opacity)) !important}.text-warning{--bs-text-opacity: 1;color:rgba(var(--bs-warning-rgb), var(--bs-text-opacity)) !important}.text-danger{--bs-text-opacity: 1;color:rgba(var(--bs-danger-rgb), var(--bs-text-opacity)) !important}.text-light{--bs-text-opacity: 1;color:rgba(var(--bs-light-rgb), var(--bs-text-opacity)) !important}.text-dark{--bs-text-opacity: 1;color:rgba(var(--bs-dark-rgb), var(--bs-text-opacity)) !important}.text-black{--bs-text-opacity: 1;color:rgba(var(--bs-black-rgb), var(--bs-text-opacity)) !important}.text-white{--bs-text-opacity: 1;color:rgba(var(--bs-white-rgb), var(--bs-text-opacity)) 
!important}.text-body{--bs-text-opacity: 1;color:rgba(var(--bs-body-color-rgb), var(--bs-text-opacity)) !important}.text-muted{--bs-text-opacity: 1;color:var(--bs-secondary-color) !important}.text-black-50{--bs-text-opacity: 1;color:rgba(0,0,0,.5) !important}.text-white-50{--bs-text-opacity: 1;color:rgba(255,255,255,.5) !important}.text-body-secondary{--bs-text-opacity: 1;color:var(--bs-secondary-color) !important}.text-body-tertiary{--bs-text-opacity: 1;color:var(--bs-tertiary-color) !important}.text-body-emphasis{--bs-text-opacity: 1;color:var(--bs-emphasis-color) !important}.text-reset{--bs-text-opacity: 1;color:inherit !important}.text-opacity-25{--bs-text-opacity: 0.25}.text-opacity-50{--bs-text-opacity: 0.5}.text-opacity-75{--bs-text-opacity: 0.75}.text-opacity-100{--bs-text-opacity: 1}.text-primary-emphasis{color:var(--bs-primary-text-emphasis) !important}.text-secondary-emphasis{color:var(--bs-secondary-text-emphasis) !important}.text-success-emphasis{color:var(--bs-success-text-emphasis) !important}.text-info-emphasis{color:var(--bs-info-text-emphasis) !important}.text-warning-emphasis{color:var(--bs-warning-text-emphasis) !important}.text-danger-emphasis{color:var(--bs-danger-text-emphasis) !important}.text-light-emphasis{color:var(--bs-light-text-emphasis) !important}.text-dark-emphasis{color:var(--bs-dark-text-emphasis) !important}.link-opacity-10{--bs-link-opacity: 0.1}.link-opacity-10-hover:hover{--bs-link-opacity: 0.1}.link-opacity-25{--bs-link-opacity: 0.25}.link-opacity-25-hover:hover{--bs-link-opacity: 0.25}.link-opacity-50{--bs-link-opacity: 0.5}.link-opacity-50-hover:hover{--bs-link-opacity: 0.5}.link-opacity-75{--bs-link-opacity: 0.75}.link-opacity-75-hover:hover{--bs-link-opacity: 0.75}.link-opacity-100{--bs-link-opacity: 1}.link-opacity-100-hover:hover{--bs-link-opacity: 1}.link-offset-1{text-underline-offset:.125em !important}.link-offset-1-hover:hover{text-underline-offset:.125em !important}.link-offset-2{text-underline-offset:.25em !important}.link-offset-2-hover:hover{text-underline-offset:.25em !important}.link-offset-3{text-underline-offset:.375em !important}.link-offset-3-hover:hover{text-underline-offset:.375em !important}.link-underline-default{--bs-link-underline-opacity: 1;text-decoration-color:rgba(var(--bs-default-rgb), var(--bs-link-underline-opacity)) !important}.link-underline-primary{--bs-link-underline-opacity: 1;text-decoration-color:rgba(var(--bs-primary-rgb), var(--bs-link-underline-opacity)) !important}.link-underline-secondary{--bs-link-underline-opacity: 1;text-decoration-color:rgba(var(--bs-secondary-rgb), var(--bs-link-underline-opacity)) !important}.link-underline-success{--bs-link-underline-opacity: 1;text-decoration-color:rgba(var(--bs-success-rgb), var(--bs-link-underline-opacity)) !important}.link-underline-info{--bs-link-underline-opacity: 1;text-decoration-color:rgba(var(--bs-info-rgb), var(--bs-link-underline-opacity)) !important}.link-underline-warning{--bs-link-underline-opacity: 1;text-decoration-color:rgba(var(--bs-warning-rgb), var(--bs-link-underline-opacity)) !important}.link-underline-danger{--bs-link-underline-opacity: 1;text-decoration-color:rgba(var(--bs-danger-rgb), var(--bs-link-underline-opacity)) !important}.link-underline-light{--bs-link-underline-opacity: 1;text-decoration-color:rgba(var(--bs-light-rgb), var(--bs-link-underline-opacity)) !important}.link-underline-dark{--bs-link-underline-opacity: 1;text-decoration-color:rgba(var(--bs-dark-rgb), var(--bs-link-underline-opacity)) 
!important}.link-underline{--bs-link-underline-opacity: 1;text-decoration-color:rgba(var(--bs-link-color-rgb), var(--bs-link-underline-opacity, 1)) !important}.link-underline-opacity-0{--bs-link-underline-opacity: 0}.link-underline-opacity-0-hover:hover{--bs-link-underline-opacity: 0}.link-underline-opacity-10{--bs-link-underline-opacity: 0.1}.link-underline-opacity-10-hover:hover{--bs-link-underline-opacity: 0.1}.link-underline-opacity-25{--bs-link-underline-opacity: 0.25}.link-underline-opacity-25-hover:hover{--bs-link-underline-opacity: 0.25}.link-underline-opacity-50{--bs-link-underline-opacity: 0.5}.link-underline-opacity-50-hover:hover{--bs-link-underline-opacity: 0.5}.link-underline-opacity-75{--bs-link-underline-opacity: 0.75}.link-underline-opacity-75-hover:hover{--bs-link-underline-opacity: 0.75}.link-underline-opacity-100{--bs-link-underline-opacity: 1}.link-underline-opacity-100-hover:hover{--bs-link-underline-opacity: 1}.bg-default{--bs-bg-opacity: 1;background-color:rgba(var(--bs-default-rgb), var(--bs-bg-opacity)) !important}.bg-primary{--bs-bg-opacity: 1;background-color:rgba(var(--bs-primary-rgb), var(--bs-bg-opacity)) !important}.bg-secondary{--bs-bg-opacity: 1;background-color:rgba(var(--bs-secondary-rgb), var(--bs-bg-opacity)) !important}.bg-success{--bs-bg-opacity: 1;background-color:rgba(var(--bs-success-rgb), var(--bs-bg-opacity)) !important}.bg-info{--bs-bg-opacity: 1;background-color:rgba(var(--bs-info-rgb), var(--bs-bg-opacity)) !important}.bg-warning{--bs-bg-opacity: 1;background-color:rgba(var(--bs-warning-rgb), var(--bs-bg-opacity)) !important}.bg-danger{--bs-bg-opacity: 1;background-color:rgba(var(--bs-danger-rgb), var(--bs-bg-opacity)) !important}.bg-light{--bs-bg-opacity: 1;background-color:rgba(var(--bs-light-rgb), var(--bs-bg-opacity)) !important}.bg-dark{--bs-bg-opacity: 1;background-color:rgba(var(--bs-dark-rgb), var(--bs-bg-opacity)) !important}.bg-black{--bs-bg-opacity: 1;background-color:rgba(var(--bs-black-rgb), var(--bs-bg-opacity)) !important}.bg-white{--bs-bg-opacity: 1;background-color:rgba(var(--bs-white-rgb), var(--bs-bg-opacity)) !important}.bg-body{--bs-bg-opacity: 1;background-color:rgba(var(--bs-body-bg-rgb), var(--bs-bg-opacity)) !important}.bg-transparent{--bs-bg-opacity: 1;background-color:rgba(0,0,0,0) !important}.bg-body-secondary{--bs-bg-opacity: 1;background-color:rgba(var(--bs-secondary-bg-rgb), var(--bs-bg-opacity)) !important}.bg-body-tertiary{--bs-bg-opacity: 1;background-color:rgba(var(--bs-tertiary-bg-rgb), var(--bs-bg-opacity)) !important}.bg-opacity-10{--bs-bg-opacity: 0.1}.bg-opacity-25{--bs-bg-opacity: 0.25}.bg-opacity-50{--bs-bg-opacity: 0.5}.bg-opacity-75{--bs-bg-opacity: 0.75}.bg-opacity-100{--bs-bg-opacity: 1}.bg-primary-subtle{background-color:var(--bs-primary-bg-subtle) !important}.bg-secondary-subtle{background-color:var(--bs-secondary-bg-subtle) !important}.bg-success-subtle{background-color:var(--bs-success-bg-subtle) !important}.bg-info-subtle{background-color:var(--bs-info-bg-subtle) !important}.bg-warning-subtle{background-color:var(--bs-warning-bg-subtle) !important}.bg-danger-subtle{background-color:var(--bs-danger-bg-subtle) !important}.bg-light-subtle{background-color:var(--bs-light-bg-subtle) !important}.bg-dark-subtle{background-color:var(--bs-dark-bg-subtle) !important}.bg-gradient{background-image:var(--bs-gradient) !important}.user-select-all{user-select:all !important}.user-select-auto{user-select:auto !important}.user-select-none{user-select:none !important}.pe-none{pointer-events:none 
!important}.pe-auto{pointer-events:auto !important}.rounded{border-radius:var(--bs-border-radius) !important}.rounded-0{border-radius:0 !important}.rounded-1{border-radius:var(--bs-border-radius-sm) !important}.rounded-2{border-radius:var(--bs-border-radius) !important}.rounded-3{border-radius:var(--bs-border-radius-lg) !important}.rounded-4{border-radius:var(--bs-border-radius-xl) !important}.rounded-5{border-radius:var(--bs-border-radius-xxl) !important}.rounded-circle{border-radius:50% !important}.rounded-pill{border-radius:var(--bs-border-radius-pill) !important}.rounded-top{border-top-left-radius:var(--bs-border-radius) !important;border-top-right-radius:var(--bs-border-radius) !important}.rounded-top-0{border-top-left-radius:0 !important;border-top-right-radius:0 !important}.rounded-top-1{border-top-left-radius:var(--bs-border-radius-sm) !important;border-top-right-radius:var(--bs-border-radius-sm) !important}.rounded-top-2{border-top-left-radius:var(--bs-border-radius) !important;border-top-right-radius:var(--bs-border-radius) !important}.rounded-top-3{border-top-left-radius:var(--bs-border-radius-lg) !important;border-top-right-radius:var(--bs-border-radius-lg) !important}.rounded-top-4{border-top-left-radius:var(--bs-border-radius-xl) !important;border-top-right-radius:var(--bs-border-radius-xl) !important}.rounded-top-5{border-top-left-radius:var(--bs-border-radius-xxl) !important;border-top-right-radius:var(--bs-border-radius-xxl) !important}.rounded-top-circle{border-top-left-radius:50% !important;border-top-right-radius:50% !important}.rounded-top-pill{border-top-left-radius:var(--bs-border-radius-pill) !important;border-top-right-radius:var(--bs-border-radius-pill) !important}.rounded-end{border-top-right-radius:var(--bs-border-radius) !important;border-bottom-right-radius:var(--bs-border-radius) !important}.rounded-end-0{border-top-right-radius:0 !important;border-bottom-right-radius:0 !important}.rounded-end-1{border-top-right-radius:var(--bs-border-radius-sm) !important;border-bottom-right-radius:var(--bs-border-radius-sm) !important}.rounded-end-2{border-top-right-radius:var(--bs-border-radius) !important;border-bottom-right-radius:var(--bs-border-radius) !important}.rounded-end-3{border-top-right-radius:var(--bs-border-radius-lg) !important;border-bottom-right-radius:var(--bs-border-radius-lg) !important}.rounded-end-4{border-top-right-radius:var(--bs-border-radius-xl) !important;border-bottom-right-radius:var(--bs-border-radius-xl) !important}.rounded-end-5{border-top-right-radius:var(--bs-border-radius-xxl) !important;border-bottom-right-radius:var(--bs-border-radius-xxl) !important}.rounded-end-circle{border-top-right-radius:50% !important;border-bottom-right-radius:50% !important}.rounded-end-pill{border-top-right-radius:var(--bs-border-radius-pill) !important;border-bottom-right-radius:var(--bs-border-radius-pill) !important}.rounded-bottom{border-bottom-right-radius:var(--bs-border-radius) !important;border-bottom-left-radius:var(--bs-border-radius) !important}.rounded-bottom-0{border-bottom-right-radius:0 !important;border-bottom-left-radius:0 !important}.rounded-bottom-1{border-bottom-right-radius:var(--bs-border-radius-sm) !important;border-bottom-left-radius:var(--bs-border-radius-sm) !important}.rounded-bottom-2{border-bottom-right-radius:var(--bs-border-radius) !important;border-bottom-left-radius:var(--bs-border-radius) !important}.rounded-bottom-3{border-bottom-right-radius:var(--bs-border-radius-lg) 
!important;border-bottom-left-radius:var(--bs-border-radius-lg) !important}.rounded-bottom-4{border-bottom-right-radius:var(--bs-border-radius-xl) !important;border-bottom-left-radius:var(--bs-border-radius-xl) !important}.rounded-bottom-5{border-bottom-right-radius:var(--bs-border-radius-xxl) !important;border-bottom-left-radius:var(--bs-border-radius-xxl) !important}.rounded-bottom-circle{border-bottom-right-radius:50% !important;border-bottom-left-radius:50% !important}.rounded-bottom-pill{border-bottom-right-radius:var(--bs-border-radius-pill) !important;border-bottom-left-radius:var(--bs-border-radius-pill) !important}.rounded-start{border-bottom-left-radius:var(--bs-border-radius) !important;border-top-left-radius:var(--bs-border-radius) !important}.rounded-start-0{border-bottom-left-radius:0 !important;border-top-left-radius:0 !important}.rounded-start-1{border-bottom-left-radius:var(--bs-border-radius-sm) !important;border-top-left-radius:var(--bs-border-radius-sm) !important}.rounded-start-2{border-bottom-left-radius:var(--bs-border-radius) !important;border-top-left-radius:var(--bs-border-radius) !important}.rounded-start-3{border-bottom-left-radius:var(--bs-border-radius-lg) !important;border-top-left-radius:var(--bs-border-radius-lg) !important}.rounded-start-4{border-bottom-left-radius:var(--bs-border-radius-xl) !important;border-top-left-radius:var(--bs-border-radius-xl) !important}.rounded-start-5{border-bottom-left-radius:var(--bs-border-radius-xxl) !important;border-top-left-radius:var(--bs-border-radius-xxl) !important}.rounded-start-circle{border-bottom-left-radius:50% !important;border-top-left-radius:50% !important}.rounded-start-pill{border-bottom-left-radius:var(--bs-border-radius-pill) !important;border-top-left-radius:var(--bs-border-radius-pill) !important}.visible{visibility:visible !important}.invisible{visibility:hidden !important}.z-n1{z-index:-1 !important}.z-0{z-index:0 !important}.z-1{z-index:1 !important}.z-2{z-index:2 !important}.z-3{z-index:3 !important}@media(min-width: 576px){.float-sm-start{float:left !important}.float-sm-end{float:right !important}.float-sm-none{float:none !important}.object-fit-sm-contain{object-fit:contain !important}.object-fit-sm-cover{object-fit:cover !important}.object-fit-sm-fill{object-fit:fill !important}.object-fit-sm-scale{object-fit:scale-down !important}.object-fit-sm-none{object-fit:none !important}.d-sm-inline{display:inline !important}.d-sm-inline-block{display:inline-block !important}.d-sm-block{display:block !important}.d-sm-grid{display:grid !important}.d-sm-inline-grid{display:inline-grid !important}.d-sm-table{display:table !important}.d-sm-table-row{display:table-row !important}.d-sm-table-cell{display:table-cell !important}.d-sm-flex{display:flex !important}.d-sm-inline-flex{display:inline-flex !important}.d-sm-none{display:none !important}.flex-sm-fill{flex:1 1 auto !important}.flex-sm-row{flex-direction:row !important}.flex-sm-column{flex-direction:column !important}.flex-sm-row-reverse{flex-direction:row-reverse !important}.flex-sm-column-reverse{flex-direction:column-reverse !important}.flex-sm-grow-0{flex-grow:0 !important}.flex-sm-grow-1{flex-grow:1 !important}.flex-sm-shrink-0{flex-shrink:0 !important}.flex-sm-shrink-1{flex-shrink:1 !important}.flex-sm-wrap{flex-wrap:wrap !important}.flex-sm-nowrap{flex-wrap:nowrap !important}.flex-sm-wrap-reverse{flex-wrap:wrap-reverse !important}.justify-content-sm-start{justify-content:flex-start !important}.justify-content-sm-end{justify-content:flex-end 
36%), #20c997 var(--bg-gradient-end, 180%)) #4a5ace;color:#fff}.bg-gradient-indigo-cyan{--bslib-color-fg: #fff;--bslib-color-bg: #7a2bdc;background:linear-gradient(var(--bg-gradient-deg, 140deg), #6610f2 var(--bg-gradient-start, 36%), #9954bb var(--bg-gradient-end, 180%)) #7a2bdc;color:#fff}.bg-gradient-purple-blue{--bslib-color-fg: #fff;--bslib-color-bg: #4a58a5;background:linear-gradient(var(--bg-gradient-deg, 140deg), #613d7c var(--bg-gradient-start, 36%), #2780e3 var(--bg-gradient-end, 180%)) #4a58a5;color:#fff}.bg-gradient-purple-indigo{--bslib-color-fg: #fff;--bslib-color-bg: #632bab;background:linear-gradient(var(--bg-gradient-deg, 140deg), #613d7c var(--bg-gradient-start, 36%), #6610f2 var(--bg-gradient-end, 180%)) #632bab;color:#fff}.bg-gradient-purple-pink{--bslib-color-fg: #fff;--bslib-color-bg: #973d82;background:linear-gradient(var(--bg-gradient-deg, 140deg), #613d7c var(--bg-gradient-start, 36%), #e83e8c var(--bg-gradient-end, 180%)) #973d82;color:#fff}.bg-gradient-purple-red{--bslib-color-fg: #fff;--bslib-color-bg: #a02561;background:linear-gradient(var(--bg-gradient-deg, 140deg), #613d7c var(--bg-gradient-start, 36%), #ff0039 var(--bg-gradient-end, 180%)) #a02561;color:#fff}.bg-gradient-purple-orange{--bslib-color-fg: #fff;--bslib-color-bg: #9a6a6a;background:linear-gradient(var(--bg-gradient-deg, 140deg), #613d7c var(--bg-gradient-start, 36%), #f0ad4e var(--bg-gradient-end, 180%)) #9a6a6a;color:#fff}.bg-gradient-purple-yellow{--bslib-color-fg: #fff;--bslib-color-bg: #a05354;background:linear-gradient(var(--bg-gradient-deg, 140deg), #613d7c var(--bg-gradient-start, 36%), #ff7518 var(--bg-gradient-end, 180%)) #a05354;color:#fff}.bg-gradient-purple-green{--bslib-color-fg: #fff;--bslib-color-bg: #536d54;background:linear-gradient(var(--bg-gradient-deg, 140deg), #613d7c var(--bg-gradient-start, 36%), #3fb618 var(--bg-gradient-end, 180%)) #536d54;color:#fff}.bg-gradient-purple-teal{--bslib-color-fg: #fff;--bslib-color-bg: #477587;background:linear-gradient(var(--bg-gradient-deg, 140deg), #613d7c var(--bg-gradient-start, 36%), #20c997 var(--bg-gradient-end, 180%)) #477587;color:#fff}.bg-gradient-purple-cyan{--bslib-color-fg: #fff;--bslib-color-bg: #774695;background:linear-gradient(var(--bg-gradient-deg, 140deg), #613d7c var(--bg-gradient-start, 36%), #9954bb var(--bg-gradient-end, 180%)) #774695;color:#fff}.bg-gradient-pink-blue{--bslib-color-fg: #fff;--bslib-color-bg: #9b58af;background:linear-gradient(var(--bg-gradient-deg, 140deg), #e83e8c var(--bg-gradient-start, 36%), #2780e3 var(--bg-gradient-end, 180%)) #9b58af;color:#fff}.bg-gradient-pink-indigo{--bslib-color-fg: #fff;--bslib-color-bg: #b42cb5;background:linear-gradient(var(--bg-gradient-deg, 140deg), #e83e8c var(--bg-gradient-start, 36%), #6610f2 var(--bg-gradient-end, 180%)) #b42cb5;color:#fff}.bg-gradient-pink-purple{--bslib-color-fg: #fff;--bslib-color-bg: #b23e86;background:linear-gradient(var(--bg-gradient-deg, 140deg), #e83e8c var(--bg-gradient-start, 36%), #613d7c var(--bg-gradient-end, 180%)) #b23e86;color:#fff}.bg-gradient-pink-red{--bslib-color-fg: #fff;--bslib-color-bg: #f1256b;background:linear-gradient(var(--bg-gradient-deg, 140deg), #e83e8c var(--bg-gradient-start, 36%), #ff0039 var(--bg-gradient-end, 180%)) #f1256b;color:#fff}.bg-gradient-pink-orange{--bslib-color-fg: #fff;--bslib-color-bg: #eb6a73;background:linear-gradient(var(--bg-gradient-deg, 140deg), #e83e8c var(--bg-gradient-start, 36%), #f0ad4e var(--bg-gradient-end, 180%)) #eb6a73;color:#fff}.bg-gradient-pink-yellow{--bslib-color-fg: 
#fff;--bslib-color-bg: #f1545e;background:linear-gradient(var(--bg-gradient-deg, 140deg), #e83e8c var(--bg-gradient-start, 36%), #ff7518 var(--bg-gradient-end, 180%)) #f1545e;color:#fff}.bg-gradient-pink-green{--bslib-color-fg: #fff;--bslib-color-bg: #a46e5e;background:linear-gradient(var(--bg-gradient-deg, 140deg), #e83e8c var(--bg-gradient-start, 36%), #3fb618 var(--bg-gradient-end, 180%)) #a46e5e;color:#fff}.bg-gradient-pink-teal{--bslib-color-fg: #fff;--bslib-color-bg: #987690;background:linear-gradient(var(--bg-gradient-deg, 140deg), #e83e8c var(--bg-gradient-start, 36%), #20c997 var(--bg-gradient-end, 180%)) #987690;color:#fff}.bg-gradient-pink-cyan{--bslib-color-fg: #fff;--bslib-color-bg: #c8479f;background:linear-gradient(var(--bg-gradient-deg, 140deg), #e83e8c var(--bg-gradient-start, 36%), #9954bb var(--bg-gradient-end, 180%)) #c8479f;color:#fff}.bg-gradient-red-blue{--bslib-color-fg: #fff;--bslib-color-bg: #a9337d;background:linear-gradient(var(--bg-gradient-deg, 140deg), #ff0039 var(--bg-gradient-start, 36%), #2780e3 var(--bg-gradient-end, 180%)) #a9337d;color:#fff}.bg-gradient-red-indigo{--bslib-color-fg: #fff;--bslib-color-bg: #c20683;background:linear-gradient(var(--bg-gradient-deg, 140deg), #ff0039 var(--bg-gradient-start, 36%), #6610f2 var(--bg-gradient-end, 180%)) #c20683;color:#fff}.bg-gradient-red-purple{--bslib-color-fg: #fff;--bslib-color-bg: #c01854;background:linear-gradient(var(--bg-gradient-deg, 140deg), #ff0039 var(--bg-gradient-start, 36%), #613d7c var(--bg-gradient-end, 180%)) #c01854;color:#fff}.bg-gradient-red-pink{--bslib-color-fg: #fff;--bslib-color-bg: #f6195a;background:linear-gradient(var(--bg-gradient-deg, 140deg), #ff0039 var(--bg-gradient-start, 36%), #e83e8c var(--bg-gradient-end, 180%)) #f6195a;color:#fff}.bg-gradient-red-orange{--bslib-color-fg: #fff;--bslib-color-bg: #f94541;background:linear-gradient(var(--bg-gradient-deg, 140deg), #ff0039 var(--bg-gradient-start, 36%), #f0ad4e var(--bg-gradient-end, 180%)) #f94541;color:#fff}.bg-gradient-red-yellow{--bslib-color-fg: #fff;--bslib-color-bg: #ff2f2c;background:linear-gradient(var(--bg-gradient-deg, 140deg), #ff0039 var(--bg-gradient-start, 36%), #ff7518 var(--bg-gradient-end, 180%)) #ff2f2c;color:#fff}.bg-gradient-red-green{--bslib-color-fg: #fff;--bslib-color-bg: #b2492c;background:linear-gradient(var(--bg-gradient-deg, 140deg), #ff0039 var(--bg-gradient-start, 36%), #3fb618 var(--bg-gradient-end, 180%)) #b2492c;color:#fff}.bg-gradient-red-teal{--bslib-color-fg: #fff;--bslib-color-bg: #a6505f;background:linear-gradient(var(--bg-gradient-deg, 140deg), #ff0039 var(--bg-gradient-start, 36%), #20c997 var(--bg-gradient-end, 180%)) #a6505f;color:#fff}.bg-gradient-red-cyan{--bslib-color-fg: #fff;--bslib-color-bg: #d6226d;background:linear-gradient(var(--bg-gradient-deg, 140deg), #ff0039 var(--bg-gradient-start, 36%), #9954bb var(--bg-gradient-end, 180%)) #d6226d;color:#fff}.bg-gradient-orange-blue{--bslib-color-fg: #fff;--bslib-color-bg: #a09b8a;background:linear-gradient(var(--bg-gradient-deg, 140deg), #f0ad4e var(--bg-gradient-start, 36%), #2780e3 var(--bg-gradient-end, 180%)) #a09b8a;color:#fff}.bg-gradient-orange-indigo{--bslib-color-fg: #fff;--bslib-color-bg: #b96e90;background:linear-gradient(var(--bg-gradient-deg, 140deg), #f0ad4e var(--bg-gradient-start, 36%), #6610f2 var(--bg-gradient-end, 180%)) #b96e90;color:#fff}.bg-gradient-orange-purple{--bslib-color-fg: #fff;--bslib-color-bg: #b78060;background:linear-gradient(var(--bg-gradient-deg, 140deg), #f0ad4e var(--bg-gradient-start, 36%), #613d7c 
var(--bg-gradient-end, 180%)) #b78060;color:#fff}.bg-gradient-orange-pink{--bslib-color-fg: #fff;--bslib-color-bg: #ed8167;background:linear-gradient(var(--bg-gradient-deg, 140deg), #f0ad4e var(--bg-gradient-start, 36%), #e83e8c var(--bg-gradient-end, 180%)) #ed8167;color:#fff}.bg-gradient-orange-red{--bslib-color-fg: #fff;--bslib-color-bg: #f66846;background:linear-gradient(var(--bg-gradient-deg, 140deg), #f0ad4e var(--bg-gradient-start, 36%), #ff0039 var(--bg-gradient-end, 180%)) #f66846;color:#fff}.bg-gradient-orange-yellow{--bslib-color-fg: #000;--bslib-color-bg: #f69738;background:linear-gradient(var(--bg-gradient-deg, 140deg), #f0ad4e var(--bg-gradient-start, 36%), #ff7518 var(--bg-gradient-end, 180%)) #f69738;color:#000}.bg-gradient-orange-green{--bslib-color-fg: #000;--bslib-color-bg: #a9b138;background:linear-gradient(var(--bg-gradient-deg, 140deg), #f0ad4e var(--bg-gradient-start, 36%), #3fb618 var(--bg-gradient-end, 180%)) #a9b138;color:#000}.bg-gradient-orange-teal{--bslib-color-fg: #000;--bslib-color-bg: #9db86b;background:linear-gradient(var(--bg-gradient-deg, 140deg), #f0ad4e var(--bg-gradient-start, 36%), #20c997 var(--bg-gradient-end, 180%)) #9db86b;color:#000}.bg-gradient-orange-cyan{--bslib-color-fg: #fff;--bslib-color-bg: #cd897a;background:linear-gradient(var(--bg-gradient-deg, 140deg), #f0ad4e var(--bg-gradient-start, 36%), #9954bb var(--bg-gradient-end, 180%)) #cd897a;color:#fff}.bg-gradient-yellow-blue{--bslib-color-fg: #fff;--bslib-color-bg: #a97969;background:linear-gradient(var(--bg-gradient-deg, 140deg), #ff7518 var(--bg-gradient-start, 36%), #2780e3 var(--bg-gradient-end, 180%)) #a97969;color:#fff}.bg-gradient-yellow-indigo{--bslib-color-fg: #fff;--bslib-color-bg: #c24d6f;background:linear-gradient(var(--bg-gradient-deg, 140deg), #ff7518 var(--bg-gradient-start, 36%), #6610f2 var(--bg-gradient-end, 180%)) #c24d6f;color:#fff}.bg-gradient-yellow-purple{--bslib-color-fg: #fff;--bslib-color-bg: #c05f40;background:linear-gradient(var(--bg-gradient-deg, 140deg), #ff7518 var(--bg-gradient-start, 36%), #613d7c var(--bg-gradient-end, 180%)) #c05f40;color:#fff}.bg-gradient-yellow-pink{--bslib-color-fg: #fff;--bslib-color-bg: #f65f46;background:linear-gradient(var(--bg-gradient-deg, 140deg), #ff7518 var(--bg-gradient-start, 36%), #e83e8c var(--bg-gradient-end, 180%)) #f65f46;color:#fff}.bg-gradient-yellow-red{--bslib-color-fg: #fff;--bslib-color-bg: #ff4625;background:linear-gradient(var(--bg-gradient-deg, 140deg), #ff7518 var(--bg-gradient-start, 36%), #ff0039 var(--bg-gradient-end, 180%)) #ff4625;color:#fff}.bg-gradient-yellow-orange{--bslib-color-fg: #000;--bslib-color-bg: #f98b2e;background:linear-gradient(var(--bg-gradient-deg, 140deg), #ff7518 var(--bg-gradient-start, 36%), #f0ad4e var(--bg-gradient-end, 180%)) #f98b2e;color:#000}.bg-gradient-yellow-green{--bslib-color-fg: #fff;--bslib-color-bg: #b28f18;background:linear-gradient(var(--bg-gradient-deg, 140deg), #ff7518 var(--bg-gradient-start, 36%), #3fb618 var(--bg-gradient-end, 180%)) #b28f18;color:#fff}.bg-gradient-yellow-teal{--bslib-color-fg: #fff;--bslib-color-bg: #a6974b;background:linear-gradient(var(--bg-gradient-deg, 140deg), #ff7518 var(--bg-gradient-start, 36%), #20c997 var(--bg-gradient-end, 180%)) #a6974b;color:#fff}.bg-gradient-yellow-cyan{--bslib-color-fg: #fff;--bslib-color-bg: #d66859;background:linear-gradient(var(--bg-gradient-deg, 140deg), #ff7518 var(--bg-gradient-start, 36%), #9954bb var(--bg-gradient-end, 180%)) #d66859;color:#fff}.bg-gradient-green-blue{--bslib-color-fg: 
#fff;--bslib-color-bg: #35a069;background:linear-gradient(var(--bg-gradient-deg, 140deg), #3fb618 var(--bg-gradient-start, 36%), #2780e3 var(--bg-gradient-end, 180%)) #35a069;color:#fff}.bg-gradient-green-indigo{--bslib-color-fg: #fff;--bslib-color-bg: #4f746f;background:linear-gradient(var(--bg-gradient-deg, 140deg), #3fb618 var(--bg-gradient-start, 36%), #6610f2 var(--bg-gradient-end, 180%)) #4f746f;color:#fff}.bg-gradient-green-purple{--bslib-color-fg: #fff;--bslib-color-bg: #4d8640;background:linear-gradient(var(--bg-gradient-deg, 140deg), #3fb618 var(--bg-gradient-start, 36%), #613d7c var(--bg-gradient-end, 180%)) #4d8640;color:#fff}.bg-gradient-green-pink{--bslib-color-fg: #fff;--bslib-color-bg: #838646;background:linear-gradient(var(--bg-gradient-deg, 140deg), #3fb618 var(--bg-gradient-start, 36%), #e83e8c var(--bg-gradient-end, 180%)) #838646;color:#fff}.bg-gradient-green-red{--bslib-color-fg: #fff;--bslib-color-bg: #8c6d25;background:linear-gradient(var(--bg-gradient-deg, 140deg), #3fb618 var(--bg-gradient-start, 36%), #ff0039 var(--bg-gradient-end, 180%)) #8c6d25;color:#fff}.bg-gradient-green-orange{--bslib-color-fg: #000;--bslib-color-bg: #86b22e;background:linear-gradient(var(--bg-gradient-deg, 140deg), #3fb618 var(--bg-gradient-start, 36%), #f0ad4e var(--bg-gradient-end, 180%)) #86b22e;color:#000}.bg-gradient-green-yellow{--bslib-color-fg: #fff;--bslib-color-bg: #8c9c18;background:linear-gradient(var(--bg-gradient-deg, 140deg), #3fb618 var(--bg-gradient-start, 36%), #ff7518 var(--bg-gradient-end, 180%)) #8c9c18;color:#fff}.bg-gradient-green-teal{--bslib-color-fg: #000;--bslib-color-bg: #33be4b;background:linear-gradient(var(--bg-gradient-deg, 140deg), #3fb618 var(--bg-gradient-start, 36%), #20c997 var(--bg-gradient-end, 180%)) #33be4b;color:#000}.bg-gradient-green-cyan{--bslib-color-fg: #fff;--bslib-color-bg: #638f59;background:linear-gradient(var(--bg-gradient-deg, 140deg), #3fb618 var(--bg-gradient-start, 36%), #9954bb var(--bg-gradient-end, 180%)) #638f59;color:#fff}.bg-gradient-teal-blue{--bslib-color-fg: #fff;--bslib-color-bg: #23acb5;background:linear-gradient(var(--bg-gradient-deg, 140deg), #20c997 var(--bg-gradient-start, 36%), #2780e3 var(--bg-gradient-end, 180%)) #23acb5;color:#fff}.bg-gradient-teal-indigo{--bslib-color-fg: #fff;--bslib-color-bg: #3c7fbb;background:linear-gradient(var(--bg-gradient-deg, 140deg), #20c997 var(--bg-gradient-start, 36%), #6610f2 var(--bg-gradient-end, 180%)) #3c7fbb;color:#fff}.bg-gradient-teal-purple{--bslib-color-fg: #fff;--bslib-color-bg: #3a918c;background:linear-gradient(var(--bg-gradient-deg, 140deg), #20c997 var(--bg-gradient-start, 36%), #613d7c var(--bg-gradient-end, 180%)) #3a918c;color:#fff}.bg-gradient-teal-pink{--bslib-color-fg: #fff;--bslib-color-bg: #709193;background:linear-gradient(var(--bg-gradient-deg, 140deg), #20c997 var(--bg-gradient-start, 36%), #e83e8c var(--bg-gradient-end, 180%)) #709193;color:#fff}.bg-gradient-teal-red{--bslib-color-fg: #fff;--bslib-color-bg: #797971;background:linear-gradient(var(--bg-gradient-deg, 140deg), #20c997 var(--bg-gradient-start, 36%), #ff0039 var(--bg-gradient-end, 180%)) #797971;color:#fff}.bg-gradient-teal-orange{--bslib-color-fg: #000;--bslib-color-bg: #73be7a;background:linear-gradient(var(--bg-gradient-deg, 140deg), #20c997 var(--bg-gradient-start, 36%), #f0ad4e var(--bg-gradient-end, 180%)) #73be7a;color:#000}.bg-gradient-teal-yellow{--bslib-color-fg: #fff;--bslib-color-bg: #79a764;background:linear-gradient(var(--bg-gradient-deg, 140deg), #20c997 var(--bg-gradient-start, 
36%), #ff7518 var(--bg-gradient-end, 180%)) #79a764;color:#fff}.bg-gradient-teal-green{--bslib-color-fg: #000;--bslib-color-bg: #2cc164;background:linear-gradient(var(--bg-gradient-deg, 140deg), #20c997 var(--bg-gradient-start, 36%), #3fb618 var(--bg-gradient-end, 180%)) #2cc164;color:#000}.bg-gradient-teal-cyan{--bslib-color-fg: #fff;--bslib-color-bg: #509aa5;background:linear-gradient(var(--bg-gradient-deg, 140deg), #20c997 var(--bg-gradient-start, 36%), #9954bb var(--bg-gradient-end, 180%)) #509aa5;color:#fff}.bg-gradient-cyan-blue{--bslib-color-fg: #fff;--bslib-color-bg: #6b66cb;background:linear-gradient(var(--bg-gradient-deg, 140deg), #9954bb var(--bg-gradient-start, 36%), #2780e3 var(--bg-gradient-end, 180%)) #6b66cb;color:#fff}.bg-gradient-cyan-indigo{--bslib-color-fg: #fff;--bslib-color-bg: #8539d1;background:linear-gradient(var(--bg-gradient-deg, 140deg), #9954bb var(--bg-gradient-start, 36%), #6610f2 var(--bg-gradient-end, 180%)) #8539d1;color:#fff}.bg-gradient-cyan-purple{--bslib-color-fg: #fff;--bslib-color-bg: #834ba2;background:linear-gradient(var(--bg-gradient-deg, 140deg), #9954bb var(--bg-gradient-start, 36%), #613d7c var(--bg-gradient-end, 180%)) #834ba2;color:#fff}.bg-gradient-cyan-pink{--bslib-color-fg: #fff;--bslib-color-bg: #b94ba8;background:linear-gradient(var(--bg-gradient-deg, 140deg), #9954bb var(--bg-gradient-start, 36%), #e83e8c var(--bg-gradient-end, 180%)) #b94ba8;color:#fff}.bg-gradient-cyan-red{--bslib-color-fg: #fff;--bslib-color-bg: #c23287;background:linear-gradient(var(--bg-gradient-deg, 140deg), #9954bb var(--bg-gradient-start, 36%), #ff0039 var(--bg-gradient-end, 180%)) #c23287;color:#fff}.bg-gradient-cyan-orange{--bslib-color-fg: #fff;--bslib-color-bg: #bc788f;background:linear-gradient(var(--bg-gradient-deg, 140deg), #9954bb var(--bg-gradient-start, 36%), #f0ad4e var(--bg-gradient-end, 180%)) #bc788f;color:#fff}.bg-gradient-cyan-yellow{--bslib-color-fg: #fff;--bslib-color-bg: #c2617a;background:linear-gradient(var(--bg-gradient-deg, 140deg), #9954bb var(--bg-gradient-start, 36%), #ff7518 var(--bg-gradient-end, 180%)) #c2617a;color:#fff}.bg-gradient-cyan-green{--bslib-color-fg: #fff;--bslib-color-bg: #757b7a;background:linear-gradient(var(--bg-gradient-deg, 140deg), #9954bb var(--bg-gradient-start, 36%), #3fb618 var(--bg-gradient-end, 180%)) #757b7a;color:#fff}.bg-gradient-cyan-teal{--bslib-color-fg: #fff;--bslib-color-bg: #6983ad;background:linear-gradient(var(--bg-gradient-deg, 140deg), #9954bb var(--bg-gradient-start, 36%), #20c997 var(--bg-gradient-end, 180%)) #6983ad;color:#fff}@media(min-width: 576px){.nav:not(.nav-hidden){display:flex !important;display:-webkit-flex !important}.nav:not(.nav-hidden):not(.nav-stacked):not(.flex-column){float:none !important}.nav:not(.nav-hidden):not(.nav-stacked):not(.flex-column)>.bslib-nav-spacer{margin-left:auto !important}.nav:not(.nav-hidden):not(.nav-stacked):not(.flex-column)>.form-inline{margin-top:auto;margin-bottom:auto}.nav:not(.nav-hidden).nav-stacked{flex-direction:column;-webkit-flex-direction:column;height:100%}.nav:not(.nav-hidden).nav-stacked>.bslib-nav-spacer{margin-top:auto !important}}.accordion .accordion-header{font-size:calc(1.29rem + 0.48vw);margin-top:0;margin-bottom:.5rem;font-weight:400;line-height:1.2;color:var(--bs-heading-color);margin-bottom:0}@media(min-width: 1200px){.accordion .accordion-header{font-size:1.65rem}}.accordion .accordion-icon:not(:empty){margin-right:.75rem;display:flex}.accordion .accordion-button:not(.collapsed){box-shadow:none}.accordion 
.accordion-button:not(.collapsed):focus{box-shadow:var(--bs-accordion-btn-focus-box-shadow)}.bslib-sidebar-layout{--bslib-sidebar-transition-duration: 500ms;--bslib-sidebar-transition-easing-x: cubic-bezier(0.8, 0.78, 0.22, 1.07);--bslib-sidebar-border: var(--bs-card-border-width, 1px) solid var(--bs-card-border-color, rgba(0, 0, 0, 0.175));--bslib-sidebar-border-radius: var(--bs-border-radius);--bslib-sidebar-vert-border: var(--bs-card-border-width, 1px) solid var(--bs-card-border-color, rgba(0, 0, 0, 0.175));--bslib-sidebar-bg: rgba(var(--bs-emphasis-color-rgb, 0, 0, 0), 0.05);--bslib-sidebar-fg: var(--bs-emphasis-color, black);--bslib-sidebar-main-fg: var(--bs-card-color, var(--bs-body-color));--bslib-sidebar-main-bg: var(--bs-card-bg, var(--bs-body-bg));--bslib-sidebar-toggle-bg: rgba(var(--bs-emphasis-color-rgb, 0, 0, 0), 0.1);--bslib-sidebar-padding: calc(var(--bslib-spacer) * 1.5);--bslib-sidebar-icon-size: var(--bslib-spacer, 1rem);--bslib-sidebar-icon-button-size: calc(var(--bslib-sidebar-icon-size, 1rem) * 2);--bslib-sidebar-padding-icon: calc(var(--bslib-sidebar-icon-button-size, 2rem) * 1.5);--bslib-collapse-toggle-border-radius: var(--bs-border-radius, 0.25rem);--bslib-collapse-toggle-transform: 0deg;--bslib-sidebar-toggle-transition-easing: cubic-bezier(1, 0, 0, 1);--bslib-collapse-toggle-right-transform: 180deg;--bslib-sidebar-column-main: minmax(0, 1fr);display:grid !important;grid-template-columns:min(100% - var(--bslib-sidebar-icon-size),var(--bslib-sidebar-width, 250px)) var(--bslib-sidebar-column-main);position:relative;transition:grid-template-columns ease-in-out var(--bslib-sidebar-transition-duration);border:var(--bslib-sidebar-border);border-radius:var(--bslib-sidebar-border-radius)}@media(prefers-reduced-motion: reduce){.bslib-sidebar-layout{transition:none}}.bslib-sidebar-layout[data-bslib-sidebar-border=false]{border:none}.bslib-sidebar-layout[data-bslib-sidebar-border-radius=false]{border-radius:initial}.bslib-sidebar-layout>.main,.bslib-sidebar-layout>.sidebar{grid-row:1/2;border-radius:inherit;overflow:auto}.bslib-sidebar-layout>.main{grid-column:2/3;border-top-left-radius:0;border-bottom-left-radius:0;padding:var(--bslib-sidebar-padding);transition:padding var(--bslib-sidebar-transition-easing-x) var(--bslib-sidebar-transition-duration);color:var(--bslib-sidebar-main-fg);background-color:var(--bslib-sidebar-main-bg)}.bslib-sidebar-layout>.sidebar{grid-column:1/2;width:100%;height:100%;border-right:var(--bslib-sidebar-vert-border);border-top-right-radius:0;border-bottom-right-radius:0;color:var(--bslib-sidebar-fg);background-color:var(--bslib-sidebar-bg);backdrop-filter:blur(5px)}.bslib-sidebar-layout>.sidebar>.sidebar-content{display:flex;flex-direction:column;gap:var(--bslib-spacer, 1rem);padding:var(--bslib-sidebar-padding);padding-top:var(--bslib-sidebar-padding-icon)}.bslib-sidebar-layout>.sidebar>.sidebar-content>:last-child:not(.sidebar-title){margin-bottom:0}.bslib-sidebar-layout>.sidebar>.sidebar-content>.accordion{margin-left:calc(-1*var(--bslib-sidebar-padding));margin-right:calc(-1*var(--bslib-sidebar-padding))}.bslib-sidebar-layout>.sidebar>.sidebar-content>.accordion:last-child{margin-bottom:calc(-1*var(--bslib-sidebar-padding))}.bslib-sidebar-layout>.sidebar>.sidebar-content>.accordion:not(:last-child){margin-bottom:1rem}.bslib-sidebar-layout>.sidebar>.sidebar-content>.accordion .accordion-body{display:flex;flex-direction:column}.bslib-sidebar-layout>.sidebar>.sidebar-content>.accordion:not(:first-child) 
.accordion-item:first-child{border-top:var(--bs-accordion-border-width) solid var(--bs-accordion-border-color)}.bslib-sidebar-layout>.sidebar>.sidebar-content>.accordion:not(:last-child) .accordion-item:last-child{border-bottom:var(--bs-accordion-border-width) solid var(--bs-accordion-border-color)}.bslib-sidebar-layout>.sidebar>.sidebar-content.has-accordion>.sidebar-title{border-bottom:none;padding-bottom:0}.bslib-sidebar-layout>.sidebar .shiny-input-container{width:100%}.bslib-sidebar-layout[data-bslib-sidebar-open=always]>.sidebar>.sidebar-content{padding-top:var(--bslib-sidebar-padding)}.bslib-sidebar-layout>.collapse-toggle{grid-row:1/2;grid-column:1/2;display:inline-flex;align-items:center;position:absolute;right:calc(var(--bslib-sidebar-icon-size));top:calc(var(--bslib-sidebar-icon-size, 1rem)/2);border:none;border-radius:var(--bslib-collapse-toggle-border-radius);height:var(--bslib-sidebar-icon-button-size, 2rem);width:var(--bslib-sidebar-icon-button-size, 2rem);display:flex;align-items:center;justify-content:center;padding:0;color:var(--bslib-sidebar-fg);background-color:unset;transition:color var(--bslib-sidebar-transition-easing-x) var(--bslib-sidebar-transition-duration),top var(--bslib-sidebar-transition-easing-x) var(--bslib-sidebar-transition-duration),right var(--bslib-sidebar-transition-easing-x) var(--bslib-sidebar-transition-duration),left var(--bslib-sidebar-transition-easing-x) var(--bslib-sidebar-transition-duration)}.bslib-sidebar-layout>.collapse-toggle:hover{background-color:var(--bslib-sidebar-toggle-bg)}.bslib-sidebar-layout>.collapse-toggle>.collapse-icon{opacity:.8;width:var(--bslib-sidebar-icon-size);height:var(--bslib-sidebar-icon-size);transform:rotateY(var(--bslib-collapse-toggle-transform));transition:transform var(--bslib-sidebar-toggle-transition-easing) var(--bslib-sidebar-transition-duration)}.bslib-sidebar-layout>.collapse-toggle:hover>.collapse-icon{opacity:1}.bslib-sidebar-layout .sidebar-title{font-size:1.25rem;line-height:1.25;margin-top:0;margin-bottom:1rem;padding-bottom:1rem;border-bottom:var(--bslib-sidebar-border)}.bslib-sidebar-layout.sidebar-right{grid-template-columns:var(--bslib-sidebar-column-main) min(100% - var(--bslib-sidebar-icon-size),var(--bslib-sidebar-width, 250px))}.bslib-sidebar-layout.sidebar-right>.main{grid-column:1/2;border-top-right-radius:0;border-bottom-right-radius:0;border-top-left-radius:inherit;border-bottom-left-radius:inherit}.bslib-sidebar-layout.sidebar-right>.sidebar{grid-column:2/3;border-right:none;border-left:var(--bslib-sidebar-vert-border);border-top-left-radius:0;border-bottom-left-radius:0}.bslib-sidebar-layout.sidebar-right>.collapse-toggle{grid-column:2/3;left:var(--bslib-sidebar-icon-size);right:unset;border:var(--bslib-collapse-toggle-border)}.bslib-sidebar-layout.sidebar-right>.collapse-toggle>.collapse-icon{transform:rotateY(var(--bslib-collapse-toggle-right-transform))}.bslib-sidebar-layout.sidebar-collapsed{--bslib-collapse-toggle-transform: 180deg;--bslib-collapse-toggle-right-transform: 0deg;--bslib-sidebar-vert-border: none;grid-template-columns:0 minmax(0, 1fr)}.bslib-sidebar-layout.sidebar-collapsed.sidebar-right{grid-template-columns:minmax(0, 1fr) 
0}.bslib-sidebar-layout.sidebar-collapsed:not(.transitioning)>.sidebar>*{display:none}.bslib-sidebar-layout.sidebar-collapsed>.main{border-radius:inherit}.bslib-sidebar-layout.sidebar-collapsed:not(.sidebar-right)>.main{padding-left:var(--bslib-sidebar-padding-icon)}.bslib-sidebar-layout.sidebar-collapsed.sidebar-right>.main{padding-right:var(--bslib-sidebar-padding-icon)}.bslib-sidebar-layout.sidebar-collapsed>.collapse-toggle{color:var(--bslib-sidebar-main-fg);top:calc(var(--bslib-sidebar-overlap-counter, 0)*(var(--bslib-sidebar-icon-size) + var(--bslib-sidebar-padding)) + var(--bslib-sidebar-icon-size, 1rem)/2);right:calc(-2.5*var(--bslib-sidebar-icon-size) - var(--bs-card-border-width, 1px))}.bslib-sidebar-layout.sidebar-collapsed.sidebar-right>.collapse-toggle{left:calc(-2.5*var(--bslib-sidebar-icon-size) - var(--bs-card-border-width, 1px));right:unset}@media(min-width: 576px){.bslib-sidebar-layout.transitioning>.sidebar>.sidebar-content{display:none}}@media(max-width: 575.98px){.bslib-sidebar-layout[data-bslib-sidebar-open=desktop]{--bslib-sidebar-js-init-collapsed: true}.bslib-sidebar-layout>.sidebar,.bslib-sidebar-layout.sidebar-right>.sidebar{border:none}.bslib-sidebar-layout>.main,.bslib-sidebar-layout.sidebar-right>.main{grid-column:1/3}.bslib-sidebar-layout[data-bslib-sidebar-open=always]{display:block !important}.bslib-sidebar-layout[data-bslib-sidebar-open=always]>.sidebar{max-height:var(--bslib-sidebar-max-height-mobile);overflow-y:auto;border-top:var(--bslib-sidebar-vert-border)}.bslib-sidebar-layout:not([data-bslib-sidebar-open=always]){grid-template-columns:100% 0}.bslib-sidebar-layout:not([data-bslib-sidebar-open=always]):not(.sidebar-collapsed)>.sidebar{z-index:1}.bslib-sidebar-layout:not([data-bslib-sidebar-open=always]):not(.sidebar-collapsed)>.collapse-toggle{z-index:1}.bslib-sidebar-layout:not([data-bslib-sidebar-open=always]).sidebar-right{grid-template-columns:0 100%}.bslib-sidebar-layout:not([data-bslib-sidebar-open=always]).sidebar-collapsed{grid-template-columns:0 100%}.bslib-sidebar-layout:not([data-bslib-sidebar-open=always]).sidebar-collapsed.sidebar-right{grid-template-columns:100% 0}.bslib-sidebar-layout:not([data-bslib-sidebar-open=always]):not(.sidebar-right)>.main{padding-left:var(--bslib-sidebar-padding-icon)}.bslib-sidebar-layout:not([data-bslib-sidebar-open=always]).sidebar-right>.main{padding-right:var(--bslib-sidebar-padding-icon)}.bslib-sidebar-layout:not([data-bslib-sidebar-open=always])>.main{opacity:0;transition:opacity var(--bslib-sidebar-transition-easing-x) var(--bslib-sidebar-transition-duration)}.bslib-sidebar-layout:not([data-bslib-sidebar-open=always]).sidebar-collapsed>.main{opacity:1}}.bslib-grid{display:grid !important;gap:var(--bslib-spacer, 1rem);height:var(--bslib-grid-height)}.bslib-grid.grid{grid-template-columns:repeat(var(--bs-columns, 12), minmax(0, 1fr));grid-template-rows:unset;grid-auto-rows:var(--bslib-grid--row-heights);--bslib-grid--row-heights--xs: unset;--bslib-grid--row-heights--sm: unset;--bslib-grid--row-heights--md: unset;--bslib-grid--row-heights--lg: unset;--bslib-grid--row-heights--xl: unset;--bslib-grid--row-heights--xxl: unset}.bslib-grid.grid.bslib-grid--row-heights--xs{--bslib-grid--row-heights: var(--bslib-grid--row-heights--xs)}@media(min-width: 576px){.bslib-grid.grid.bslib-grid--row-heights--sm{--bslib-grid--row-heights: var(--bslib-grid--row-heights--sm)}}@media(min-width: 768px){.bslib-grid.grid.bslib-grid--row-heights--md{--bslib-grid--row-heights: var(--bslib-grid--row-heights--md)}}@media(min-width: 
992px){.bslib-grid.grid.bslib-grid--row-heights--lg{--bslib-grid--row-heights: var(--bslib-grid--row-heights--lg)}}@media(min-width: 1200px){.bslib-grid.grid.bslib-grid--row-heights--xl{--bslib-grid--row-heights: var(--bslib-grid--row-heights--xl)}}@media(min-width: 1400px){.bslib-grid.grid.bslib-grid--row-heights--xxl{--bslib-grid--row-heights: var(--bslib-grid--row-heights--xxl)}}.bslib-grid>*>.shiny-input-container{width:100%}.bslib-grid-item{grid-column:auto/span 1}@media(max-width: 767.98px){.bslib-grid-item{grid-column:1/-1}}@media(max-width: 575.98px){.bslib-grid{grid-template-columns:1fr !important;height:var(--bslib-grid-height-mobile)}.bslib-grid.grid{height:unset !important;grid-auto-rows:var(--bslib-grid--row-heights--xs, auto)}}:root{--bslib-value-box-shadow: none;--bslib-value-box-border-width-auto-yes: var(--bslib-value-box-border-width-baseline);--bslib-value-box-border-width-auto-no: 0;--bslib-value-box-border-width-baseline: 1px}.bslib-value-box{border-width:var(--bslib-value-box-border-width-auto-no, var(--bslib-value-box-border-width-baseline));container-name:bslib-value-box;container-type:inline-size}.bslib-value-box.card{box-shadow:var(--bslib-value-box-shadow)}.bslib-value-box.border-auto{border-width:var(--bslib-value-box-border-width-auto-yes, var(--bslib-value-box-border-width-baseline))}.bslib-value-box.default{--bslib-value-box-bg-default: var(--bs-card-bg, #fff);--bslib-value-box-border-color-default: var(--bs-card-border-color, rgba(0, 0, 0, 0.175));color:var(--bslib-value-box-color);background-color:var(--bslib-value-box-bg, var(--bslib-value-box-bg-default));border-color:var(--bslib-value-box-border-color, var(--bslib-value-box-border-color-default))}.bslib-value-box .value-box-grid{display:grid;grid-template-areas:"left right";align-items:center;overflow:hidden}.bslib-value-box .value-box-showcase{height:100%;max-height:var(---bslib-value-box-showcase-max-h, 100%)}.bslib-value-box .value-box-showcase,.bslib-value-box .value-box-showcase>.html-fill-item{width:100%}.bslib-value-box[data-full-screen=true] .value-box-showcase{max-height:var(---bslib-value-box-showcase-max-h-fs, 100%)}@media screen and (min-width: 575.98px){@container bslib-value-box (max-width: 300px){.bslib-value-box:not(.showcase-bottom) .value-box-grid{grid-template-columns:1fr !important;grid-template-rows:auto auto;grid-template-areas:"top" "bottom"}.bslib-value-box:not(.showcase-bottom) .value-box-grid .value-box-showcase{grid-area:top !important}.bslib-value-box:not(.showcase-bottom) .value-box-grid .value-box-area{grid-area:bottom !important;justify-content:end}}}.bslib-value-box .value-box-area{justify-content:center;padding:1.5rem 1rem;font-size:.9rem;font-weight:500}.bslib-value-box .value-box-area *{margin-bottom:0;margin-top:0}.bslib-value-box .value-box-title{font-size:1rem;margin-top:0;margin-bottom:.5rem;font-weight:400;line-height:1.2}.bslib-value-box .value-box-title:empty::after{content:" "}.bslib-value-box .value-box-value{font-size:calc(1.29rem + 0.48vw);margin-top:0;margin-bottom:.5rem;font-weight:400;line-height:1.2}@media(min-width: 1200px){.bslib-value-box .value-box-value{font-size:1.65rem}}.bslib-value-box .value-box-value:empty::after{content:" "}.bslib-value-box .value-box-showcase{align-items:center;justify-content:center;margin-top:auto;margin-bottom:auto;padding:1rem}.bslib-value-box .value-box-showcase .bi,.bslib-value-box .value-box-showcase .fa,.bslib-value-box .value-box-showcase .fab,.bslib-value-box .value-box-showcase .fas,.bslib-value-box 
.value-box-showcase .far{opacity:.85;min-width:50px;max-width:125%}.bslib-value-box .value-box-showcase .bi,.bslib-value-box .value-box-showcase .fa,.bslib-value-box .value-box-showcase .fab,.bslib-value-box .value-box-showcase .fas,.bslib-value-box .value-box-showcase .far{font-size:4rem}.bslib-value-box.showcase-top-right .value-box-grid{grid-template-columns:1fr var(---bslib-value-box-showcase-w, 50%)}.bslib-value-box.showcase-top-right .value-box-grid .value-box-showcase{grid-area:right;margin-left:auto;align-self:start;align-items:end;padding-left:0;padding-bottom:0}.bslib-value-box.showcase-top-right .value-box-grid .value-box-area{grid-area:left;align-self:end}.bslib-value-box.showcase-top-right[data-full-screen=true] .value-box-grid{grid-template-columns:auto var(---bslib-value-box-showcase-w-fs, 1fr)}.bslib-value-box.showcase-top-right[data-full-screen=true] .value-box-grid>div{align-self:center}.bslib-value-box.showcase-top-right:not([data-full-screen=true]) .value-box-showcase{margin-top:0}@container bslib-value-box (max-width: 300px){.bslib-value-box.showcase-top-right:not([data-full-screen=true]) .value-box-grid .value-box-showcase{padding-left:1rem}}.bslib-value-box.showcase-left-center .value-box-grid{grid-template-columns:var(---bslib-value-box-showcase-w, 30%) auto}.bslib-value-box.showcase-left-center[data-full-screen=true] .value-box-grid{grid-template-columns:var(---bslib-value-box-showcase-w-fs, 1fr) auto}.bslib-value-box.showcase-left-center:not([data-fill-screen=true]) .value-box-grid .value-box-showcase{grid-area:left}.bslib-value-box.showcase-left-center:not([data-fill-screen=true]) .value-box-grid .value-box-area{grid-area:right}.bslib-value-box.showcase-bottom .value-box-grid{grid-template-columns:1fr;grid-template-rows:1fr var(---bslib-value-box-showcase-h, auto);grid-template-areas:"top" "bottom";overflow:hidden}.bslib-value-box.showcase-bottom .value-box-grid .value-box-showcase{grid-area:bottom;padding:0;margin:0}.bslib-value-box.showcase-bottom .value-box-grid .value-box-area{grid-area:top}.bslib-value-box.showcase-bottom[data-full-screen=true] .value-box-grid{grid-template-rows:1fr var(---bslib-value-box-showcase-h-fs, 2fr)}.bslib-value-box.showcase-bottom[data-full-screen=true] .value-box-grid .value-box-showcase{padding:1rem}[data-bs-theme=dark] .bslib-value-box{--bslib-value-box-shadow: 0 0.5rem 1rem rgb(0 0 0 / 50%)}.bslib-card{overflow:auto}.bslib-card .card-body+.card-body{padding-top:0}.bslib-card .card-body{overflow:auto}.bslib-card .card-body p{margin-top:0}.bslib-card .card-body p:last-child{margin-bottom:0}.bslib-card .card-body{max-height:var(--bslib-card-body-max-height, none)}.bslib-card[data-full-screen=true]>.card-body{max-height:var(--bslib-card-body-max-height-full-screen, none)}.bslib-card .card-header .form-group{margin-bottom:0}.bslib-card .card-header .selectize-control{margin-bottom:0}.bslib-card .card-header .selectize-control .item{margin-right:1.15rem}.bslib-card .card-footer{margin-top:auto}.bslib-card .bslib-navs-card-title{display:flex;flex-wrap:wrap;justify-content:space-between;align-items:center}.bslib-card .bslib-navs-card-title .nav{margin-left:auto}.bslib-card .bslib-sidebar-layout:not([data-bslib-sidebar-border=true]){border:none}.bslib-card .bslib-sidebar-layout:not([data-bslib-sidebar-border-radius=true]){border-top-left-radius:0;border-top-right-radius:0}[data-full-screen=true]{position:fixed;inset:3.5rem 1rem 1rem;height:auto !important;max-height:none !important;width:auto 
!important;z-index:1070}.bslib-full-screen-enter{display:none;position:absolute;bottom:var(--bslib-full-screen-enter-bottom, 0.2rem);right:var(--bslib-full-screen-enter-right, 0);top:var(--bslib-full-screen-enter-top);left:var(--bslib-full-screen-enter-left);color:var(--bslib-color-fg, var(--bs-card-color));background-color:var(--bslib-color-bg, var(--bs-card-bg, var(--bs-body-bg)));border:var(--bs-card-border-width) solid var(--bslib-color-fg, var(--bs-card-border-color));box-shadow:0 2px 4px rgba(0,0,0,.15);margin:.2rem .4rem;padding:.55rem !important;font-size:.8rem;cursor:pointer;opacity:.7;z-index:1070}.bslib-full-screen-enter:hover{opacity:1}.card[data-full-screen=false]:hover>*>.bslib-full-screen-enter{display:block}.bslib-has-full-screen .card:hover>*>.bslib-full-screen-enter{display:none}@media(max-width: 575.98px){.bslib-full-screen-enter{display:none !important}}.bslib-full-screen-exit{position:relative;top:1.35rem;font-size:.9rem;cursor:pointer;text-decoration:none;display:flex;float:right;margin-right:2.15rem;align-items:center;color:rgba(var(--bs-body-bg-rgb), 0.8)}.bslib-full-screen-exit:hover{color:rgba(var(--bs-body-bg-rgb), 1)}.bslib-full-screen-exit svg{margin-left:.5rem;font-size:1.5rem}#bslib-full-screen-overlay{position:fixed;inset:0;background-color:rgba(var(--bs-body-color-rgb), 0.6);backdrop-filter:blur(2px);-webkit-backdrop-filter:blur(2px);z-index:1069;animation:bslib-full-screen-overlay-enter 400ms cubic-bezier(0.6, 0.02, 0.65, 1) forwards}@keyframes bslib-full-screen-overlay-enter{0%{opacity:0}100%{opacity:1}}:root{--bslib-page-sidebar-title-bg: #f8f9fa;--bslib-page-sidebar-title-color: #000}.bslib-page-title{background-color:var(--bslib-page-sidebar-title-bg);color:var(--bslib-page-sidebar-title-color);font-size:1.25rem;font-weight:300;padding:var(--bslib-spacer, 1rem);padding-left:1.5rem;margin-bottom:0;border-bottom:1px solid #dee2e6}.navbar+.container-fluid:has(>.tab-content>.tab-pane.active.html-fill-container),.navbar+.container-sm:has(>.tab-content>.tab-pane.active.html-fill-container),.navbar+.container-md:has(>.tab-content>.tab-pane.active.html-fill-container),.navbar+.container-lg:has(>.tab-content>.tab-pane.active.html-fill-container),.navbar+.container-xl:has(>.tab-content>.tab-pane.active.html-fill-container),.navbar+.container-xxl:has(>.tab-content>.tab-pane.active.html-fill-container){padding-left:0;padding-right:0}.navbar+.container-fluid>.tab-content>.tab-pane.active.html-fill-container,.navbar+.container-sm>.tab-content>.tab-pane.active.html-fill-container,.navbar+.container-md>.tab-content>.tab-pane.active.html-fill-container,.navbar+.container-lg>.tab-content>.tab-pane.active.html-fill-container,.navbar+.container-xl>.tab-content>.tab-pane.active.html-fill-container,.navbar+.container-xxl>.tab-content>.tab-pane.active.html-fill-container{padding:var(--bslib-spacer, 1rem);gap:var(--bslib-spacer, 
1rem)}.navbar+.container-fluid>.tab-content>.tab-pane.active.html-fill-container:has(>.bslib-sidebar-layout:only-child),.navbar+.container-sm>.tab-content>.tab-pane.active.html-fill-container:has(>.bslib-sidebar-layout:only-child),.navbar+.container-md>.tab-content>.tab-pane.active.html-fill-container:has(>.bslib-sidebar-layout:only-child),.navbar+.container-lg>.tab-content>.tab-pane.active.html-fill-container:has(>.bslib-sidebar-layout:only-child),.navbar+.container-xl>.tab-content>.tab-pane.active.html-fill-container:has(>.bslib-sidebar-layout:only-child),.navbar+.container-xxl>.tab-content>.tab-pane.active.html-fill-container:has(>.bslib-sidebar-layout:only-child){padding:0}.navbar+.container-fluid>.tab-content>.tab-pane.active.html-fill-container>.bslib-sidebar-layout:only-child:not([data-bslib-sidebar-border=true]),.navbar+.container-sm>.tab-content>.tab-pane.active.html-fill-container>.bslib-sidebar-layout:only-child:not([data-bslib-sidebar-border=true]),.navbar+.container-md>.tab-content>.tab-pane.active.html-fill-container>.bslib-sidebar-layout:only-child:not([data-bslib-sidebar-border=true]),.navbar+.container-lg>.tab-content>.tab-pane.active.html-fill-container>.bslib-sidebar-layout:only-child:not([data-bslib-sidebar-border=true]),.navbar+.container-xl>.tab-content>.tab-pane.active.html-fill-container>.bslib-sidebar-layout:only-child:not([data-bslib-sidebar-border=true]),.navbar+.container-xxl>.tab-content>.tab-pane.active.html-fill-container>.bslib-sidebar-layout:only-child:not([data-bslib-sidebar-border=true]){border-left:none;border-right:none;border-bottom:none}.navbar+.container-fluid>.tab-content>.tab-pane.active.html-fill-container>.bslib-sidebar-layout:only-child:not([data-bslib-sidebar-border-radius=true]),.navbar+.container-sm>.tab-content>.tab-pane.active.html-fill-container>.bslib-sidebar-layout:only-child:not([data-bslib-sidebar-border-radius=true]),.navbar+.container-md>.tab-content>.tab-pane.active.html-fill-container>.bslib-sidebar-layout:only-child:not([data-bslib-sidebar-border-radius=true]),.navbar+.container-lg>.tab-content>.tab-pane.active.html-fill-container>.bslib-sidebar-layout:only-child:not([data-bslib-sidebar-border-radius=true]),.navbar+.container-xl>.tab-content>.tab-pane.active.html-fill-container>.bslib-sidebar-layout:only-child:not([data-bslib-sidebar-border-radius=true]),.navbar+.container-xxl>.tab-content>.tab-pane.active.html-fill-container>.bslib-sidebar-layout:only-child:not([data-bslib-sidebar-border-radius=true]){border-radius:0}.navbar+div>.bslib-sidebar-layout{border-top:var(--bslib-sidebar-border)}html{height:100%}.bslib-page-fill{width:100%;height:100%;margin:0;padding:var(--bslib-spacer, 1rem);gap:var(--bslib-spacer, 1rem)}@media(max-width: 575.98px){.bslib-page-fill{height:var(--bslib-page-fill-mobile-height, auto)}}.html-fill-container{display:flex;flex-direction:column;min-height:0;min-width:0}.html-fill-container>.html-fill-item{flex:1 1 auto;min-height:0;min-width:0}.html-fill-container>:not(.html-fill-item){flex:0 0 auto}.sidebar-item .chapter-number{color:#343a40}.quarto-container{min-height:calc(100vh - 132px)}body.hypothesis-enabled #quarto-header{margin-right:16px}footer.footer .nav-footer,#quarto-header>nav{padding-left:1em;padding-right:1em}footer.footer div.nav-footer p:first-child{margin-top:0}footer.footer div.nav-footer p:last-child{margin-bottom:0}#quarto-content>*{padding-top:14px}#quarto-content>#quarto-sidebar-glass{padding-top:0px}@media(max-width: 991.98px){#quarto-content>*{padding-top:0}#quarto-content 
.subtitle{padding-top:14px}#quarto-content section:first-of-type h2:first-of-type,#quarto-content section:first-of-type .h2:first-of-type{margin-top:1rem}}.headroom-target,header.headroom{will-change:transform;transition:position 200ms linear;transition:all 200ms linear}header.headroom--pinned{transform:translateY(0%)}header.headroom--unpinned{transform:translateY(-100%)}.navbar-container{width:100%}.navbar-brand{overflow:hidden;text-overflow:ellipsis}.navbar-brand-container{max-width:calc(100% - 115px);min-width:0;display:flex;align-items:center}@media(min-width: 992px){.navbar-brand-container{margin-right:1em}}.navbar-brand.navbar-brand-logo{margin-right:4px;display:inline-flex}.navbar-toggler{flex-basis:content;flex-shrink:0}.navbar .navbar-brand-container{order:2}.navbar .navbar-toggler{order:1}.navbar .navbar-container>.navbar-nav{order:20}.navbar .navbar-container>.navbar-brand-container{margin-left:0 !important;margin-right:0 !important}.navbar .navbar-collapse{order:20}.navbar #quarto-search{order:4;margin-left:auto}.navbar .navbar-toggler{margin-right:.5em}.navbar-collapse .quarto-navbar-tools{margin-left:.5em}.navbar-logo{max-height:24px;width:auto;padding-right:4px}nav .nav-item:not(.compact){padding-top:1px}nav .nav-link i,nav .dropdown-item i{padding-right:1px}.navbar-expand-lg .navbar-nav .nav-link{padding-left:.6rem;padding-right:.6rem}nav .nav-item.compact .nav-link{padding-left:.5rem;padding-right:.5rem;font-size:1.1rem}.navbar .quarto-navbar-tools{order:3}.navbar .quarto-navbar-tools div.dropdown{display:inline-block}.navbar .quarto-navbar-tools .quarto-navigation-tool{color:#545555}.navbar .quarto-navbar-tools .quarto-navigation-tool:hover{color:#1f4eb6}.navbar-nav .dropdown-menu{min-width:220px;font-size:.9rem}.navbar .navbar-nav .nav-link.dropdown-toggle::after{opacity:.75;vertical-align:.175em}.navbar ul.dropdown-menu{padding-top:0;padding-bottom:0}.navbar .dropdown-header{text-transform:uppercase;font-size:.8rem;padding:0 .5rem}.navbar .dropdown-item{padding:.4rem .5rem}.navbar .dropdown-item>i.bi{margin-left:.1rem;margin-right:.25em}.sidebar #quarto-search{margin-top:-1px}.sidebar #quarto-search svg.aa-SubmitIcon{width:16px;height:16px}.sidebar-navigation a{color:inherit}.sidebar-title{margin-top:.25rem;padding-bottom:.5rem;font-size:1.3rem;line-height:1.6rem;visibility:visible}.sidebar-title>a{font-size:inherit;text-decoration:none}.sidebar-title .sidebar-tools-main{margin-top:-6px}@media(max-width: 991.98px){#quarto-sidebar div.sidebar-header{padding-top:.2em}}.sidebar-header-stacked .sidebar-title{margin-top:.6rem}.sidebar-logo{max-width:90%;padding-bottom:.5rem}.sidebar-logo-link{text-decoration:none}.sidebar-navigation li a{text-decoration:none}.sidebar-navigation .quarto-navigation-tool{opacity:.7;font-size:.875rem}#quarto-sidebar>nav>.sidebar-tools-main{margin-left:14px}.sidebar-tools-main{display:inline-flex;margin-left:0px;order:2}.sidebar-tools-main:not(.tools-wide){vertical-align:middle}.sidebar-navigation .quarto-navigation-tool.dropdown-toggle::after{display:none}.sidebar.sidebar-navigation>*{padding-top:1em}.sidebar-item{margin-bottom:.2em;line-height:1rem;margin-top:.4rem}.sidebar-section{padding-left:.5em;padding-bottom:.2em}.sidebar-item .sidebar-item-container{display:flex;justify-content:space-between;cursor:pointer}.sidebar-item-toggle:hover{cursor:pointer}.sidebar-item .sidebar-item-toggle .bi{font-size:.7rem;text-align:center}.sidebar-item .sidebar-item-toggle .bi-chevron-right::before{transition:transform 200ms ease}.sidebar-item 
.sidebar-item-toggle[aria-expanded=false] .bi-chevron-right::before{transform:none}.sidebar-item .sidebar-item-toggle[aria-expanded=true] .bi-chevron-right::before{transform:rotate(90deg)}.sidebar-item-text{width:100%}.sidebar-navigation .sidebar-divider{margin-left:0;margin-right:0;margin-top:.5rem;margin-bottom:.5rem}@media(max-width: 991.98px){.quarto-secondary-nav{display:block}.quarto-secondary-nav button.quarto-search-button{padding-right:0em;padding-left:2em}.quarto-secondary-nav button.quarto-btn-toggle{margin-left:-0.75rem;margin-right:.15rem}.quarto-secondary-nav nav.quarto-title-breadcrumbs{display:none}.quarto-secondary-nav nav.quarto-page-breadcrumbs{display:flex;align-items:center;padding-right:1em;margin-left:-0.25em}.quarto-secondary-nav nav.quarto-page-breadcrumbs a{text-decoration:none}.quarto-secondary-nav nav.quarto-page-breadcrumbs ol.breadcrumb{margin-bottom:0}}@media(min-width: 992px){.quarto-secondary-nav{display:none}}.quarto-title-breadcrumbs .breadcrumb{margin-bottom:.5em;font-size:.9rem}.quarto-title-breadcrumbs .breadcrumb li:last-of-type a{color:#6c757d}.quarto-secondary-nav .quarto-btn-toggle{color:#595959}.quarto-secondary-nav[aria-expanded=false] .quarto-btn-toggle .bi-chevron-right::before{transform:none}.quarto-secondary-nav[aria-expanded=true] .quarto-btn-toggle .bi-chevron-right::before{transform:rotate(90deg)}.quarto-secondary-nav .quarto-btn-toggle .bi-chevron-right::before{transition:transform 200ms ease}.quarto-secondary-nav{cursor:pointer}.no-decor{text-decoration:none}.quarto-secondary-nav-title{margin-top:.3em;color:#595959;padding-top:4px}.quarto-secondary-nav nav.quarto-page-breadcrumbs{color:#595959}.quarto-secondary-nav nav.quarto-page-breadcrumbs a{color:#595959}.quarto-secondary-nav nav.quarto-page-breadcrumbs a:hover{color:rgba(33,81,191,.8)}.quarto-secondary-nav nav.quarto-page-breadcrumbs .breadcrumb-item::before{color:#8c8c8c}.breadcrumb-item{line-height:1.2rem}div.sidebar-item-container{color:#595959}div.sidebar-item-container:hover,div.sidebar-item-container:focus{color:rgba(33,81,191,.8)}div.sidebar-item-container.disabled{color:rgba(89,89,89,.75)}div.sidebar-item-container .active,div.sidebar-item-container .show>.nav-link,div.sidebar-item-container .sidebar-link>code{color:#2151bf}div.sidebar.sidebar-navigation.rollup.quarto-sidebar-toggle-contents,nav.sidebar.sidebar-navigation:not(.rollup){background-color:#fff}@media(max-width: 991.98px){.sidebar-navigation .sidebar-item a,.nav-page .nav-page-text,.sidebar-navigation{font-size:1rem}.sidebar-navigation ul.sidebar-section.depth1 .sidebar-section-item{font-size:1.1rem}.sidebar-logo{display:none}.sidebar.sidebar-navigation{position:static;border-bottom:1px solid #dee2e6}.sidebar.sidebar-navigation.collapsing{position:fixed;z-index:1000}.sidebar.sidebar-navigation.show{position:fixed;z-index:1000}.sidebar.sidebar-navigation{min-height:100%}nav.quarto-secondary-nav{background-color:#fff;border-bottom:1px solid #dee2e6}.quarto-banner nav.quarto-secondary-nav{background-color:#f8f9fa;color:#545555;border-top:1px solid #dee2e6}.sidebar .sidebar-footer{visibility:visible;padding-top:1rem;position:inherit}.sidebar-tools-collapse{display:block}}#quarto-sidebar{transition:width .15s ease-in}#quarto-sidebar>*{padding-right:1em}@media(max-width: 991.98px){#quarto-sidebar .sidebar-menu-container{white-space:nowrap;min-width:225px}#quarto-sidebar.show{transition:width .15s ease-out}}@media(min-width: 992px){#quarto-sidebar{display:flex;flex-direction:column}.nav-page 
+*
+* ansi colors from IPython notebook's
+*
+* we also add `bright-[color]-` synonyms for the `-[color]-intense` classes since
+* that seems to be what ansi_up emits
+*
+*/.ansi-black-fg{color:#3e424d}.ansi-black-bg{background-color:#3e424d}.ansi-black-intense-black,.ansi-bright-black-fg{color:#282c36}.ansi-black-intense-black,.ansi-bright-black-bg{background-color:#282c36}.ansi-red-fg{color:#e75c58}.ansi-red-bg{background-color:#e75c58}.ansi-red-intense-red,.ansi-bright-red-fg{color:#b22b31}.ansi-red-intense-red,.ansi-bright-red-bg{background-color:#b22b31}.ansi-green-fg{color:#00a250}.ansi-green-bg{background-color:#00a250}.ansi-green-intense-green,.ansi-bright-green-fg{color:#007427}.ansi-green-intense-green,.ansi-bright-green-bg{background-color:#007427}.ansi-yellow-fg{color:#ddb62b}.ansi-yellow-bg{background-color:#ddb62b}.ansi-yellow-intense-yellow,.ansi-bright-yellow-fg{color:#b27d12}.ansi-yellow-intense-yellow,.ansi-bright-yellow-bg{background-color:#b27d12}.ansi-blue-fg{color:#208ffb}.ansi-blue-bg{background-color:#208ffb}.ansi-blue-intense-blue,.ansi-bright-blue-fg{color:#0065ca}.ansi-blue-intense-blue,.ansi-bright-blue-bg{background-color:#0065ca}.ansi-magenta-fg{color:#d160c4}.ansi-magenta-bg{background-color:#d160c4}.ansi-magenta-intense-magenta,.ansi-bright-magenta-fg{color:#a03196}.ansi-magenta-intense-magenta,.ansi-bright-magenta-bg{background-color:#a03196}.ansi-cyan-fg{color:#60c6c8}.ansi-cyan-bg{background-color:#60c6c8}.ansi-cyan-intense-cyan,.ansi-bright-cyan-fg{color:#258f8f}.ansi-cyan-intense-cyan,.ansi-bright-cyan-bg{background-color:#258f8f}.ansi-white-fg{color:#c5c1b4}.ansi-white-bg{background-color:#c5c1b4}.ansi-white-intense-white,.ansi-bright-white-fg{color:#a1a6b2}.ansi-white-intense-white,.ansi-bright-white-bg{background-color:#a1a6b2}.ansi-default-inverse-fg{color:#fff}.ansi-default-inverse-bg{background-color:#000}.ansi-bold{font-weight:bold}.ansi-underline{text-decoration:underline}:root{--quarto-body-bg: #fff;--quarto-body-color: #343a40;--quarto-text-muted: #6c757d;--quarto-border-color: #dee2e6;--quarto-border-width: 1px;--quarto-border-radius: 0.25rem}table.gt_table{color:var(--quarto-body-color);font-size:1em;width:100%;background-color:rgba(0,0,0,0);border-top-width:inherit;border-bottom-width:inherit;border-color:var(--quarto-border-color)}table.gt_table th.gt_column_spanner_outer{color:var(--quarto-body-color);background-color:rgba(0,0,0,0);border-top-width:inherit;border-bottom-width:inherit;border-color:var(--quarto-border-color)}table.gt_table th.gt_col_heading{color:var(--quarto-body-color);font-weight:bold;background-color:rgba(0,0,0,0)}table.gt_table thead.gt_col_headings{border-bottom:1px solid currentColor;border-top-width:inherit;border-top-color:var(--quarto-border-color)}table.gt_table thead.gt_col_headings:not(:first-child){border-top-width:1px;border-top-color:var(--quarto-border-color)}table.gt_table td.gt_row{border-bottom-width:1px;border-bottom-color:var(--quarto-border-color);border-top-width:0px}table.gt_table tbody.gt_table_body{border-top-width:1px;border-bottom-width:1px;border-bottom-color:var(--quarto-border-color);border-top-color:currentColor}div.columns{display:initial;gap:initial}div.column{display:inline-block;overflow-x:initial;vertical-align:top;width:50%}.code-annotation-tip-content{word-wrap:break-word}.code-annotation-container-hidden{display:none !important}dl.code-annotation-container-grid{display:grid;grid-template-columns:min-content auto}dl.code-annotation-container-grid dt{grid-column:1}dl.code-annotation-container-grid dd{grid-column:2}pre.sourceCode.code-annotation-code{padding-right:0}code.sourceCode 
.code-annotation-anchor{z-index:100;position:relative;float:right;background-color:rgba(0,0,0,0)}input[type=checkbox]{margin-right:.5ch}:root{--mermaid-bg-color: #fff;--mermaid-edge-color: #343a40;--mermaid-node-fg-color: #343a40;--mermaid-fg-color: #343a40;--mermaid-fg-color--lighter: #4b545c;--mermaid-fg-color--lightest: #626d78;--mermaid-font-family: Source Sans Pro, -apple-system, BlinkMacSystemFont, Segoe UI, Roboto, Helvetica Neue, Arial, sans-serif, Apple Color Emoji, Segoe UI Emoji, Segoe UI Symbol;--mermaid-label-bg-color: #fff;--mermaid-label-fg-color: #2780e3;--mermaid-node-bg-color: rgba(39, 128, 227, 0.1);--mermaid-node-fg-color: #343a40}@media print{:root{font-size:11pt}#quarto-sidebar,#TOC,.nav-page{display:none}.page-columns .content{grid-column-start:page-start}.fixed-top{position:relative}.panel-caption,.figure-caption,figcaption{color:#666}}.code-copy-button{position:absolute;top:0;right:0;border:0;margin-top:5px;margin-right:5px;background-color:rgba(0,0,0,0);z-index:3}.code-copy-button:focus{outline:none}.code-copy-button-tooltip{font-size:.75em}pre.sourceCode:hover>.code-copy-button>.bi::before{display:inline-block;height:1rem;width:1rem;content:"";vertical-align:-0.125em;background-image:url('data:image/svg+xml,');background-repeat:no-repeat;background-size:1rem 1rem}pre.sourceCode:hover>.code-copy-button-checked>.bi::before{background-image:url('data:image/svg+xml,')}pre.sourceCode:hover>.code-copy-button:hover>.bi::before{background-image:url('data:image/svg+xml,')}pre.sourceCode:hover>.code-copy-button-checked:hover>.bi::before{background-image:url('data:image/svg+xml,')}main ol ol,main ul ul,main ol ul,main ul ol{margin-bottom:1em}ul>li:not(:has(>p))>ul,ol>li:not(:has(>p))>ul,ul>li:not(:has(>p))>ol,ol>li:not(:has(>p))>ol{margin-bottom:0}ul>li:not(:has(>p))>ul>li:has(>p),ol>li:not(:has(>p))>ul>li:has(>p),ul>li:not(:has(>p))>ol>li:has(>p),ol>li:not(:has(>p))>ol>li:has(>p){margin-top:1rem}body{margin:0}main.page-columns>header>h1.title,main.page-columns>header>.title.h1{margin-bottom:0}@media(min-width: 992px){body .page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset] 5fr [page-start page-start-inset] 35px [body-start-outset] 35px [body-start] 1.5em [body-content-start] minmax(500px, calc(850px - 3em)) [body-content-end] 1.5em [body-end] 35px [body-end-outset] minmax(75px, 145px) [page-end-inset] 35px [page-end] 5fr [screen-end-inset] 1.5em [screen-end]}body.fullcontent:not(.floating):not(.docked) .page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset] 5fr [page-start page-start-inset] 35px [body-start-outset] 35px [body-start] 1.5em [body-content-start] minmax(500px, calc(850px - 3em)) [body-content-end] 1.5em [body-end] 35px [body-end-outset] 35px [page-end-inset page-end] 5fr [screen-end-inset] 1.5em}body.slimcontent:not(.floating):not(.docked) .page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset] 5fr [page-start page-start-inset] 35px [body-start-outset] 35px [body-start] 1.5em [body-content-start] minmax(500px, calc(850px - 3em)) [body-content-end] 1.5em [body-end] 50px [body-end-outset] minmax(0px, 200px) [page-end-inset] 35px [page-end] 5fr [screen-end-inset] 1.5em [screen-end]}body.listing:not(.floating):not(.docked) .page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset page-start] minmax(50px, 100px) [page-start-inset] 50px [body-start-outset] 50px [body-start] 1.5em [body-content-start] 
minmax(500px, calc(850px - 3em)) [body-content-end] 3em [body-end] 50px [body-end-outset] minmax(0px, 250px) [page-end-inset] minmax(50px, 100px) [page-end] 1fr [screen-end-inset] 1.5em [screen-end]}body:not(.floating):not(.docked) .page-columns.toc-left{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset] 5fr [page-start] 35px [page-start-inset] minmax(0px, 175px) [body-start-outset] 35px [body-start] 1.5em [body-content-start] minmax(450px, calc(800px - 3em)) [body-content-end] 1.5em [body-end] 50px [body-end-outset] minmax(0px, 200px) [page-end-inset] 50px [page-end] 5fr [screen-end-inset] 1.5em [screen-end]}body:not(.floating):not(.docked) .page-columns.toc-left .page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset] 5fr [page-start] 35px [page-start-inset] minmax(0px, 175px) [body-start-outset] 35px [body-start] 1.5em [body-content-start] minmax(450px, calc(800px - 3em)) [body-content-end] 1.5em [body-end] 50px [body-end-outset] minmax(0px, 200px) [page-end-inset] 50px [page-end] 5fr [screen-end-inset] 1.5em [screen-end]}body.floating .page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset] 5fr [page-start] minmax(25px, 50px) [page-start-inset] minmax(50px, 150px) [body-start-outset] minmax(25px, 50px) [body-start] 1.5em [body-content-start] minmax(500px, calc(800px - 3em)) [body-content-end] 1.5em [body-end] minmax(25px, 50px) [body-end-outset] minmax(50px, 150px) [page-end-inset] minmax(25px, 50px) [page-end] 5fr [screen-end-inset] 1.5em [screen-end]}body.docked .page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset page-start] minmax(50px, 100px) [page-start-inset] 50px [body-start-outset] 50px [body-start] 1.5em [body-content-start] minmax(500px, calc(1000px - 3em)) [body-content-end] 1.5em [body-end] 50px [body-end-outset] minmax(50px, 100px) [page-end-inset] 50px [page-end] 5fr [screen-end-inset] 1.5em [screen-end]}body.docked.fullcontent .page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset page-start] minmax(50px, 100px) [page-start-inset] 50px [body-start-outset] 50px [body-start] 1.5em [body-content-start] minmax(500px, calc(1000px - 3em)) [body-content-end] 1.5em [body-end body-end-outset page-end-inset page-end] 5fr [screen-end-inset] 1.5em [screen-end]}body.floating.fullcontent .page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset] 5fr [page-start] 50px [page-start-inset] minmax(50px, 150px) [body-start-outset] 50px [body-start] 1.5em [body-content-start] minmax(500px, calc(800px - 3em)) [body-content-end] 1.5em [body-end body-end-outset page-end-inset page-end] 5fr [screen-end-inset] 1.5em [screen-end]}body.docked.slimcontent .page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset page-start] minmax(50px, 100px) [page-start-inset] 50px [body-start-outset] 50px [body-start] 1.5em [body-content-start] minmax(450px, calc(750px - 3em)) [body-content-end] 1.5em [body-end] 50px [body-end-outset] minmax(0px, 200px) [page-end-inset] 50px [page-end] 5fr [screen-end-inset] 1.5em [screen-end]}body.docked.listing .page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset page-start] minmax(50px, 100px) [page-start-inset] 50px [body-start-outset] 50px [body-start] 1.5em [body-content-start] minmax(500px, calc(1000px - 3em)) [body-content-end] 1.5em [body-end] 50px 
[body-end-outset] minmax(0px, 200px) [page-end-inset] 50px [page-end] 5fr [screen-end-inset] 1.5em [screen-end]}body.floating.slimcontent .page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset] 5fr [page-start] 50px [page-start-inset] minmax(50px, 150px) [body-start-outset] 50px [body-start] 1.5em [body-content-start] minmax(450px, calc(750px - 3em)) [body-content-end] 1.5em [body-end] 50px [body-end-outset] minmax(50px, 150px) [page-end-inset] 50px [page-end] 5fr [screen-end-inset] 1.5em [screen-end]}body.floating.listing .page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset] 5fr [page-start] minmax(25px, 50px) [page-start-inset] minmax(50px, 150px) [body-start-outset] minmax(25px, 50px) [body-start] 1.5em [body-content-start] minmax(500px, calc(800px - 3em)) [body-content-end] 1.5em [body-end] minmax(25px, 50px) [body-end-outset] minmax(50px, 150px) [page-end-inset] minmax(25px, 50px) [page-end] 5fr [screen-end-inset] 1.5em [screen-end]}}@media(max-width: 991.98px){body .page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset page-start page-start-inset body-start-outset] 5fr [body-start] 1.5em [body-content-start] minmax(500px, calc(800px - 3em)) [body-content-end] 1.5em [body-end] 35px [body-end-outset] minmax(75px, 145px) [page-end-inset] 35px [page-end] 5fr [screen-end-inset] 1.5em [screen-end]}body.fullcontent:not(.floating):not(.docked) .page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset page-start page-start-inset body-start-outset] 5fr [body-start] 1.5em [body-content-start] minmax(500px, calc(800px - 3em)) [body-content-end] 1.5em [body-end body-end-outset page-end-inset page-end] 5fr [screen-end-inset] 1.5em [screen-end]}body.slimcontent:not(.floating):not(.docked) .page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset page-start page-start-inset body-start-outset] 5fr [body-start] 1.5em [body-content-start] minmax(500px, calc(800px - 3em)) [body-content-end] 1.5em [body-end] 35px [body-end-outset] minmax(75px, 145px) [page-end-inset] 35px [page-end] 5fr [screen-end-inset] 1.5em [screen-end]}body.listing:not(.floating):not(.docked) .page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset page-start page-start-inset body-start-outset] 5fr [body-start] 1.5em [body-content-start] minmax(500px, calc(1250px - 3em)) [body-content-end body-end body-end-outset page-end-inset page-end] 5fr [screen-end-inset] 1.5em [screen-end]}body:not(.floating):not(.docked) .page-columns.toc-left{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset] 5fr [page-start] 35px [page-start-inset] minmax(0px, 145px) [body-start-outset] 35px [body-start] 1.5em [body-content-start] minmax(450px, calc(800px - 3em)) [body-content-end] 1.5em [body-end body-end-outset page-end-inset page-end] 5fr [screen-end-inset] 1.5em [screen-end]}body:not(.floating):not(.docked) .page-columns.toc-left .page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset] 5fr [page-start] 35px [page-start-inset] minmax(0px, 145px) [body-start-outset] 35px [body-start] 1.5em [body-content-start] minmax(450px, calc(800px - 3em)) [body-content-end] 1.5em [body-end body-end-outset page-end-inset page-end] 5fr [screen-end-inset] 1.5em [screen-end]}body.floating .page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em 
[screen-start-inset] 5fr [page-start page-start-inset body-start-outset body-start] 1.5em [body-content-start] minmax(500px, calc(750px - 3em)) [body-content-end] 1.5em [body-end] 50px [body-end-outset] minmax(75px, 150px) [page-end-inset] 25px [page-end] 5fr [screen-end-inset] 1.5em [screen-end]}body.docked .page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset page-start page-start-inset body-start-outset body-start body-content-start] minmax(500px, calc(750px - 3em)) [body-content-end] 1.5em [body-end] 50px [body-end-outset] minmax(25px, 50px) [page-end-inset] 50px [page-end] 5fr [screen-end-inset] 1.5em [screen-end]}body.docked.fullcontent .page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset page-start page-start-inset body-start-outset body-start body-content-start] minmax(500px, calc(1000px - 3em)) [body-content-end] 1.5em [body-end body-end-outset page-end-inset page-end] 5fr [screen-end-inset] 1.5em [screen-end]}body.floating.fullcontent .page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset] 5fr [page-start page-start-inset body-start-outset body-start] 1em [body-content-start] minmax(500px, calc(800px - 3em)) [body-content-end] 1.5em [body-end body-end-outset page-end-inset page-end] 4fr [screen-end-inset] 1.5em [screen-end]}body.docked.slimcontent .page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset page-start page-start-inset body-start-outset body-start body-content-start] minmax(500px, calc(750px - 3em)) [body-content-end] 1.5em [body-end] 50px [body-end-outset] minmax(25px, 50px) [page-end-inset] 50px [page-end] 5fr [screen-end-inset] 1.5em [screen-end]}body.docked.listing .page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset page-start page-start-inset body-start-outset body-start body-content-start] minmax(500px, calc(750px - 3em)) [body-content-end] 1.5em [body-end] 50px [body-end-outset] minmax(25px, 50px) [page-end-inset] 50px [page-end] 5fr [screen-end-inset] 1.5em [screen-end]}body.floating.slimcontent .page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset] 5fr [page-start page-start-inset body-start-outset body-start] 1em [body-content-start] minmax(500px, calc(750px - 3em)) [body-content-end] 1.5em [body-end] 35px [body-end-outset] minmax(75px, 145px) [page-end-inset] 35px [page-end] 4fr [screen-end-inset] 1.5em [screen-end]}body.floating.listing .page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset] 5fr [page-start page-start-inset body-start-outset body-start] 1em [body-content-start] minmax(500px, calc(750px - 3em)) [body-content-end] 1.5em [body-end] 50px [body-end-outset] minmax(75px, 150px) [page-end-inset] 25px [page-end] 4fr [screen-end-inset] 1.5em [screen-end]}}@media(max-width: 767.98px){body .page-columns,body.fullcontent:not(.floating):not(.docked) .page-columns,body.slimcontent:not(.floating):not(.docked) .page-columns,body.docked .page-columns,body.docked.slimcontent .page-columns,body.docked.fullcontent .page-columns,body.floating .page-columns,body.floating.slimcontent .page-columns,body.floating.fullcontent .page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset page-start page-start-inset body-start-outset body-start body-content-start] minmax(0px, 1fr) [body-content-end body-end body-end-outset page-end-inset page-end 
screen-end-inset] 1.5em [screen-end]}body:not(.floating):not(.docked) .page-columns.toc-left{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset page-start page-start-inset body-start-outset body-start body-content-start] minmax(0px, 1fr) [body-content-end body-end body-end-outset page-end-inset page-end screen-end-inset] 1.5em [screen-end]}body:not(.floating):not(.docked) .page-columns.toc-left .page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset page-start page-start-inset body-start-outset body-start body-content-start] minmax(0px, 1fr) [body-content-end body-end body-end-outset page-end-inset page-end screen-end-inset] 1.5em [screen-end]}nav[role=doc-toc]{display:none}}body,.page-row-navigation{grid-template-rows:[page-top] max-content [contents-top] max-content [contents-bottom] max-content [page-bottom]}.page-rows-contents{grid-template-rows:[content-top] minmax(max-content, 1fr) [content-bottom] minmax(60px, max-content) [page-bottom]}.page-full{grid-column:screen-start/screen-end !important}.page-columns>*{grid-column:body-content-start/body-content-end}.page-columns.column-page>*{grid-column:page-start/page-end}.page-columns.column-page-left .page-columns.page-full>*,.page-columns.column-page-left>*{grid-column:page-start/body-content-end}.page-columns.column-page-right .page-columns.page-full>*,.page-columns.column-page-right>*{grid-column:body-content-start/page-end}.page-rows{grid-auto-rows:auto}.header{grid-column:screen-start/screen-end;grid-row:page-top/contents-top}#quarto-content{padding:0;grid-column:screen-start/screen-end;grid-row:contents-top/contents-bottom}body.floating .sidebar.sidebar-navigation{grid-column:page-start/body-start;grid-row:content-top/page-bottom}body.docked .sidebar.sidebar-navigation{grid-column:screen-start/body-start;grid-row:content-top/page-bottom}.sidebar.toc-left{grid-column:page-start/body-start;grid-row:content-top/page-bottom}.sidebar.margin-sidebar{grid-column:body-end/page-end;grid-row:content-top/page-bottom}.page-columns .content{grid-column:body-content-start/body-content-end;grid-row:content-top/content-bottom;align-content:flex-start}.page-columns .page-navigation{grid-column:body-content-start/body-content-end;grid-row:content-bottom/page-bottom}.page-columns .footer{grid-column:screen-start/screen-end;grid-row:contents-bottom/page-bottom}.page-columns .column-body{grid-column:body-content-start/body-content-end}.page-columns .column-body-fullbleed{grid-column:body-start/body-end}.page-columns .column-body-outset{grid-column:body-start-outset/body-end-outset;z-index:998;opacity:.999}.page-columns .column-body-outset table{background:#fff}.page-columns .column-body-outset-left{grid-column:body-start-outset/body-content-end;z-index:998;opacity:.999}.page-columns .column-body-outset-left table{background:#fff}.page-columns .column-body-outset-right{grid-column:body-content-start/body-end-outset;z-index:998;opacity:.999}.page-columns .column-body-outset-right table{background:#fff}.page-columns .column-page{grid-column:page-start/page-end;z-index:998;opacity:.999}.page-columns .column-page table{background:#fff}.page-columns .column-page-inset{grid-column:page-start-inset/page-end-inset;z-index:998;opacity:.999}.page-columns .column-page-inset table{background:#fff}.page-columns .column-page-inset-left{grid-column:page-start-inset/body-content-end;z-index:998;opacity:.999}.page-columns .column-page-inset-left table{background:#fff}.page-columns 
.column-page-inset-right{grid-column:body-content-start/page-end-inset;z-index:998;opacity:.999}.page-columns .column-page-inset-right figcaption table{background:#fff}.page-columns .column-page-left{grid-column:page-start/body-content-end;z-index:998;opacity:.999}.page-columns .column-page-left table{background:#fff}.page-columns .column-page-right{grid-column:body-content-start/page-end;z-index:998;opacity:.999}.page-columns .column-page-right figcaption table{background:#fff}#quarto-content.page-columns #quarto-margin-sidebar,#quarto-content.page-columns #quarto-sidebar{z-index:1}@media(max-width: 991.98px){#quarto-content.page-columns #quarto-margin-sidebar.collapse,#quarto-content.page-columns #quarto-sidebar.collapse,#quarto-content.page-columns #quarto-margin-sidebar.collapsing,#quarto-content.page-columns #quarto-sidebar.collapsing{z-index:1055}}#quarto-content.page-columns main.column-page,#quarto-content.page-columns main.column-page-right,#quarto-content.page-columns main.column-page-left{z-index:0}.page-columns .column-screen-inset{grid-column:screen-start-inset/screen-end-inset;z-index:998;opacity:.999}.page-columns .column-screen-inset table{background:#fff}.page-columns .column-screen-inset-left{grid-column:screen-start-inset/body-content-end;z-index:998;opacity:.999}.page-columns .column-screen-inset-left table{background:#fff}.page-columns .column-screen-inset-right{grid-column:body-content-start/screen-end-inset;z-index:998;opacity:.999}.page-columns .column-screen-inset-right table{background:#fff}.page-columns .column-screen{grid-column:screen-start/screen-end;z-index:998;opacity:.999}.page-columns .column-screen table{background:#fff}.page-columns .column-screen-left{grid-column:screen-start/body-content-end;z-index:998;opacity:.999}.page-columns .column-screen-left table{background:#fff}.page-columns .column-screen-right{grid-column:body-content-start/screen-end;z-index:998;opacity:.999}.page-columns .column-screen-right table{background:#fff}.page-columns .column-screen-inset-shaded{grid-column:screen-start/screen-end;padding:1em;background:#f8f9fa;z-index:998;opacity:.999;margin-bottom:1em}.zindex-content{z-index:998;opacity:.999}.zindex-modal{z-index:1055;opacity:.999}.zindex-over-content{z-index:999;opacity:.999}img.img-fluid.column-screen,img.img-fluid.column-screen-inset-shaded,img.img-fluid.column-screen-inset,img.img-fluid.column-screen-inset-left,img.img-fluid.column-screen-inset-right,img.img-fluid.column-screen-left,img.img-fluid.column-screen-right{width:100%}@media(min-width: 992px){.margin-caption,div.aside,aside:not(.footnotes):not(.sidebar),.column-margin{grid-column:body-end/page-end !important;z-index:998}.column-sidebar{grid-column:page-start/body-start !important;z-index:998}.column-leftmargin{grid-column:screen-start-inset/body-start !important;z-index:998}.no-row-height{height:1em;overflow:visible}}@media(max-width: 991.98px){.margin-caption,div.aside,aside:not(.footnotes):not(.sidebar),.column-margin{grid-column:body-end/page-end !important;z-index:998}.no-row-height{height:1em;overflow:visible}.page-columns.page-full{overflow:visible}.page-columns.toc-left .margin-caption,.page-columns.toc-left div.aside,.page-columns.toc-left aside:not(.footnotes):not(.sidebar),.page-columns.toc-left .column-margin{grid-column:body-content-start/body-content-end !important;z-index:998;opacity:.999}.page-columns.toc-left .no-row-height{height:initial;overflow:initial}}@media(max-width: 
767.98px){.margin-caption,div.aside,aside:not(.footnotes):not(.sidebar),.column-margin{grid-column:body-content-start/body-content-end !important;z-index:998;opacity:.999}.no-row-height{height:initial;overflow:initial}#quarto-margin-sidebar{display:none}#quarto-sidebar-toc-left{display:none}.hidden-sm{display:none}}.panel-grid{display:grid;grid-template-rows:repeat(1, 1fr);grid-template-columns:repeat(24, 1fr);gap:1em}.panel-grid .g-col-1{grid-column:auto/span 1}.panel-grid .g-col-2{grid-column:auto/span 2}.panel-grid .g-col-3{grid-column:auto/span 3}.panel-grid .g-col-4{grid-column:auto/span 4}.panel-grid .g-col-5{grid-column:auto/span 5}.panel-grid .g-col-6{grid-column:auto/span 6}.panel-grid .g-col-7{grid-column:auto/span 7}.panel-grid .g-col-8{grid-column:auto/span 8}.panel-grid .g-col-9{grid-column:auto/span 9}.panel-grid .g-col-10{grid-column:auto/span 10}.panel-grid .g-col-11{grid-column:auto/span 11}.panel-grid .g-col-12{grid-column:auto/span 12}.panel-grid .g-col-13{grid-column:auto/span 13}.panel-grid .g-col-14{grid-column:auto/span 14}.panel-grid .g-col-15{grid-column:auto/span 15}.panel-grid .g-col-16{grid-column:auto/span 16}.panel-grid .g-col-17{grid-column:auto/span 17}.panel-grid .g-col-18{grid-column:auto/span 18}.panel-grid .g-col-19{grid-column:auto/span 19}.panel-grid .g-col-20{grid-column:auto/span 20}.panel-grid .g-col-21{grid-column:auto/span 21}.panel-grid .g-col-22{grid-column:auto/span 22}.panel-grid .g-col-23{grid-column:auto/span 23}.panel-grid .g-col-24{grid-column:auto/span 24}.panel-grid .g-start-1{grid-column-start:1}.panel-grid .g-start-2{grid-column-start:2}.panel-grid .g-start-3{grid-column-start:3}.panel-grid .g-start-4{grid-column-start:4}.panel-grid .g-start-5{grid-column-start:5}.panel-grid .g-start-6{grid-column-start:6}.panel-grid .g-start-7{grid-column-start:7}.panel-grid .g-start-8{grid-column-start:8}.panel-grid .g-start-9{grid-column-start:9}.panel-grid .g-start-10{grid-column-start:10}.panel-grid .g-start-11{grid-column-start:11}.panel-grid .g-start-12{grid-column-start:12}.panel-grid .g-start-13{grid-column-start:13}.panel-grid .g-start-14{grid-column-start:14}.panel-grid .g-start-15{grid-column-start:15}.panel-grid .g-start-16{grid-column-start:16}.panel-grid .g-start-17{grid-column-start:17}.panel-grid .g-start-18{grid-column-start:18}.panel-grid .g-start-19{grid-column-start:19}.panel-grid .g-start-20{grid-column-start:20}.panel-grid .g-start-21{grid-column-start:21}.panel-grid .g-start-22{grid-column-start:22}.panel-grid .g-start-23{grid-column-start:23}@media(min-width: 576px){.panel-grid .g-col-sm-1{grid-column:auto/span 1}.panel-grid .g-col-sm-2{grid-column:auto/span 2}.panel-grid .g-col-sm-3{grid-column:auto/span 3}.panel-grid .g-col-sm-4{grid-column:auto/span 4}.panel-grid .g-col-sm-5{grid-column:auto/span 5}.panel-grid .g-col-sm-6{grid-column:auto/span 6}.panel-grid .g-col-sm-7{grid-column:auto/span 7}.panel-grid .g-col-sm-8{grid-column:auto/span 8}.panel-grid .g-col-sm-9{grid-column:auto/span 9}.panel-grid .g-col-sm-10{grid-column:auto/span 10}.panel-grid .g-col-sm-11{grid-column:auto/span 11}.panel-grid .g-col-sm-12{grid-column:auto/span 12}.panel-grid .g-col-sm-13{grid-column:auto/span 13}.panel-grid .g-col-sm-14{grid-column:auto/span 14}.panel-grid .g-col-sm-15{grid-column:auto/span 15}.panel-grid .g-col-sm-16{grid-column:auto/span 16}.panel-grid .g-col-sm-17{grid-column:auto/span 17}.panel-grid .g-col-sm-18{grid-column:auto/span 18}.panel-grid .g-col-sm-19{grid-column:auto/span 19}.panel-grid .g-col-sm-20{grid-column:auto/span 
20}.panel-grid .g-col-sm-21{grid-column:auto/span 21}.panel-grid .g-col-sm-22{grid-column:auto/span 22}.panel-grid .g-col-sm-23{grid-column:auto/span 23}.panel-grid .g-col-sm-24{grid-column:auto/span 24}.panel-grid .g-start-sm-1{grid-column-start:1}.panel-grid .g-start-sm-2{grid-column-start:2}.panel-grid .g-start-sm-3{grid-column-start:3}.panel-grid .g-start-sm-4{grid-column-start:4}.panel-grid .g-start-sm-5{grid-column-start:5}.panel-grid .g-start-sm-6{grid-column-start:6}.panel-grid .g-start-sm-7{grid-column-start:7}.panel-grid .g-start-sm-8{grid-column-start:8}.panel-grid .g-start-sm-9{grid-column-start:9}.panel-grid .g-start-sm-10{grid-column-start:10}.panel-grid .g-start-sm-11{grid-column-start:11}.panel-grid .g-start-sm-12{grid-column-start:12}.panel-grid .g-start-sm-13{grid-column-start:13}.panel-grid .g-start-sm-14{grid-column-start:14}.panel-grid .g-start-sm-15{grid-column-start:15}.panel-grid .g-start-sm-16{grid-column-start:16}.panel-grid .g-start-sm-17{grid-column-start:17}.panel-grid .g-start-sm-18{grid-column-start:18}.panel-grid .g-start-sm-19{grid-column-start:19}.panel-grid .g-start-sm-20{grid-column-start:20}.panel-grid .g-start-sm-21{grid-column-start:21}.panel-grid .g-start-sm-22{grid-column-start:22}.panel-grid .g-start-sm-23{grid-column-start:23}}@media(min-width: 768px){.panel-grid .g-col-md-1{grid-column:auto/span 1}.panel-grid .g-col-md-2{grid-column:auto/span 2}.panel-grid .g-col-md-3{grid-column:auto/span 3}.panel-grid .g-col-md-4{grid-column:auto/span 4}.panel-grid .g-col-md-5{grid-column:auto/span 5}.panel-grid .g-col-md-6{grid-column:auto/span 6}.panel-grid .g-col-md-7{grid-column:auto/span 7}.panel-grid .g-col-md-8{grid-column:auto/span 8}.panel-grid .g-col-md-9{grid-column:auto/span 9}.panel-grid .g-col-md-10{grid-column:auto/span 10}.panel-grid .g-col-md-11{grid-column:auto/span 11}.panel-grid .g-col-md-12{grid-column:auto/span 12}.panel-grid .g-col-md-13{grid-column:auto/span 13}.panel-grid .g-col-md-14{grid-column:auto/span 14}.panel-grid .g-col-md-15{grid-column:auto/span 15}.panel-grid .g-col-md-16{grid-column:auto/span 16}.panel-grid .g-col-md-17{grid-column:auto/span 17}.panel-grid .g-col-md-18{grid-column:auto/span 18}.panel-grid .g-col-md-19{grid-column:auto/span 19}.panel-grid .g-col-md-20{grid-column:auto/span 20}.panel-grid .g-col-md-21{grid-column:auto/span 21}.panel-grid .g-col-md-22{grid-column:auto/span 22}.panel-grid .g-col-md-23{grid-column:auto/span 23}.panel-grid .g-col-md-24{grid-column:auto/span 24}.panel-grid .g-start-md-1{grid-column-start:1}.panel-grid .g-start-md-2{grid-column-start:2}.panel-grid .g-start-md-3{grid-column-start:3}.panel-grid .g-start-md-4{grid-column-start:4}.panel-grid .g-start-md-5{grid-column-start:5}.panel-grid .g-start-md-6{grid-column-start:6}.panel-grid .g-start-md-7{grid-column-start:7}.panel-grid .g-start-md-8{grid-column-start:8}.panel-grid .g-start-md-9{grid-column-start:9}.panel-grid .g-start-md-10{grid-column-start:10}.panel-grid .g-start-md-11{grid-column-start:11}.panel-grid .g-start-md-12{grid-column-start:12}.panel-grid .g-start-md-13{grid-column-start:13}.panel-grid .g-start-md-14{grid-column-start:14}.panel-grid .g-start-md-15{grid-column-start:15}.panel-grid .g-start-md-16{grid-column-start:16}.panel-grid .g-start-md-17{grid-column-start:17}.panel-grid .g-start-md-18{grid-column-start:18}.panel-grid .g-start-md-19{grid-column-start:19}.panel-grid .g-start-md-20{grid-column-start:20}.panel-grid .g-start-md-21{grid-column-start:21}.panel-grid .g-start-md-22{grid-column-start:22}.panel-grid 
.g-start-md-23{grid-column-start:23}}@media(min-width: 992px){.panel-grid .g-col-lg-1{grid-column:auto/span 1}.panel-grid .g-col-lg-2{grid-column:auto/span 2}.panel-grid .g-col-lg-3{grid-column:auto/span 3}.panel-grid .g-col-lg-4{grid-column:auto/span 4}.panel-grid .g-col-lg-5{grid-column:auto/span 5}.panel-grid .g-col-lg-6{grid-column:auto/span 6}.panel-grid .g-col-lg-7{grid-column:auto/span 7}.panel-grid .g-col-lg-8{grid-column:auto/span 8}.panel-grid .g-col-lg-9{grid-column:auto/span 9}.panel-grid .g-col-lg-10{grid-column:auto/span 10}.panel-grid .g-col-lg-11{grid-column:auto/span 11}.panel-grid .g-col-lg-12{grid-column:auto/span 12}.panel-grid .g-col-lg-13{grid-column:auto/span 13}.panel-grid .g-col-lg-14{grid-column:auto/span 14}.panel-grid .g-col-lg-15{grid-column:auto/span 15}.panel-grid .g-col-lg-16{grid-column:auto/span 16}.panel-grid .g-col-lg-17{grid-column:auto/span 17}.panel-grid .g-col-lg-18{grid-column:auto/span 18}.panel-grid .g-col-lg-19{grid-column:auto/span 19}.panel-grid .g-col-lg-20{grid-column:auto/span 20}.panel-grid .g-col-lg-21{grid-column:auto/span 21}.panel-grid .g-col-lg-22{grid-column:auto/span 22}.panel-grid .g-col-lg-23{grid-column:auto/span 23}.panel-grid .g-col-lg-24{grid-column:auto/span 24}.panel-grid .g-start-lg-1{grid-column-start:1}.panel-grid .g-start-lg-2{grid-column-start:2}.panel-grid .g-start-lg-3{grid-column-start:3}.panel-grid .g-start-lg-4{grid-column-start:4}.panel-grid .g-start-lg-5{grid-column-start:5}.panel-grid .g-start-lg-6{grid-column-start:6}.panel-grid .g-start-lg-7{grid-column-start:7}.panel-grid .g-start-lg-8{grid-column-start:8}.panel-grid .g-start-lg-9{grid-column-start:9}.panel-grid .g-start-lg-10{grid-column-start:10}.panel-grid .g-start-lg-11{grid-column-start:11}.panel-grid .g-start-lg-12{grid-column-start:12}.panel-grid .g-start-lg-13{grid-column-start:13}.panel-grid .g-start-lg-14{grid-column-start:14}.panel-grid .g-start-lg-15{grid-column-start:15}.panel-grid .g-start-lg-16{grid-column-start:16}.panel-grid .g-start-lg-17{grid-column-start:17}.panel-grid .g-start-lg-18{grid-column-start:18}.panel-grid .g-start-lg-19{grid-column-start:19}.panel-grid .g-start-lg-20{grid-column-start:20}.panel-grid .g-start-lg-21{grid-column-start:21}.panel-grid .g-start-lg-22{grid-column-start:22}.panel-grid .g-start-lg-23{grid-column-start:23}}@media(min-width: 1200px){.panel-grid .g-col-xl-1{grid-column:auto/span 1}.panel-grid .g-col-xl-2{grid-column:auto/span 2}.panel-grid .g-col-xl-3{grid-column:auto/span 3}.panel-grid .g-col-xl-4{grid-column:auto/span 4}.panel-grid .g-col-xl-5{grid-column:auto/span 5}.panel-grid .g-col-xl-6{grid-column:auto/span 6}.panel-grid .g-col-xl-7{grid-column:auto/span 7}.panel-grid .g-col-xl-8{grid-column:auto/span 8}.panel-grid .g-col-xl-9{grid-column:auto/span 9}.panel-grid .g-col-xl-10{grid-column:auto/span 10}.panel-grid .g-col-xl-11{grid-column:auto/span 11}.panel-grid .g-col-xl-12{grid-column:auto/span 12}.panel-grid .g-col-xl-13{grid-column:auto/span 13}.panel-grid .g-col-xl-14{grid-column:auto/span 14}.panel-grid .g-col-xl-15{grid-column:auto/span 15}.panel-grid .g-col-xl-16{grid-column:auto/span 16}.panel-grid .g-col-xl-17{grid-column:auto/span 17}.panel-grid .g-col-xl-18{grid-column:auto/span 18}.panel-grid .g-col-xl-19{grid-column:auto/span 19}.panel-grid .g-col-xl-20{grid-column:auto/span 20}.panel-grid .g-col-xl-21{grid-column:auto/span 21}.panel-grid .g-col-xl-22{grid-column:auto/span 22}.panel-grid .g-col-xl-23{grid-column:auto/span 23}.panel-grid .g-col-xl-24{grid-column:auto/span 24}.panel-grid 
.g-start-xl-1{grid-column-start:1}.panel-grid .g-start-xl-2{grid-column-start:2}.panel-grid .g-start-xl-3{grid-column-start:3}.panel-grid .g-start-xl-4{grid-column-start:4}.panel-grid .g-start-xl-5{grid-column-start:5}.panel-grid .g-start-xl-6{grid-column-start:6}.panel-grid .g-start-xl-7{grid-column-start:7}.panel-grid .g-start-xl-8{grid-column-start:8}.panel-grid .g-start-xl-9{grid-column-start:9}.panel-grid .g-start-xl-10{grid-column-start:10}.panel-grid .g-start-xl-11{grid-column-start:11}.panel-grid .g-start-xl-12{grid-column-start:12}.panel-grid .g-start-xl-13{grid-column-start:13}.panel-grid .g-start-xl-14{grid-column-start:14}.panel-grid .g-start-xl-15{grid-column-start:15}.panel-grid .g-start-xl-16{grid-column-start:16}.panel-grid .g-start-xl-17{grid-column-start:17}.panel-grid .g-start-xl-18{grid-column-start:18}.panel-grid .g-start-xl-19{grid-column-start:19}.panel-grid .g-start-xl-20{grid-column-start:20}.panel-grid .g-start-xl-21{grid-column-start:21}.panel-grid .g-start-xl-22{grid-column-start:22}.panel-grid .g-start-xl-23{grid-column-start:23}}@media(min-width: 1400px){.panel-grid .g-col-xxl-1{grid-column:auto/span 1}.panel-grid .g-col-xxl-2{grid-column:auto/span 2}.panel-grid .g-col-xxl-3{grid-column:auto/span 3}.panel-grid .g-col-xxl-4{grid-column:auto/span 4}.panel-grid .g-col-xxl-5{grid-column:auto/span 5}.panel-grid .g-col-xxl-6{grid-column:auto/span 6}.panel-grid .g-col-xxl-7{grid-column:auto/span 7}.panel-grid .g-col-xxl-8{grid-column:auto/span 8}.panel-grid .g-col-xxl-9{grid-column:auto/span 9}.panel-grid .g-col-xxl-10{grid-column:auto/span 10}.panel-grid .g-col-xxl-11{grid-column:auto/span 11}.panel-grid .g-col-xxl-12{grid-column:auto/span 12}.panel-grid .g-col-xxl-13{grid-column:auto/span 13}.panel-grid .g-col-xxl-14{grid-column:auto/span 14}.panel-grid .g-col-xxl-15{grid-column:auto/span 15}.panel-grid .g-col-xxl-16{grid-column:auto/span 16}.panel-grid .g-col-xxl-17{grid-column:auto/span 17}.panel-grid .g-col-xxl-18{grid-column:auto/span 18}.panel-grid .g-col-xxl-19{grid-column:auto/span 19}.panel-grid .g-col-xxl-20{grid-column:auto/span 20}.panel-grid .g-col-xxl-21{grid-column:auto/span 21}.panel-grid .g-col-xxl-22{grid-column:auto/span 22}.panel-grid .g-col-xxl-23{grid-column:auto/span 23}.panel-grid .g-col-xxl-24{grid-column:auto/span 24}.panel-grid .g-start-xxl-1{grid-column-start:1}.panel-grid .g-start-xxl-2{grid-column-start:2}.panel-grid .g-start-xxl-3{grid-column-start:3}.panel-grid .g-start-xxl-4{grid-column-start:4}.panel-grid .g-start-xxl-5{grid-column-start:5}.panel-grid .g-start-xxl-6{grid-column-start:6}.panel-grid .g-start-xxl-7{grid-column-start:7}.panel-grid .g-start-xxl-8{grid-column-start:8}.panel-grid .g-start-xxl-9{grid-column-start:9}.panel-grid .g-start-xxl-10{grid-column-start:10}.panel-grid .g-start-xxl-11{grid-column-start:11}.panel-grid .g-start-xxl-12{grid-column-start:12}.panel-grid .g-start-xxl-13{grid-column-start:13}.panel-grid .g-start-xxl-14{grid-column-start:14}.panel-grid .g-start-xxl-15{grid-column-start:15}.panel-grid .g-start-xxl-16{grid-column-start:16}.panel-grid .g-start-xxl-17{grid-column-start:17}.panel-grid .g-start-xxl-18{grid-column-start:18}.panel-grid .g-start-xxl-19{grid-column-start:19}.panel-grid .g-start-xxl-20{grid-column-start:20}.panel-grid .g-start-xxl-21{grid-column-start:21}.panel-grid .g-start-xxl-22{grid-column-start:22}.panel-grid 
.g-start-xxl-23{grid-column-start:23}}main{margin-top:1em;margin-bottom:1em}h1,.h1,h2,.h2{color:inherit;margin-top:2rem;margin-bottom:1rem;font-weight:600}h1.title,.title.h1{margin-top:0}main.content>section:first-of-type>h2:first-child,main.content>section:first-of-type>.h2:first-child{margin-top:0}h2,.h2{border-bottom:1px solid #dee2e6;padding-bottom:.5rem}h3,.h3{font-weight:600}h3,.h3,h4,.h4{opacity:.9;margin-top:1.5rem}h5,.h5,h6,.h6{opacity:.9}.header-section-number{color:#6d7a86}.nav-link.active .header-section-number{color:inherit}mark,.mark{padding:0em}.panel-caption,.figure-caption,.subfigure-caption,.table-caption,figcaption,caption{font-size:.9rem;color:#6d7a86}.quarto-layout-cell[data-ref-parent] caption{color:#6d7a86}.column-margin figcaption,.margin-caption,div.aside,aside,.column-margin{color:#6d7a86;font-size:.825rem}.panel-caption.margin-caption{text-align:inherit}.column-margin.column-container p{margin-bottom:0}.column-margin.column-container>*:not(.collapse):first-child{padding-bottom:.5em;display:block}.column-margin.column-container>*:not(.collapse):not(:first-child){padding-top:.5em;padding-bottom:.5em;display:block}.column-margin.column-container>*.collapse:not(.show){display:none}@media(min-width: 768px){.column-margin.column-container .callout-margin-content:first-child{margin-top:4.5em}.column-margin.column-container .callout-margin-content-simple:first-child{margin-top:3.5em}}.margin-caption>*{padding-top:.5em;padding-bottom:.5em}@media(max-width: 767.98px){.quarto-layout-row{flex-direction:column}}.nav-tabs .nav-item{margin-top:1px;cursor:pointer}.tab-content{margin-top:0px;border-left:#dee2e6 1px solid;border-right:#dee2e6 1px solid;border-bottom:#dee2e6 1px solid;margin-left:0;padding:1em;margin-bottom:1em}@media(max-width: 767.98px){.layout-sidebar{margin-left:0;margin-right:0}}.panel-sidebar,.panel-sidebar .form-control,.panel-input,.panel-input .form-control,.selectize-dropdown{font-size:.9rem}.panel-sidebar .form-control,.panel-input .form-control{padding-top:.1rem}.tab-pane div.sourceCode{margin-top:0px}.tab-pane>p{padding-top:0}.tab-pane>p:nth-child(1){padding-top:0}.tab-pane>p:last-child{margin-bottom:0}.tab-pane>pre:last-child{margin-bottom:0}.tab-content>.tab-pane:not(.active){display:none !important}div.sourceCode{background-color:rgba(233,236,239,.65);border:1px solid rgba(233,236,239,.65);border-radius:.25rem}pre.sourceCode{background-color:rgba(0,0,0,0)}pre.sourceCode{border:none;font-size:.875em;overflow:visible !important;padding:.4em}.callout pre.sourceCode{padding-left:0}div.sourceCode{overflow-y:hidden}.callout div.sourceCode{margin-left:initial}.blockquote{font-size:inherit;padding-left:1rem;padding-right:1.5rem;color:#6d7a86}.blockquote h1:first-child,.blockquote .h1:first-child,.blockquote h2:first-child,.blockquote .h2:first-child,.blockquote h3:first-child,.blockquote .h3:first-child,.blockquote h4:first-child,.blockquote .h4:first-child,.blockquote h5:first-child,.blockquote .h5:first-child{margin-top:0}pre{background-color:initial;padding:initial;border:initial}p pre code:not(.sourceCode),li pre code:not(.sourceCode),pre code:not(.sourceCode){background-color:initial}p code:not(.sourceCode),li code:not(.sourceCode),td code:not(.sourceCode){background-color:#f8f9fa;padding:.2em}nav p code:not(.sourceCode),nav li code:not(.sourceCode),nav td code:not(.sourceCode){background-color:rgba(0,0,0,0);padding:0}td 
code:not(.sourceCode){white-space:pre-wrap}#quarto-embedded-source-code-modal>.modal-dialog{max-width:1000px;padding-left:1.75rem;padding-right:1.75rem}#quarto-embedded-source-code-modal>.modal-dialog>.modal-content>.modal-body{padding:0}#quarto-embedded-source-code-modal>.modal-dialog>.modal-content>.modal-body div.sourceCode{margin:0;padding:.2rem .2rem;border-radius:0px;border:none}#quarto-embedded-source-code-modal>.modal-dialog>.modal-content>.modal-header{padding:.7rem}.code-tools-button{font-size:1rem;padding:.15rem .15rem;margin-left:5px;color:#6c757d;background-color:rgba(0,0,0,0);transition:initial;cursor:pointer}.code-tools-button>.bi::before{display:inline-block;height:1rem;width:1rem;content:"";vertical-align:-0.125em;background-image:url('data:image/svg+xml,');background-repeat:no-repeat;background-size:1rem 1rem}.code-tools-button:hover>.bi::before{background-image:url('data:image/svg+xml,')}#quarto-embedded-source-code-modal .code-copy-button>.bi::before{background-image:url('data:image/svg+xml,')}#quarto-embedded-source-code-modal .code-copy-button-checked>.bi::before{background-image:url('data:image/svg+xml,')}.sidebar{will-change:top;transition:top 200ms linear;position:sticky;overflow-y:auto;padding-top:1.2em;max-height:100vh}.sidebar.toc-left,.sidebar.margin-sidebar{top:0px;padding-top:1em}.sidebar.quarto-banner-title-block-sidebar>*{padding-top:1.65em}figure .quarto-notebook-link{margin-top:.5em}.quarto-notebook-link{font-size:.75em;color:#6c757d;margin-bottom:1em;text-decoration:none;display:block}.quarto-notebook-link:hover{text-decoration:underline;color:#2761e3}.quarto-notebook-link::before{display:inline-block;height:.75rem;width:.75rem;margin-bottom:0em;margin-right:.25em;content:"";vertical-align:-0.125em;background-image:url('data:image/svg+xml,');background-repeat:no-repeat;background-size:.75rem .75rem}.toc-actions i.bi,.quarto-code-links i.bi,.quarto-other-links i.bi,.quarto-alternate-notebooks i.bi,.quarto-alternate-formats i.bi{margin-right:.4em;font-size:.8rem}.quarto-other-links-text-target .quarto-code-links i.bi,.quarto-other-links-text-target .quarto-other-links i.bi{margin-right:.2em}.quarto-other-formats-text-target .quarto-alternate-formats i.bi{margin-right:.1em}.toc-actions i.bi.empty,.quarto-code-links i.bi.empty,.quarto-other-links i.bi.empty,.quarto-alternate-notebooks i.bi.empty,.quarto-alternate-formats i.bi.empty{padding-left:1em}.quarto-notebook h2,.quarto-notebook .h2{border-bottom:none}.quarto-notebook .cell-container{display:flex}.quarto-notebook .cell-container .cell{flex-grow:4}.quarto-notebook .cell-container .cell-decorator{padding-top:1.5em;padding-right:1em;text-align:right}.quarto-notebook .cell-container.code-fold .cell-decorator{padding-top:3em}.quarto-notebook .cell-code code{white-space:pre-wrap}.quarto-notebook .cell .cell-output-stderr pre code,.quarto-notebook .cell .cell-output-stdout pre code{white-space:pre-wrap;overflow-wrap:anywhere}.toc-actions,.quarto-alternate-formats,.quarto-other-links,.quarto-code-links,.quarto-alternate-notebooks{padding-left:0em}.sidebar .toc-actions a,.sidebar .quarto-alternate-formats a,.sidebar .quarto-other-links a,.sidebar .quarto-code-links a,.sidebar .quarto-alternate-notebooks a,.sidebar nav[role=doc-toc] a{text-decoration:none}.sidebar .toc-actions a:hover,.sidebar .quarto-other-links a:hover,.sidebar .quarto-code-links a:hover,.sidebar .quarto-alternate-formats a:hover,.sidebar .quarto-alternate-notebooks a:hover{color:#2761e3}.sidebar .toc-actions h2,.sidebar .toc-actions 
e=z.getElementFromSelector(this);["A","AREA"].includes(this.tagName)&&t.preventDefault(),N.one(e,pn,(t=>{t.defaultPrevented||N.one(e,fn,(()=>{a(this)&&this.focus()}))}));const i=z.findOne(".modal.show");i&&On.getInstance(i).hide(),On.getOrCreateInstance(e).toggle(this)})),R(On),m(On);const xn=".bs.offcanvas",kn=".data-api",Ln=`load${xn}${kn}`,Sn="show",Dn="showing",$n="hiding",In=".offcanvas.show",Nn=`show${xn}`,Pn=`shown${xn}`,Mn=`hide${xn}`,jn=`hidePrevented${xn}`,Fn=`hidden${xn}`,Hn=`resize${xn}`,Wn=`click${xn}${kn}`,Bn=`keydown.dismiss${xn}`,zn={backdrop:!0,keyboard:!0,scroll:!1},Rn={backdrop:"(boolean|string)",keyboard:"boolean",scroll:"boolean"};class qn extends W{constructor(t,e){super(t,e),this._isShown=!1,this._backdrop=this._initializeBackDrop(),this._focustrap=this._initializeFocusTrap(),this._addEventListeners()}static get Default(){return zn}static get DefaultType(){return Rn}static get NAME(){return"offcanvas"}toggle(t){return this._isShown?this.hide():this.show(t)}show(t){this._isShown||N.trigger(this._element,Nn,{relatedTarget:t}).defaultPrevented||(this._isShown=!0,this._backdrop.show(),this._config.scroll||(new cn).hide(),this._element.setAttribute("aria-modal",!0),this._element.setAttribute("role","dialog"),this._element.classList.add(Dn),this._queueCallback((()=>{this._config.scroll&&!this._config.backdrop||this._focustrap.activate(),this._element.classList.add(Sn),this._element.classList.remove(Dn),N.trigger(this._element,Pn,{relatedTarget:t})}),this._element,!0))}hide(){this._isShown&&(N.trigger(this._element,Mn).defaultPrevented||(this._focustrap.deactivate(),this._element.blur(),this._isShown=!1,this._element.classList.add($n),this._backdrop.hide(),this._queueCallback((()=>{this._element.classList.remove(Sn,$n),this._element.removeAttribute("aria-modal"),this._element.removeAttribute("role"),this._config.scroll||(new cn).reset(),N.trigger(this._element,Fn)}),this._element,!0)))}dispose(){this._backdrop.dispose(),this._focustrap.deactivate(),super.dispose()}_initializeBackDrop(){const t=Boolean(this._config.backdrop);return new Ui({className:"offcanvas-backdrop",isVisible:t,isAnimated:!0,rootElement:this._element.parentNode,clickCallback:t?()=>{"static"!==this._config.backdrop?this.hide():N.trigger(this._element,jn)}:null})}_initializeFocusTrap(){return new sn({trapElement:this._element})}_addEventListeners(){N.on(this._element,Bn,(t=>{"Escape"===t.key&&(this._config.keyboard?this.hide():N.trigger(this._element,jn))}))}static jQueryInterface(t){return this.each((function(){const e=qn.getOrCreateInstance(this,t);if("string"==typeof t){if(void 0===e[t]||t.startsWith("_")||"constructor"===t)throw new TypeError(`No method named "${t}"`);e[t](this)}}))}}N.on(document,Wn,'[data-bs-toggle="offcanvas"]',(function(t){const e=z.getElementFromSelector(this);if(["A","AREA"].includes(this.tagName)&&t.preventDefault(),l(this))return;N.one(e,Fn,(()=>{a(this)&&this.focus()}));const i=z.findOne(In);i&&i!==e&&qn.getInstance(i).hide(),qn.getOrCreateInstance(e).toggle(this)})),N.on(window,Ln,(()=>{for(const t of z.find(In))qn.getOrCreateInstance(t).show()})),N.on(window,Hn,(()=>{for(const t of z.find("[aria-modal][class*=show][class*=offcanvas-]"))"fixed"!==getComputedStyle(t).position&&qn.getOrCreateInstance(t).hide()})),R(qn),m(qn);const 
Vn={"*":["class","dir","id","lang","role",/^aria-[\w-]*$/i],a:["target","href","title","rel"],area:[],b:[],br:[],col:[],code:[],div:[],em:[],hr:[],h1:[],h2:[],h3:[],h4:[],h5:[],h6:[],i:[],img:["src","srcset","alt","title","width","height"],li:[],ol:[],p:[],pre:[],s:[],small:[],span:[],sub:[],sup:[],strong:[],u:[],ul:[]},Kn=new Set(["background","cite","href","itemtype","longdesc","poster","src","xlink:href"]),Qn=/^(?!javascript:)(?:[a-z0-9+.-]+:|[^&:/?#]*(?:[/?#]|$))/i,Xn=(t,e)=>{const i=t.nodeName.toLowerCase();return e.includes(i)?!Kn.has(i)||Boolean(Qn.test(t.nodeValue)):e.filter((t=>t instanceof RegExp)).some((t=>t.test(i)))},Yn={allowList:Vn,content:{},extraClass:"",html:!1,sanitize:!0,sanitizeFn:null,template:""},Un={allowList:"object",content:"object",extraClass:"(string|function)",html:"boolean",sanitize:"boolean",sanitizeFn:"(null|function)",template:"string"},Gn={entry:"(string|element|function|null)",selector:"(string|element)"};class Jn extends H{constructor(t){super(),this._config=this._getConfig(t)}static get Default(){return Yn}static get DefaultType(){return Un}static get NAME(){return"TemplateFactory"}getContent(){return Object.values(this._config.content).map((t=>this._resolvePossibleFunction(t))).filter(Boolean)}hasContent(){return this.getContent().length>0}changeContent(t){return this._checkContent(t),this._config.content={...this._config.content,...t},this}toHtml(){const t=document.createElement("div");t.innerHTML=this._maybeSanitize(this._config.template);for(const[e,i]of Object.entries(this._config.content))this._setContent(t,i,e);const e=t.children[0],i=this._resolvePossibleFunction(this._config.extraClass);return i&&e.classList.add(...i.split(" ")),e}_typeCheckConfig(t){super._typeCheckConfig(t),this._checkContent(t.content)}_checkContent(t){for(const[e,i]of Object.entries(t))super._typeCheckConfig({selector:e,entry:i},Gn)}_setContent(t,e,i){const n=z.findOne(i,t);n&&((e=this._resolvePossibleFunction(e))?o(e)?this._putElementInTemplate(r(e),n):this._config.html?n.innerHTML=this._maybeSanitize(e):n.textContent=e:n.remove())}_maybeSanitize(t){return this._config.sanitize?function(t,e,i){if(!t.length)return t;if(i&&"function"==typeof i)return i(t);const n=(new window.DOMParser).parseFromString(t,"text/html"),s=[].concat(...n.body.querySelectorAll("*"));for(const t of s){const i=t.nodeName.toLowerCase();if(!Object.keys(e).includes(i)){t.remove();continue}const n=[].concat(...t.attributes),s=[].concat(e["*"]||[],e[i]||[]);for(const e of n)Xn(e,s)||t.removeAttribute(e.nodeName)}return n.body.innerHTML}(t,this._config.allowList,this._config.sanitizeFn):t}_resolvePossibleFunction(t){return g(t,[this])}_putElementInTemplate(t,e){if(this._config.html)return e.innerHTML="",void e.append(t);e.textContent=t.textContent}}const Zn=new Set(["sanitize","allowList","sanitizeFn"]),ts="fade",es="show",is=".modal",ns="hide.bs.modal",ss="hover",os="focus",rs={AUTO:"auto",TOP:"top",RIGHT:p()?"left":"right",BOTTOM:"bottom",LEFT:p()?"right":"left"},as={allowList:Vn,animation:!0,boundary:"clippingParents",container:!1,customClass:"",delay:0,fallbackPlacements:["top","right","bottom","left"],html:!1,offset:[0,6],placement:"top",popperConfig:null,sanitize:!0,sanitizeFn:null,selector:!1,template:'
',title:"",trigger:"hover focus"},ls={allowList:"object",animation:"boolean",boundary:"(string|element)",container:"(string|element|boolean)",customClass:"(string|function)",delay:"(number|object)",fallbackPlacements:"array",html:"boolean",offset:"(array|string|function)",placement:"(string|function)",popperConfig:"(null|object|function)",sanitize:"boolean",sanitizeFn:"(null|function)",selector:"(string|boolean)",template:"string",title:"(string|element|function)",trigger:"string"};class cs extends W{constructor(t,e){if(void 0===vi)throw new TypeError("Bootstrap's tooltips require Popper (https://popper.js.org)");super(t,e),this._isEnabled=!0,this._timeout=0,this._isHovered=null,this._activeTrigger={},this._popper=null,this._templateFactory=null,this._newContent=null,this.tip=null,this._setListeners(),this._config.selector||this._fixTitle()}static get Default(){return as}static get DefaultType(){return ls}static get NAME(){return"tooltip"}enable(){this._isEnabled=!0}disable(){this._isEnabled=!1}toggleEnabled(){this._isEnabled=!this._isEnabled}toggle(){this._isEnabled&&(this._activeTrigger.click=!this._activeTrigger.click,this._isShown()?this._leave():this._enter())}dispose(){clearTimeout(this._timeout),N.off(this._element.closest(is),ns,this._hideModalHandler),this._element.getAttribute("data-bs-original-title")&&this._element.setAttribute("title",this._element.getAttribute("data-bs-original-title")),this._disposePopper(),super.dispose()}show(){if("none"===this._element.style.display)throw new Error("Please use show on visible elements");if(!this._isWithContent()||!this._isEnabled)return;const t=N.trigger(this._element,this.constructor.eventName("show")),e=(c(this._element)||this._element.ownerDocument.documentElement).contains(this._element);if(t.defaultPrevented||!e)return;this._disposePopper();const i=this._getTipElement();this._element.setAttribute("aria-describedby",i.getAttribute("id"));const{container:n}=this._config;if(this._element.ownerDocument.documentElement.contains(this.tip)||(n.append(i),N.trigger(this._element,this.constructor.eventName("inserted"))),this._popper=this._createPopper(i),i.classList.add(es),"ontouchstart"in document.documentElement)for(const t of[].concat(...document.body.children))N.on(t,"mouseover",h);this._queueCallback((()=>{N.trigger(this._element,this.constructor.eventName("shown")),!1===this._isHovered&&this._leave(),this._isHovered=!1}),this.tip,this._isAnimated())}hide(){if(this._isShown()&&!N.trigger(this._element,this.constructor.eventName("hide")).defaultPrevented){if(this._getTipElement().classList.remove(es),"ontouchstart"in document.documentElement)for(const t of[].concat(...document.body.children))N.off(t,"mouseover",h);this._activeTrigger.click=!1,this._activeTrigger[os]=!1,this._activeTrigger[ss]=!1,this._isHovered=null,this._queueCallback((()=>{this._isWithActiveTrigger()||(this._isHovered||this._disposePopper(),this._element.removeAttribute("aria-describedby"),N.trigger(this._element,this.constructor.eventName("hidden")))}),this.tip,this._isAnimated())}}update(){this._popper&&this._popper.update()}_isWithContent(){return Boolean(this._getTitle())}_getTipElement(){return this.tip||(this.tip=this._createTipElement(this._newContent||this._getContentForTemplate())),this.tip}_createTipElement(t){const e=this._getTemplateFactory(t).toHtml();if(!e)return null;e.classList.remove(ts,es),e.classList.add(`bs-${this.constructor.NAME}-auto`);const i=(t=>{do{t+=Math.floor(1e6*Math.random())}while(document.getElementById(t));return 
t})(this.constructor.NAME).toString();return e.setAttribute("id",i),this._isAnimated()&&e.classList.add(ts),e}setContent(t){this._newContent=t,this._isShown()&&(this._disposePopper(),this.show())}_getTemplateFactory(t){return this._templateFactory?this._templateFactory.changeContent(t):this._templateFactory=new Jn({...this._config,content:t,extraClass:this._resolvePossibleFunction(this._config.customClass)}),this._templateFactory}_getContentForTemplate(){return{".tooltip-inner":this._getTitle()}}_getTitle(){return this._resolvePossibleFunction(this._config.title)||this._element.getAttribute("data-bs-original-title")}_initializeOnDelegatedTarget(t){return this.constructor.getOrCreateInstance(t.delegateTarget,this._getDelegateConfig())}_isAnimated(){return this._config.animation||this.tip&&this.tip.classList.contains(ts)}_isShown(){return this.tip&&this.tip.classList.contains(es)}_createPopper(t){const e=g(this._config.placement,[this,t,this._element]),i=rs[e.toUpperCase()];return bi(this._element,t,this._getPopperConfig(i))}_getOffset(){const{offset:t}=this._config;return"string"==typeof t?t.split(",").map((t=>Number.parseInt(t,10))):"function"==typeof t?e=>t(e,this._element):t}_resolvePossibleFunction(t){return g(t,[this._element])}_getPopperConfig(t){const e={placement:t,modifiers:[{name:"flip",options:{fallbackPlacements:this._config.fallbackPlacements}},{name:"offset",options:{offset:this._getOffset()}},{name:"preventOverflow",options:{boundary:this._config.boundary}},{name:"arrow",options:{element:`.${this.constructor.NAME}-arrow`}},{name:"preSetPlacement",enabled:!0,phase:"beforeMain",fn:t=>{this._getTipElement().setAttribute("data-popper-placement",t.state.placement)}}]};return{...e,...g(this._config.popperConfig,[e])}}_setListeners(){const t=this._config.trigger.split(" ");for(const e of t)if("click"===e)N.on(this._element,this.constructor.eventName("click"),this._config.selector,(t=>{this._initializeOnDelegatedTarget(t).toggle()}));else if("manual"!==e){const t=e===ss?this.constructor.eventName("mouseenter"):this.constructor.eventName("focusin"),i=e===ss?this.constructor.eventName("mouseleave"):this.constructor.eventName("focusout");N.on(this._element,t,this._config.selector,(t=>{const e=this._initializeOnDelegatedTarget(t);e._activeTrigger["focusin"===t.type?os:ss]=!0,e._enter()})),N.on(this._element,i,this._config.selector,(t=>{const e=this._initializeOnDelegatedTarget(t);e._activeTrigger["focusout"===t.type?os:ss]=e._element.contains(t.relatedTarget),e._leave()}))}this._hideModalHandler=()=>{this._element&&this.hide()},N.on(this._element.closest(is),ns,this._hideModalHandler)}_fixTitle(){const t=this._element.getAttribute("title");t&&(this._element.getAttribute("aria-label")||this._element.textContent.trim()||this._element.setAttribute("aria-label",t),this._element.setAttribute("data-bs-original-title",t),this._element.removeAttribute("title"))}_enter(){this._isShown()||this._isHovered?this._isHovered=!0:(this._isHovered=!0,this._setTimeout((()=>{this._isHovered&&this.show()}),this._config.delay.show))}_leave(){this._isWithActiveTrigger()||(this._isHovered=!1,this._setTimeout((()=>{this._isHovered||this.hide()}),this._config.delay.hide))}_setTimeout(t,e){clearTimeout(this._timeout),this._timeout=setTimeout(t,e)}_isWithActiveTrigger(){return Object.values(this._activeTrigger).includes(!0)}_getConfig(t){const e=F.getDataAttributes(this._element);for(const t of Object.keys(e))Zn.has(t)&&delete e[t];return t={...e,..."object"==typeof 
t&&t?t:{}},t=this._mergeConfigObj(t),t=this._configAfterMerge(t),this._typeCheckConfig(t),t}_configAfterMerge(t){return t.container=!1===t.container?document.body:r(t.container),"number"==typeof t.delay&&(t.delay={show:t.delay,hide:t.delay}),"number"==typeof t.title&&(t.title=t.title.toString()),"number"==typeof t.content&&(t.content=t.content.toString()),t}_getDelegateConfig(){const t={};for(const[e,i]of Object.entries(this._config))this.constructor.Default[e]!==i&&(t[e]=i);return t.selector=!1,t.trigger="manual",t}_disposePopper(){this._popper&&(this._popper.destroy(),this._popper=null),this.tip&&(this.tip.remove(),this.tip=null)}static jQueryInterface(t){return this.each((function(){const e=cs.getOrCreateInstance(this,t);if("string"==typeof t){if(void 0===e[t])throw new TypeError(`No method named "${t}"`);e[t]()}}))}}m(cs);const hs={...cs.Default,content:"",offset:[0,8],placement:"right",template:'