I’m making another, more thorough pass through course.fast.ai, including all the notebooks and videos, and this time I’m going to focus more on the projects. I’ll also be logging a lot more notes, since doing so is by far the most effective way for me to learn.
The course materials are very detailed, but I’ve still run into some rough edges. The bird vs. forest image classifier didn’t quite work until I modified the image-search step. Also, the recommended way to work through the textbook notebooks is Google Colab, which requests a large number of permissions on my Google account while masquerading as “Google Drive for Desktop”, and that doesn’t make me feel great. I was able to run most of the examples on my personal computer, but training the model for the IMDB movie review classifier was quite slow. I decided it might be worth trying out Colab, since I imagine there will be several more models of this size and complexity that I’ll want to train, and finding a reasonably fast way to do that will be useful. I went back to the Colab notebook and tried running the cat-or-not classification example. This seemed to take longer than it did on my local machine, with an apparent ETA of ~30 minutes.
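For context, the cat-or-not model is the chapter 1 classifier trained on the Oxford-IIIT Pet dataset; the notebook’s code is roughly along these lines (a sketch from memory, so exact argument values may differ from the published notebook):

```python
from fastai.vision.all import *

# Oxford-IIIT Pet images: cat breeds have capitalized filenames.
path = untar_data(URLs.PETS)/'images'

def is_cat(x): return x[0].isupper()

dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path), valid_pct=0.2, seed=42,
    label_func=is_cat, item_tfms=Resize(224))

# Fine-tuning a pretrained ResNet for a single epoch is the step
# that showed an ETA of ~30 minutes on the free Colab tier.
learn = vision_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(1)
```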
I probably need to subscribe to Colab if I want this to be performant, but the free tier no longer seems to be a great option. I guess deep learning and this course have become more popular than Google (understandably) wants to foot the bill for. If local training becomes prohibitive, I might subscribe, but considering the book is several years old, it’s possible that hardware has caught up enough to handle the examples. I guess I’ll find out.
Using the handy and recommended doc command, I learned how the magic untar_data function works – mostly, where it downloads the data.
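Calling doc on any fastai function pops up its signature along with links to the source and the full documentation; a minimal sketch:

```python
from fastai.vision.all import *  # exports doc() along with untar_data

# In a notebook this displays untar_data's signature plus links
# to its source code and the documentation page.
doc(untar_data)
```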
untar_data(url:str, archive:pathlib.Path=None, data:pathlib.Path=None, c_key:str='data', force_download:bool=False, base:str='~/.fastai')
Unsurprisingly, I found all the datasets I’d downloaded so far in ~/.fastai, and was also reminded how many dotfiles/folders were in my home directory from software I had tried once.
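A quick way to see exactly where a dataset lands is to look at the Path that untar_data returns (a sketch; the folder name under ~/.fastai/data depends on the dataset):

```python
from fastai.vision.all import *

# untar_data downloads the archive if needed, extracts it, and returns
# the local path, which lives under ~/.fastai/data by default.
path = untar_data(URLs.PETS)
print(path)  # e.g. /home/<user>/.fastai/data/oxford-iiit-pet
```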
To put it bluntly, if you’re a senior decision maker in your organization (or you’re advising senior decision makers), the most important takeaway is this: if you ensure that you really understand what test and validation sets are and why they’re important, then you’ll avoid the single biggest source of failures we’ve seen when organizations decide to use AI. For instance, if you’re considering bringing in an external vendor or service, make sure that you hold out some test data that the vendor never gets to see. Then you check their model on your test data, using a metric that you choose based on what actually matters to you in practice, and you decide what level of performance is adequate. (It’s also a good idea for you to try out some simple baseline yourself, so you know what a really simple model can achieve. Often it’ll turn out that your simple model performs just as well as one produced by an external “expert”!)
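To make that advice concrete, here is a minimal sketch (using scikit-learn with synthetic stand-in data, not anything from the book) of holding out a test set, establishing a really simple baseline, and scoring a delivered model against it:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Stand-in data; in practice this would be your organization's labelled examples.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Hold out a test set that the external vendor never gets to see.
X_dev, X_test, y_dev, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# A really simple baseline, so you know what "no real model" achieves.
baseline = DummyClassifier(strategy="most_frequent").fit(X_dev, y_dev)
print("baseline accuracy:", accuracy_score(y_test, baseline.predict(X_test)))

# Whatever the vendor delivers (a logistic regression here as a stand-in)
# gets scored on the same held-out data, using a metric chosen because it
# reflects what actually matters in practice.
vendor_model = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)
print("vendor accuracy:", accuracy_score(y_test, vendor_model.predict(X_test)))
```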