Lecture 1 - PyTorch Basics & Linear Regression

Hello there @PrajwalPrashanth, sorry for the late reply.
I get this error message when I run import torch and the rest of the code, but plain Python works fine.

I already marked the attendance; will that be okay?

I get the error when I try to import torch. I already uncommented the installation line, and it says:

Collecting package metadata (current_repodata.json): …working… done
Solving environment: …working… done

All requested packages already installed.

Then when I try to run the next cell:

import torch

I get the errors shown in the two attached screenshots.
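For reference, here is a quick check that can be run in a notebook cell to see which Python environment the kernel is actually using (just a generic debugging sketch, not an official fix). If torch was installed into a different conda environment than the one the kernel points at, the import will fail even though conda reports it as installed:

```python
import sys

# Which Python interpreter is the notebook kernel actually running?
print(sys.executable)
print(sys.prefix)

# If torch was installed into a different environment than the one shown
# above, this import will fail even though conda says it is installed.
try:
    import torch
    print("torch", torch.__version__, "imported from", torch.__file__)
except ImportError as e:
    print("torch is not visible to this interpreter:", e)
```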

A humble request: please show the steps to install Jovian and how to use it in our notebooks.
It would be helpful - thank you!

I also got the same error. Now I am using my base conda environment, and there I am not getting any error.

@PrajwalPrashanth when I try to pip install jovian, I get the error "Some pip packages failed to install", as shown in the attached screenshot. Could this be causing the problem I mentioned above?

When you say you're running your base conda environment, what do you mean? I'm getting the same error as alvertosk84.

https://jovian.ml/aakashns/machine-learning-intro This is the first notebook. Don't worry if you're facing issues with installation; you have the option to run the notebook on Kaggle/Colab.

I will update the list of installation instructions later on.

Great session, Aakash!! Thanks!!

Do we get a confirmation that we have attended the first lecture?

Just finished lecture 1. I have been learning Python for over 4 months, and now I think data science is what I'm going to pursue as a career. Thank you so much. Looking forward to practicing right away and learning more.

It was a really good session to kick things off for everyone!!! Thanks, Aakash!!!

How do I mark my attendance?

Hey, great lecture, but it left me with a couple of questions:

  1. What happens if a weight, by chance, starts at a local maximum? Since the derivative would be zero, in which direction would the weight change?
  2. Can someone clarify what .backward() actually does?
  3. Is there any “rule of thumb” for the number of epochs, the training set size, and the learning rate/step size?
  4. Why is squared loss used as opposed to the absolute value?

It was an amazing session, sir.
Thanks a lot for making it free.

  1. Then you subtract zero from the weights, so they stay the same.
  2. It's there to actually calculate the gradients (I think they're not calculated automatically when you perform operations); see the sketch after this list.
  3. Sometimes you train until your loss stops decreasing (with no fixed number of epochs). What's really important is the data - the more the better.
  4. Largely a preference; the absolute value could be used as well (it's called L1 loss in PyTorch).
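To illustrate answers 2 and 4, here is a minimal toy sketch (my own example, not code from the lecture notebook) showing that .backward() is what fills in the gradients, and that PyTorch ships both squared-error (MSE) and absolute-value (L1) losses:

```python
import torch
import torch.nn.functional as F

# Toy data for a 1-D linear model: y = w*x + b
x = torch.tensor([[1.0], [2.0], [3.0]])
y = torch.tensor([[2.0], [4.0], [6.0]])

# requires_grad=True tells autograd to track operations on these tensors
w = torch.randn(1, 1, requires_grad=True)
b = torch.zeros(1, requires_grad=True)

pred = x @ w + b

# Squared (MSE) loss and absolute-value (L1) loss are both built in
mse = F.mse_loss(pred, y)
l1 = F.l1_loss(pred, y)
print("MSE:", mse.item(), "L1:", l1.item())

print(w.grad)        # None: no gradients have been computed yet
mse.backward()       # backpropagation: fills in .grad for w and b
print(w.grad, b.grad)

# One gradient-descent step; if the gradient were exactly zero,
# w and b would simply stay where they are.
with torch.no_grad():
    w -= 1e-2 * w.grad
    b -= 1e-2 * b.grad
```

Calling l1.backward() instead would drive the same update with the absolute-value loss.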

Finally found the attendance button. lol

Ah, thanks for the answers. If gradients aren’t automatically calculated, what does enable_grad do?
Is it just to “flag” them to be calculated when .backward() is called?

I suppose you mean requires_grad. Yes, without it, .grad is None.
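A tiny illustration of that (my own snippet, assuming standard PyTorch behaviour): requires_grad is the per-tensor flag that tells autograd to track a tensor, while torch.no_grad() / torch.enable_grad() are context managers that switch tracking off or back on globally.

```python
import torch

a = torch.tensor([2.0])                      # requires_grad defaults to False
b = torch.tensor([2.0], requires_grad=True)  # flagged for autograd tracking

out = (b * b).sum()
out.backward()      # computes d(out)/db and stores it in b.grad

print(a.grad)       # None: a was never tracked, so no gradient exists
print(b.grad)       # tensor([4.]) because d(b^2)/db = 2*b = 4

# Gradient tracking can also be toggled globally:
with torch.no_grad():            # nothing inside is tracked...
    with torch.enable_grad():    # ...except this block, which is tracked again
        c = (b * 3).sum()
print(c.requires_grad)           # True
```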

I also got the same issue.