Conversation
> @@ -0,0 +1,201 @@
> # Deep Neural Networks

Try to have a different format for the title, and write the authors' names in your lecture note.
> Deep learning is a subfield of machine learning that deals with algorithms inspired by the structure and function of the brain. Deep learning is a subset of machine learning, which is a part of artificial intelligence (AI).
>
> 
>
> CNN's are models to solve deep learning problems. Suppose that you have high-dimensional inputs such as images or videos. If we want to use MLPs, 2 (or more) dimensional inputs need to be converted to 1-dimensional vectors. This conversion increases the number of trainable parameters exponentially. Also, one important thing in these data is locality, it means that for example in an image, you can find features in near pixels (for examples corners and edges) but, far pixels can't give you efficient features. The solution for solving these problems is using CNNs.
>
> A CNN architecture is formed by a stack of distinct layers that transform the input volume into an output volume through a differentiable function. A few distinct types of layers are commonly used:
>
> * Fully Connected Layer
> * Convolutional layer

layer -> Layer (to be synced with other bullets)
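
As a quick aside on the parameter-explosion argument in the quoted introduction, here is a back-of-the-envelope calculation in Python (the 224×224 image size and the 1,000-unit hidden layer are made-up numbers for illustration, not taken from the lecture note):

```python
# Compare the weight count of an MLP's first dense layer on a flattened
# RGB image with the weight count of a single 3x3 convolutional kernel.
height, width, channels = 224, 224, 3          # illustrative image size

flattened_inputs = height * width * channels   # 150,528 input features after flattening
dense_weights = flattened_inputs * 1000        # ~150.5 million weights for 1,000 hidden units

conv_kernel_weights = 3 * 3 * channels         # 27 weights, shared across every image location

print(f"Dense first layer:   {dense_weights:,} weights")
print(f"One 3x3 conv kernel: {conv_kernel_weights} weights")
```

The same 27 kernel weights are reused at every spatial position, which is the locality and weight-sharing point the introduction is making.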
> @@ -0,0 +1,201 @@
> # Deep Neural Networks
>
> ## Table of Content

The order of the contents doesn't match the table. Revise them.
|  | ||
|
|
||
| ## Conv Layer | ||
| This layer is the main difference between CNNs and MLPs. Convolution in the word refers to two operators between two functions. In mathematics convolution define as below: |
|
|
||
|
|
||
| Here we’ll not talk about details, but convolutional layers are somehow enabling convolution operator on sub-matrices of the image. These layers have formed from some kernel with the same height, width, and depth. The number of these kernels is equal to the depth of the output. Also, the depth of each kernel must be equal to the depth of input. For example, if you have RGB data, your first convolutional layer kernels depth must be 3. | ||
| In the context of a convolutional neural network, convolution is a linear operation that involves the multiplication of a set of weights with the input. A convolution layer has formed by 1 or more of these operations that each of them called a kernel. All kernels have the same height, width, and depth. To find the output of the layer, we put the first kernel on the top-right of the input and calculate the output of the kernel, and put it as the first cell of a matrix. After that, we move it to right and calculate again, and put the result in the second cell. When we receive to end of columns, we move the kernel down. we do this until we rich to the end of the image. We do this for all kernels and this is how we make the output of the convolutional layer. |
|
|
||
|
|
||
| Here we’ll not talk about details, but convolutional layers are somehow enabling convolution operator on sub-matrices of the image. These layers have formed from some kernel with the same height, width, and depth. The number of these kernels is equal to the depth of the output. Also, the depth of each kernel must be equal to the depth of input. For example, if you have RGB data, your first convolutional layer kernels depth must be 3. | ||
| In the context of a convolutional neural network, convolution is a linear operation that involves the multiplication of a set of weights with the input. A convolution layer has formed by 1 or more of these operations that each of them called a kernel. All kernels have the same height, width, and depth. To find the output of the layer, we put the first kernel on the top-right of the input and calculate the output of the kernel, and put it as the first cell of a matrix. After that, we move it to right and calculate again, and put the result in the second cell. When we receive to end of columns, we move the kernel down. we do this until we rich to the end of the image. We do this for all kernels and this is how we make the output of the convolutional layer. |
|
|
||
|
|
||
| Here we’ll not talk about details, but convolutional layers are somehow enabling convolution operator on sub-matrices of the image. These layers have formed from some kernel with the same height, width, and depth. The number of these kernels is equal to the depth of the output. Also, the depth of each kernel must be equal to the depth of input. For example, if you have RGB data, your first convolutional layer kernels depth must be 3. | ||
| In the context of a convolutional neural network, convolution is a linear operation that involves the multiplication of a set of weights with the input. A convolution layer has formed by 1 or more of these operations that each of them called a kernel. All kernels have the same height, width, and depth. To find the output of the layer, we put the first kernel on the top-right of the input and calculate the output of the kernel, and put it as the first cell of a matrix. After that, we move it to right and calculate again, and put the result in the second cell. When we receive to end of columns, we move the kernel down. we do this until we rich to the end of the image. We do this for all kernels and this is how we make the output of the convolutional layer. |
There was a problem hiding this comment.
Using so many "we"s in the context is a bad smell!
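
To make the sliding-kernel procedure described in the quoted excerpt concrete, here is a minimal NumPy sketch of a single-kernel, stride-1, unpadded convolution (the shapes and names are illustrative; this is not code from the lecture note, and real frameworks implement this far more efficiently):

```python
import numpy as np

def conv2d_single_kernel(image, kernel):
    """Slide one (kh, kw, C) kernel over an (H, W, C) input; return the (H-kh+1, W-kw+1) feature map."""
    h, w, c = image.shape
    kh, kw, kc = kernel.shape
    assert c == kc, "kernel depth must match input depth (e.g. 3 for RGB)"
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):            # move the kernel down, row by row
        for j in range(out.shape[1]):        # move the kernel across the columns
            patch = image[i:i + kh, j:j + kw, :]
            out[i, j] = np.sum(patch * kernel)   # weighted sum over the receptive field
    return out

# A random 8x8 RGB "image" and one 3x3x3 kernel produce a 6x6 feature map.
print(conv2d_single_kernel(np.random.rand(8, 8, 3), np.random.rand(3, 3, 3)).shape)
```

Repeating this loop once per kernel and stacking the results gives an output whose depth equals the number of kernels, as the excerpt states.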
>
> ## Pooling
>
> Similar to the Convolutional Layer, the Pooling layer is responsible for reducing the spatial size of the Convolved Feature.
> While a lot of information is lost in the pooling layer, it also has a number of benefits to the Convolutional neural network. They help to reduce complexity, improve efficiency, and limit the risk of overfitting.

If you want to use capitalization, use it for all words of the phrase: Convolutional Neural Network.
> There are two types of Pooling:
>
> 1. Max Pooling: it returns the maximum value from the portion of the image covered by the Kernel. and also performs as a Noise Suppressant. It discards the noisy activations altogether and also performs de-noising along with dimensionality reduction.
> 2. Average Pooling: it returns the average of all the values from the portion of the image covered by the Kernel. and simply performs dimensionality reduction as a noise suppressing mechanism.
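
For reference, both pooling variants described above exist as ready-made PyTorch layers; a small sketch (the tensor shapes here are illustrative):

```python
import torch
import torch.nn as nn

x = torch.randn(1, 3, 8, 8)               # (batch, channels, height, width)

max_pool = nn.MaxPool2d(kernel_size=2)    # keeps the maximum of each 2x2 window
avg_pool = nn.AvgPool2d(kernel_size=2)    # keeps the average of each 2x2 window

print(max_pool(x).shape)                  # torch.Size([1, 3, 4, 4])
print(avg_pool(x).shape)                  # torch.Size([1, 3, 4, 4])
```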
>
> ## Padding
>
> As you see, after applying convolutional layers, the size of the feature map is always smaller than the input, we have to do something to prevent our feature map from shrinking. This is where we use padding. Layers of zero-value pixels are added to surround the input with zeros so that our feature map will not shrink. By padding, we can control the shrinking of our inputs.

smaller than the input. We have to ...
> Different padding mode is:
> * zeros(Default)
> * reflect
> * replicate or circular

Explain or hyperlink what each of these paddings is.
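
For reference, the listed modes are the `padding_mode` options of `torch.nn.Conv2d`; here is a short sketch of what each one does, using `torch.nn.functional.pad` on a made-up 3×3 input:

```python
import torch
import torch.nn.functional as F

x = torch.arange(1., 10.).reshape(1, 1, 3, 3)        # a 3x3 single-channel "image"
pad = (1, 1, 1, 1)                                   # one pixel on the left, right, top, bottom

zeros     = F.pad(x, pad, mode="constant", value=0)  # "zeros": surround the input with 0s (the default)
reflect   = F.pad(x, pad, mode="reflect")            # mirror interior values across the border
replicate = F.pad(x, pad, mode="replicate")          # repeat the nearest edge value outward
circular  = F.pad(x, pad, mode="circular")           # wrap around, as if the image tiled periodically

print(zeros[0, 0])                                   # the original 3x3 block framed by zeros
```

The same four names can be passed as the `padding_mode` argument of `torch.nn.Conv2d`.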
>
> ## Stride
> As we said before, when you're applying a kernel to the image, you have to move the kernel during the image. But sometimes you prefer to not move one pixel every time and move the kernel more than one pixel. This is stride. The stride specifies how many kernels have to move each time.
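
To illustrate the stride effect described above (and the output-size formula that the note's "Output Size of Conv Layer" section refers to), here is a minimal PyTorch sketch with arbitrary sizes chosen for illustration:

```python
import torch
import torch.nn as nn

x = torch.randn(1, 3, 32, 32)   # (batch, channels, height, width)

# Output size per spatial dimension: floor((in + 2*padding - kernel) / stride) + 1
conv_stride1 = nn.Conv2d(in_channels=3, out_channels=8, kernel_size=3, stride=1, padding=0)
conv_stride2 = nn.Conv2d(in_channels=3, out_channels=8, kernel_size=3, stride=2, padding=0)

print(conv_stride1(x).shape)    # torch.Size([1, 8, 30, 30])  -> (32 - 3)/1 + 1 = 30
print(conv_stride2(x).shape)    # torch.Size([1, 8, 15, 15])  -> floor((32 - 3)/2) + 1 = 15
```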
> ## Table of Content
>
> - [Introduction](#introduction)
> - [CNN Architecture](#CNN-Architecture)

All mentioned layers and functions can be considered as subsections for this section.

This seems unresolved yet. I mean this structure:

- CNN Architecture
  - Fully Connected Layers
  - Conv Layer
  - ...
nimajam41 left a comment:

1. Review and revise your English mistakes.
2. Edit the Table of Contents.
Hello, thank you for your effort.
Should both "Fully Connected Layer" and "Convolutional layer" be corrected, or only the last one?
I didn't quite understand what you mean by "to be synced with other bullets". Could you explain a bit more?
> It sweeps a filter across the entire input but that does not have any weights. Instead, the kernel applies an aggregation function to the values within the receptive field, populating the output array.
> There are two types of Pooling:
>
> 1. Max Pooling: it returns the maximum value from the portion of the image covered by the Kernel. and also performs as a Noise Suppressant. It discards the noisy activations altogether and also performs de-noising along with dimensionality reduction.

Start sentences with capital letters
nimajam41 left a comment:

Thanks for your revision. Most of the issues have been resolved. Try to make the Table of Contents hierarchical, explain what the different paddings are, and edit some wrong usages of grammar.
@nimajam41