Keras Core Layers

Dense
The Dense layer is the regular densely-connected neural network layer. It implements the operation output = activation(dot(input, kernel) + bias), where activation is the element-wise activation function passed via the activation argument, kernel is a weights matrix created by the layer, and bias is a bias vector created by the layer (applicable only if use_bias is True). Note that if the input to the layer has a rank greater than 2, it is flattened prior to the initial dot product with the kernel.

Example
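A minimal sketch of Dense in a Sequential model (the layer sizes are illustrative):

from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
# The first layer needs to know its input shape: this model takes
# input arrays of shape (*, 16) and outputs arrays of shape (*, 32).
model.add(Dense(32, input_shape=(16,)))
# After the first layer, the input size is inferred automatically:
model.add(Dense(32))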
Arguments
units: A positive integer; the dimensionality of the output space.
activation: The activation function to use. If nothing is specified, no activation is applied (i.e., the "linear" activation a(x) = x).
use_bias: A Boolean specifying whether the layer uses a bias vector.
kernel_initializer: Initializer for the kernel weights matrix.
bias_initializer: Initializer for the bias vector.
kernel_regularizer: Regularizer function applied to the kernel weights matrix.
bias_regularizer: Regularizer function applied to the bias vector.
activity_regularizer: Regularizer function applied to the output of the layer (its activation).
kernel_constraint: Constraint function applied to the kernel weights matrix.
bias_constraint: Constraint function applied to the bias vector.

Input shape
The layer accepts an nD tensor of shape (batch_size, ..., input_dim). The most common situation is a 2D input with shape (batch_size, input_dim).

Output shape
It outputs an nD tensor of shape (batch_size, ..., units). For instance, for a 2D input of shape (batch_size, input_dim), the output has shape (batch_size, units).

Activation
The Activation layer applies an activation function to its input.
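For instance, applying a ReLU as a standalone layer (a minimal sketch; the layer sizes are illustrative):

from keras.models import Sequential
from keras.layers import Dense, Activation

model = Sequential()
model.add(Dense(64, input_shape=(16,)))
# Apply ReLU element-wise to the Dense output; this is equivalent
# to passing activation='relu' to the Dense layer itself.
model.add(Activation('relu'))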
Arguments
activation: The name of the activation function to use, or alternatively a Theano or TensorFlow operation.

Input shape
Arbitrary. Use the keyword argument input_shape (a tuple of integers that does not include the samples axis) when using this layer as the first layer in a model.

Output shape
The same shape as the input.

Dropout
The Dropout layer applies dropout to the input: it randomly sets a fraction rate of the input units to 0 at each update during training time, which helps prevent overfitting.

Arguments
rate: A float between 0 and 1; the fraction of the input units to drop.
noise_shape: A 1D integer tensor representing the shape of the binary dropout mask that will be multiplied with the input.
seed: A Python integer to use as the random seed.
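For instance, dropping half of the activations of a Dense layer during training (a minimal sketch; the layer sizes are illustrative):

from keras.models import Sequential
from keras.layers import Dense, Dropout

model = Sequential()
model.add(Dense(64, input_shape=(20,), activation='relu'))
# rate=0.5: half of the 64 units are set to 0 at each training update.
model.add(Dropout(0.5))
model.add(Dense(10, activation='softmax'))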
Flatten
The Flatten layer flattens the input; it does not affect the batch size.

Arguments
data_format: A string, one of 'channels_last' (default) or 'channels_first'. It is used to preserve weight ordering when switching a model from one data format to another.

Example
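A minimal sketch of flattening a convolutional feature map (the layer sizes are illustrative):

from keras.models import Sequential
from keras.layers import Conv2D, Flatten

model = Sequential()
model.add(Conv2D(64, (3, 3), input_shape=(32, 32, 3), padding='same'))
# model.output_shape == (None, 32, 32, 64)
model.add(Flatten())
# model.output_shape == (None, 65536)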
Input
The Input layer makes use of Input() to instantiate a Keras tensor, which is simply a tensor object from the underlying backend (Theano, TensorFlow, or CNTK), augmented with specific attributes that let us build a Keras model just from the inputs and outputs. For instance, if m, n, and o are Keras tensors, we can write model = Model(input=[m, n], output=o). The added Keras attributes are _keras_shape, an integer shape tuple propagated via Keras-side shape inference, and _keras_history, the last layer applied to the tensor; the entire layer graph can be retrieved from that layer, recursively.

Arguments
shape: A shape tuple (integers), not including the batch size. For instance, shape=(32,) indicates that the expected input will be batches of 32-dimensional vectors.
batch_shape: A shape tuple (integers) that includes the batch size. For instance, batch_shape=(10, 32) indicates that the expected input will be batches of ten 32-dimensional vectors.
name: An optional name string for the layer; it should be unique in a model.
dtype: The data type expected by the input, as a string ('float32', 'float64', 'int32', ...).
sparse: A Boolean specifying whether the placeholder to be created is sparse.
tensor: An optional existing tensor to wrap into the Input layer.
Returns
A tensor.

Example
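A minimal sketch of a logistic regression built directly from an Input tensor (the sizes are illustrative):

from keras.layers import Input, Dense
from keras.models import Model

x = Input(shape=(32,))                  # a placeholder for batches of 32-d vectors
y = Dense(16, activation='softmax')(x)
model = Model(x, y)                     # a model built from inputs and outputs only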
Reshape
The Reshape layer reshapes an output to a certain shape.

Arguments
target_shape: The target shape, a tuple of integers that does not include the batch axis.

Input shape
Arbitrary, although all dimensions in the input shape must be fixed. Use the keyword argument input_shape (a tuple of integers that does not include the samples axis) when using this layer as the first layer in a model.

Output shape
(batch_size,) + target_shape

Example
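A minimal sketch (the shapes are illustrative):

from keras.models import Sequential
from keras.layers import Reshape

model = Sequential()
# Reshape 12-dimensional vectors into (3, 4) matrices:
model.add(Reshape((3, 4), input_shape=(12,)))
# model.output_shape == (None, 3, 4), where None is the batch dimension
# -1 can be used as a wildcard for a single unknown dimension:
model.add(Reshape((-1, 2, 2)))
# model.output_shape == (None, 3, 2, 2)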
Permute
The Permute layer permutes the dimensions of the input according to a given pattern. It is useful, for instance, for connecting RNNs and convnets together.

Example
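A minimal sketch (the input shape is illustrative):

from keras.models import Sequential
from keras.layers import Permute

model = Sequential()
# Swap the first and second (non-batch) dimensions of the input:
model.add(Permute((2, 1), input_shape=(10, 64)))
# model.output_shape == (None, 64, 10), where None is the batch dimension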
Arguments
dims: A tuple of integers; the permutation pattern, which does not include the samples dimension. Indexing starts at 1. For instance, (2, 1) permutes the first and second dimensions of the input.

Input shape
Arbitrary. Use the keyword argument input_shape (a tuple of integers that does not include the samples axis) when using this layer as the first layer in a model.

Output shape
The same as the input shape, but with the dimensions re-ordered according to the specified pattern.

RepeatVector
The RepeatVector layer repeats the input n times.

Example
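A minimal sketch (the layer sizes are illustrative):

from keras.models import Sequential
from keras.layers import Dense, RepeatVector

model = Sequential()
model.add(Dense(32, input_dim=32))
# model.output_shape == (None, 32)
model.add(RepeatVector(3))
# model.output_shape == (None, 3, 32)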
Arguments
n: An integer; the repetition factor.

Input shape
A 2D tensor of shape (num_samples, features).

Output shape
A 3D tensor of shape (num_samples, n, features).

Lambda
The Lambda layer wraps an arbitrary expression as a Layer object.

Examples
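A minimal sketch wrapping the expression x ** 2 (the input shape is illustrative):

from keras.models import Sequential
from keras.layers import Lambda

model = Sequential()
# Add a layer that squares its input element-wise:
model.add(Lambda(lambda x: x ** 2, input_shape=(10,)))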
Arguments
function: The function to be evaluated; it takes the input tensor as its first argument.
output_shape: The expected output shape from the function; it can be auto-inferred when using TensorFlow or CNTK.
mask: Either None (indicating no masking) or a tensor indicating the input mask.
arguments: An optional dictionary of keyword arguments to pass to the function.

Input shape
Arbitrary. Use the keyword argument input_shape (a tuple of integers that does not include the samples axis) when using this layer as the first layer in a model.

Output shape
Either specified by the output_shape argument, or auto-inferred when using TensorFlow or CNTK.

ActivityRegularization
The ActivityRegularization layer applies an update to the cost function based on the input activity.

Arguments
l1: The L1 regularization factor (a positive float).
l2: The L2 regularization factor (a positive float).

Input shape
Arbitrary. Use the keyword argument input_shape (a tuple of integers that does not include the samples axis) when using this layer as the first layer in a model.

Output shape
The same shape as the input.
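For instance, penalizing large activations of a Dense layer (a minimal sketch; the sizes and the l1 factor are illustrative):

from keras.models import Sequential
from keras.layers import Dense, ActivityRegularization

model = Sequential()
model.add(Dense(64, input_shape=(16,), activation='relu'))
# Add an L1 penalty on the previous layer's activations to the training loss:
model.add(ActivityRegularization(l1=0.01))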
Masking
The Masking layer masks a sequence by using a mask value to skip timesteps. For a given sample, if all the features at a timestep are equal to mask_value, then that timestep is masked (skipped) in all downstream layers, as long as they support masking. An exception will be raised if any downstream layer does not support masking but still receives such an input mask.

Example
Let x be a numpy data array of shape (samples, timesteps, features) to be fed to an LSTM layer. Suppose you want to mask sample #0 at timestep #3 and sample #2 at timestep #5, because you lack features for these sample timesteps. You can then do the following:
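A minimal sketch of this setup (the array sizes are illustrative):

import numpy as np
from keras.models import Sequential
from keras.layers import Masking, LSTM

samples, timesteps, features = 32, 10, 8
x = np.random.random((samples, timesteps, features))

# Zero out the sample timesteps that lack features...
x[0, 3, :] = 0.
x[2, 5, :] = 0.

model = Sequential()
# ...and skip every timestep whose features all equal 0:
model.add(Masking(mask_value=0., input_shape=(timesteps, features)))
model.add(LSTM(32))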
Arguments
mask_value: Either None or the mask value to skip.

SpatialDropout1D
This is the spatial 1D version of dropout. It performs the same function as Dropout, but instead of dropping individual elements, it drops entire 1D feature maps. If adjacent frames within a feature map are strongly correlated, as is usually the case in early convolution layers, then regular dropout will not regularize the activations and will otherwise just reduce the effective learning rate. In this case, SpatialDropout1D, which promotes independence between feature maps, should be used instead.

Arguments
rate: A float between 0 and 1; the fraction of the input units to drop.
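For instance, after a 1D convolution (a minimal sketch; the sizes are illustrative):

from keras.models import Sequential
from keras.layers import Conv1D, SpatialDropout1D

model = Sequential()
model.add(Conv1D(64, 3, input_shape=(100, 16)))
# Drop entire 1D feature maps (channels) rather than individual elements:
model.add(SpatialDropout1D(0.5))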
Input shape
A 3D tensor of shape (samples, timesteps, channels).

Output shape
The same shape as the input.

SpatialDropout2D
This is the spatial 2D version of dropout. It performs the same function as Dropout, but it drops entire 2D feature maps instead of individual elements. If adjacent pixels within a feature map are strongly correlated, as is usually the case in early convolution layers, then regular dropout will not regularize the activations and will otherwise just reduce the effective learning rate. In this case, SpatialDropout2D, which promotes independence between feature maps, should be used instead.

Arguments
rate: A float between 0 and 1; the fraction of the input units to drop.
data_format: Either 'channels_first' or 'channels_last'. In 'channels_first' mode, the channels dimension is at index 1; in 'channels_last' mode, it is at index 3.
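For instance, after a 2D convolution (a minimal sketch; the sizes are illustrative):

from keras.models import Sequential
from keras.layers import Conv2D, SpatialDropout2D

model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape=(64, 64, 3)))
# Drop whole 2D feature maps instead of individual pixels:
model.add(SpatialDropout2D(0.25))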
Input shape
A 4D tensor of shape (samples, channels, rows, cols) if data_format='channels_first', or (samples, rows, cols, channels) if data_format='channels_last'.

Output shape
The same shape as the input.

SpatialDropout3D
This is the spatial 3D version of dropout. It performs the same function as Dropout, but it drops entire 3D feature maps instead of individual elements. If adjacent voxels within a feature map are strongly correlated, as is usually the case in early convolution layers, then regular dropout will not regularize the activations and will otherwise just reduce the effective learning rate. In this case, SpatialDropout3D, which promotes independence between feature maps, should be used instead.

Arguments
rate: A float between 0 and 1; the fraction of the input units to drop.
data_format: Either 'channels_first' or 'channels_last'. In 'channels_first' mode, the channels dimension is at index 1; in 'channels_last' mode, it is at index 4.
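For instance, after a 3D convolution (a minimal sketch; the sizes are illustrative):

from keras.models import Sequential
from keras.layers import Conv3D, SpatialDropout3D

model = Sequential()
model.add(Conv3D(16, (3, 3, 3), input_shape=(16, 16, 16, 1)))
# Drop complete 3D feature maps rather than individual voxels:
model.add(SpatialDropout3D(0.25))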
Input shape
A 5D tensor of shape (samples, channels, dim1, dim2, dim3) if data_format='channels_first', or (samples, dim1, dim2, dim3, channels) if data_format='channels_last'.

Output shape
The same shape as the input.