Autoencoder Neural Networks
by Chun Chet Tan

Ship to Me
In Stock.
FREE Shipping for Club Members

Overview

Autoencoders are feedforward neural networks that can have more than one hidden layer. These networks are trained to reconstruct their input data at the output layer. Because the hidden layer is smaller than the input, the input data is reduced to a lower-dimensional code space at that layer. Training a multilayer autoencoder is difficult, however, because the weights in the deep hidden layers are hard to optimize. This work focuses on the characteristics, training, and performance evaluation of autoencoders; the concepts of stacking and the Restricted Boltzmann Machine are also discussed in detail. Two datasets, the ORL face dataset and the MNIST handwritten digit dataset, are employed in the experiments, and the performance of the autoencoders is compared with that of PCA. It is also shown that autoencoders can be used for image compression; compression efficiency is studied on the DDSM mammogram dataset. Because image patches are used for training, mammograms of different sizes can be compressed and decompressed.
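The bottleneck idea described above can be sketched in a few lines of NumPy: a single-hidden-layer autoencoder whose code layer is smaller than the input, trained by gradient descent to reconstruct its input. This is a minimal illustration only, not the book's implementation; the toy dataset, layer sizes, and hyperparameters here are invented for the example.

```python
import numpy as np

# Minimal single-hidden-layer autoencoder (sketch, not the book's code).
# The code (hidden) layer is smaller than the input, so the network must
# learn a compressed representation in order to reconstruct the input.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_autoencoder(X, n_code=2, lr=0.5, epochs=2000):
    n_in = X.shape[1]
    W1 = rng.normal(0, 0.1, (n_in, n_code))   # encoder weights
    b1 = np.zeros(n_code)
    W2 = rng.normal(0, 0.1, (n_code, n_in))   # decoder weights
    b2 = np.zeros(n_in)
    for _ in range(epochs):
        H = sigmoid(X @ W1 + b1)              # code layer activations
        Y = sigmoid(H @ W2 + b2)              # reconstruction
        err = Y - X
        # Backpropagate the squared reconstruction error through both layers.
        dY = err * Y * (1 - Y)
        dH = (dY @ W2.T) * H * (1 - H)
        W2 -= lr * (H.T @ dY) / len(X)
        b2 -= lr * dY.mean(axis=0)
        W1 -= lr * (X.T @ dH) / len(X)
        b1 -= lr * dH.mean(axis=0)
    return W1, b1, W2, b2

# Toy data: 4-dimensional inputs with only 2 underlying degrees of freedom,
# so a 2-unit code layer can capture them.
X = rng.random((64, 2))
X = np.hstack([X, X])                          # duplicated columns -> redundancy

W1, b1, W2, b2 = train_autoencoder(X)
recon = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
mse = np.mean((recon - X) ** 2)
print(f"reconstruction MSE: {mse:.4f}")
```

A deep (stacked) autoencoder repeats this encoder/decoder pattern layer by layer, which is where the optimization difficulty the blurb mentions arises and where RBM-based pretraining helps.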

This item is Non-Returnable

Details

  • ISBN-13: 9783838309460
  • ISBN-10: 3838309464
  • Publisher: LAP Lambert Academic Publishing
  • Publish Date: August 2009
  • Dimensions: 9 x 6 x 0.23 inches
  • Shipping Weight: 0.33 pounds
  • Page Count: 96
