Torrent details for "Macedo D. Enhancing Deep Learning Performance...Linear Unit 2022 [andryold1]"

Torrent details

Language: English
Total Size: 6.07 MB
Info Hash: c004f51247cf7b5b0c8b6877ec87c641c8b2ab48
Added: 25-09-2022 14:07
Views: 97
Seeds: 1
Leechers: 0
Completed: 184

Description
Textbook in PDF format

Recently, Deep Learning has had a significant impact on computer vision, speech recognition, and natural language understanding. Despite these remarkable advances, recent Deep Learning (DL) performance gains have been modest and usually rely on increasing the depth of the models, which often requires more computational resources such as processing time and memory usage. To tackle this problem, we turned our attention to the interworking between activation functions and batch normalization, which is virtually mandatory nowadays. In this work, we propose the activation function Displaced Rectifier Linear Unit (DReLU), conjecturing that extending the identity function of ReLU into the third quadrant enhances compatibility with batch normalization. Moreover, we used statistical tests to compare the impact of distinct activation functions (ReLU, LReLU, PReLU, ELU, and DReLU) on the learning speed and test accuracy of VGG and Residual Network state-of-the-art models. These convolutional neural networks were trained on CIFAR-10 and CIFAR-100, the most commonly used Deep Learning computer vision datasets. The results showed that DReLU sped up learning in all models and datasets. In addition, statistically significant performance assessments (p < 0.05) showed that DReLU enhanced the test accuracy obtained by ReLU in all scenarios. Furthermore, DReLU showed better test accuracy than any other tested activation function in all experiments, with one exception.
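The idea of "extending the identity function of ReLU into the third quadrant" can be sketched numerically. The snippet below is a minimal illustration, not the book's reference implementation: it assumes DReLU keeps the identity down to a displacement -delta and clamps below it, and the value delta=0.05 is an illustrative assumption.

```python
import numpy as np

def relu(x):
    # Standard ReLU: identity for x > 0, zero elsewhere.
    return np.maximum(x, 0.0)

def drelu(x, delta=0.05):
    # Displaced ReLU (sketch): the identity segment extends into the
    # third quadrant down to -delta, then the output is clamped at -delta.
    # delta=0.05 is an assumed illustrative value.
    return np.maximum(x, -delta)

x = np.array([-1.0, -0.05, -0.01, 0.0, 0.5])
print(relu(x))   # [0.  0.  0.  0.  0.5]
print(drelu(x))  # [-0.05 -0.05 -0.01  0.    0.5 ]
```

Unlike ReLU, this function can emit small negative activations, which is the property the abstract conjectures interacts favorably with batch normalization.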
Deep Learning is based on the simple principle of hierarchical composition of trivial uniform procedures: the artificial neuron computation. This simplicity produces a considerable advantage over classical Machine Learning approaches. In fact, this characteristic is responsible for the extreme scalability that Deep Learning systems are known to present. Hence, if enough training time and data are available, the massive scalability of Deep Learning approaches allows them to achieve lower error rates than traditional Machine Learning alternatives.
