Deep Learning systems, a class of multi-layered networks, are capable of automatically learning meaningful hierarchical representations from a variety of structured and unstructured data. Breakthroughs in Deep Learning (DL) allow us to generate new representations, extract knowledge, and draw inferences from raw images, video streams, text and speech, time series, and other complex data types. These powerful deep learning methods are being applied to new and exciting real-world problems in medical diagnostics, factory automation, public safety, environmental sciences, autonomous transportation, military applications, and many other areas. This book presents applications of Deep Learning together with novel architectures and techniques, described across fifteen chapters.
Natural Language Processing (NLP) broadly describes the application of Computer Science and Machine Learning to natural language datasets such as speech or text. The chapter "Language Models for Deep Learning Programming: A Case Study with Keras" explores the application of language models to programming languages and our work in constructing a dataset for the task. More particularly, we focus on Keras, a popular framework for implementing Deep Learning experiments. Our original KerasBERT model, trained on two categories of data (Keras code examples and the Keras API reference), has since been expanded by adding more data and re-training the language model. This chapter documents the addition of Keras GitHub examples, Kaggle notebooks containing Keras code, Medium articles describing how to use Keras, and StackOverflow questions about Keras. With these new data sources, we present a new domain-generalization analysis as well as independent and identically distributed (i.i.d.) test-set losses. We qualitatively evaluate how well KerasBERT learns the Keras Deep Learning framework through cloze-test evaluation, and we examine properties of these cloze tests such as mask positioning and prompt paraphrasing. KerasBERT is an 80-million-parameter RoBERTa model, which we compare to the zero-shot learning capability of the 6-billion-parameter GPT-Neo model using a suite of cloze tests crafted from the Keras documentation. We find some exciting completions showing that KerasBERT is a promising direction for question answering and schema-free database querying. In documenting the reuse of KerasBERT and the integration of additional data sources, we have tripled the size of the original dataset and identified five main sources of Keras information.
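To give a sense of what cloze-test evaluation of a masked language model looks like in practice, here is a minimal sketch using the Hugging Face transformers fill-mask pipeline. The "roberta-base" checkpoint and the prompt are illustrative placeholders, not the chapter's actual KerasBERT model or its test suite.

# Minimal cloze-test sketch with a masked language model.
# Assumptions: the "roberta-base" checkpoint stands in for KerasBERT,
# and the prompt is an invented Keras-documentation-style sentence.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="roberta-base")

# One masked token plays the role of the blank in a cloze test.
prompt = "The <mask> layer applies the rectified linear unit activation function."

for candidate in fill_mask(prompt, top_k=5):
    # Each candidate carries the predicted token and its probability score.
    print(f"{candidate['token_str'].strip():>15}  score={candidate['score']:.3f}")

A domain-adapted model such as KerasBERT would be evaluated by how often the correct documentation term appears among the top-ranked completions across many such prompts.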
Time-series classification (TSC) spans many real-world applications in domains ranging from healthcare and cybersecurity to manufacturing. While deep semi-supervised learning has gained much attention in computer vision, limited research exists on its applicability in the time-series domain. In this work, we investigate the transferability of state-of-the-art deep semi-supervised models from image to time-series classification. We discuss the necessary model adaptations, in particular an appropriate backbone architecture and the use of tailored data-augmentation strategies. Based on these adaptations, we explore the potential of deep semi-supervised learning for time-series classification by evaluating our methods on large public time-series classification problems with varying amounts of labeled samples. We perform extensive comparisons under a decidedly realistic and appropriate evaluation scheme with a unified reimplementation of all algorithms considered, which has so far been lacking in the field. Further, a series of experiments sheds light on the effect of different data-augmentation strategies and backbone architectures in this context. We find that these transferred semi-supervised models show substantial performance gains over strong supervised, semi-supervised, and self-supervised alternatives, especially in scenarios with very few labeled samples.
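As an illustration of the kind of tailored time-series augmentations such semi-supervised methods rely on, the following sketch shows two common transforms, jittering and magnitude scaling, often used to produce weak and strong views for consistency training. The function names, parameter values, and tensor shapes are assumptions for illustration, not the chapter's exact implementation.

# Illustrative time-series augmentations (not the chapter's exact code).
import numpy as np

def jitter(x: np.ndarray, sigma: float = 0.03) -> np.ndarray:
    """Add Gaussian noise to every time step (a weak augmentation)."""
    return x + np.random.normal(0.0, sigma, size=x.shape)

def magnitude_scale(x: np.ndarray, sigma: float = 0.1) -> np.ndarray:
    """Multiply each series/channel by a random factor (a stronger augmentation)."""
    factors = np.random.normal(1.0, sigma, size=(x.shape[0], 1, x.shape[2]))
    return x * factors

# A batch of 8 univariate series with 128 time steps: (batch, length, channels).
batch = np.random.randn(8, 128, 1)
weak_view = jitter(batch)
strong_view = magnitude_scale(jitter(batch))
print(weak_view.shape, strong_view.shape)  # (8, 128, 1) (8, 128, 1)

In a consistency-based scheme, the model's prediction on the weak view of an unlabeled series would serve as a target for its prediction on the strong view.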