A deluge of digital content is generated daily by web-based platforms and sensors, capturing digital traces of communication and connection as well as complex states of society, the economy, and the world. Emerging deep learning methods enable the integration and analysis of these complex data to address real-world problems by designing and discovering successful solutions.
The real power of deep learning is unleashed by thinking with deep learning to reformulate and solve problems that traditional machine learning methods cannot address. These include fusing diverse data, such as text, images, tabular, and network data, into integrated and comprehensive digital doubles of the scenarios you want to model; generating promising recommendations; and creating AI assistants that radically augment an analyst's or a system's intelligence. This book uses Python and the widely popular PyData ecosystem to demonstrate all motivating examples, and it includes working code, accompanying exercises, relevant datasets, and additional analytics and visualizations that facilitate interpretation, communication, and decision making.
About the Author: James Evans is Professor of Sociology and Computation, Founding Faculty Director of the Computational Social Science program and Knowledge Lab at the University of Chicago, and an external faculty member at the Santa Fe Institute. His research uses large-scale data, machine learning, and generative models to understand how collectives think, what they know, and how to think better. His work involves inquiry into the emergence of ideas, shared patterns of reasoning, and processes of attention, communication, agreement, and certainty. Thinking and knowing collectives such as science, Wikipedia, or the Web involve complex networks of diverse human and machine intelligences that collaborate and compete to achieve overlapping aims. Evans' work connects the interaction of these agents with the knowledge they produce and its value for themselves and the system. Evans designs observatories for understanding that fuse data from text, images, and other sensors with results from interactive crowd-sourcing and online experiments. Much of Evans' work has investigated modern science and technology to identify collective biases, generate new leads that take these biases into account, and imagine alternative discovery regimes. He has identified R&D institutions that generate more and less novelty, precision, density, and robustness. Evans also explores thinking and knowing in other domains ranging from political ideology to popular culture. His research is funded by the National Science Foundation, the National Institutes of Health, DARPA, Facebook, IBM, the Sloan Foundation, Jump! Trading, and other sources; has been published in Nature, Science, PNAS, and top social science and computer science journals and conferences; and has been featured in the New York Times, the Economist, Atlantic Monthly, Wired, NPR, and other outlets around the world. He received his Ph.D. from Stanford University.
Bhargav is an Analytics Fellow at the Knowledge Lab, a teaching assistant in the Sociology department, and a graduate student at the University of Chicago. He works at the intersection of machine learning and the social sciences, has previously written a book on natural language processing and computational linguistics in Python, and has co-authored articles published in the Journal of Machine Learning Research and in Cognition. Before moving to Chicago, he spent two years working in statistical machine learning and human-computer interaction as a Research Engineer at INRIA, France. He has spoken at over a dozen Python conferences across Europe, Asia, and North America, and is a maintainer of two Python scientific computing packages and a contributor to multiple other open-source Python libraries. While an undergraduate student at BITS Pilani, Goa Campus, in India, he was selected as a Google Summer of Code student with the organization NumFOCUS, working on Gensim.