Feature Engineering for Machine Learning: Principles and Techniques for Data Scientists, 1st Edition, by Alice Zheng and Amanda Casari

Product details:
ISBN-10: 1491953241
ISBN-13: 978-1491953242
Authors: Alice Zheng, Amanda Casari
Feature engineering is a crucial step in the machine-learning pipeline, yet the topic is rarely examined on its own. With this practical book, you'll learn techniques for extracting and transforming features (the numeric representations of raw data) into formats suitable for machine-learning models. Each chapter guides you through a single data problem, such as how to represent text or image data. Together, these examples illustrate the main principles of feature engineering.
Rather than simply teaching these principles, authors Alice Zheng and Amanda Casari focus on practical application, with exercises throughout the book. The closing chapter brings everything together by tackling a real-world, structured dataset with several feature-engineering techniques. Code examples use Python packages including NumPy, Pandas, scikit-learn, and Matplotlib.
You’ll examine:
- Feature engineering for numeric data: filtering, binning, scaling, log transforms, and power transforms
- Natural text techniques: bag-of-words, n-grams, and phrase detection
- Frequency-based filtering and feature scaling for eliminating uninformative features
- Encoding techniques for categorical variables, including feature hashing and bin-counting
- Model-based feature engineering with principal component analysis
- The concept of model stacking, using k-means as a featurization technique
- Image feature extraction with manual and deep-learning techniques
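As a small taste of the techniques listed above, here is a minimal sketch using NumPy and scikit-learn, the packages the book's examples are built on. The toy arrays and column values are made up for this illustration and are not drawn from the book:

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.preprocessing import MinMaxScaler, OneHotEncoder

# Min-max scaling: rescale a numeric column to the [0, 1] range.
X = np.array([[1.0], [5.0], [9.0]])
scaled = MinMaxScaler().fit_transform(X)  # each value mapped to (x - min) / (max - min)

# Log transform: compress heavy-tailed counts with log(1 + x).
counts = np.array([0, 9, 99])
log_counts = np.log1p(counts)

# One-hot encoding: expand a categorical column into indicator columns.
colors = np.array([["red"], ["green"], ["red"]])
onehot = OneHotEncoder().fit_transform(colors).toarray()  # one column per category

# Bag-of-words: turn short texts into flat term-count vectors.
docs = ["the quick brown fox", "the lazy brown dog"]
bow = CountVectorizer().fit_transform(docs).toarray()  # one column per vocabulary word
```

Each transform turns raw values into a numeric matrix that a model can consume directly; the book devotes a chapter to when and why each one helps.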
Table of contents:
- preface
introduction
conventions used in this book
using code examples
o’reilly safari
how to contact us
acknowledgments
special thanks from alice
special thanks from amanda
- the machine learning pipeline
data
tasks
models
features
model evaluation
- fancy tricks with simple numbers
scalars, vectors, and spaces
dealing with counts
binarization
quantization or binning
log transformation
log transform in action
power transforms: generalization of the log transform
feature scaling or normalization
min-max scaling
standardization (variance scaling)
ℓ2 normalization
interaction features
feature selection
summary
bibliography
- text data: flattening, filtering, and chunking
bag-of-x: turning natural text into flat vectors
bag-of-words
bag-of-n-grams
filtering for cleaner features
stopwords
frequency-based filtering
stemming
atoms of meaning: from words to n-grams to phrases
parsing and tokenization
collocation extraction for phrase detection
summary
bibliography
- the effects of feature scaling: from bag-of-words to tf-idf
tf-idf: a simple twist on bag-of-words
putting it to the test
creating a classification dataset
scaling bag-of-words with tf-idf transformation
classification with logistic regression
tuning logistic regression with regularization
deep dive: what is happening?
summary
bibliography
- categorical variables: counting eggs in the age of robotic chickens
encoding categorical variables
one-hot encoding
dummy coding
effect coding
pros and cons of categorical variable encodings
dealing with large categorical variables
feature hashing
bin counting
summary
bibliography
- dimensionality reduction: squashing the data pancake with pca
intuition
derivation
linear projection
variance and empirical variance
principal components: first formulation
principal components: matrix-vector formulation
general solution of the principal components
transforming features
implementing pca
pca in action
whitening and zca
considerations and limitations of pca
use cases
summary
bibliography
- nonlinear featurization via k-means model stacking
k-means clustering
clustering as surface tiling
k-means featurization for classification
alternative dense featurization
pros, cons, and gotchas
summary
bibliography
- automating the featurizer: image feature extraction and deep learning
the simplest image features (and why they don’t work)
manual feature extraction: sift and hog
image gradients
gradient orientation histograms
sift architecture
learning image features with deep neural networks
fully connected layers
convolutional layers
rectified linear unit (relu) transformation
response normalization layers
pooling layers
structure of alexnet
summary
bibliography
- back to the feature: building an academic paper recommender
item-based collaborative filtering
first pass: data import, cleaning, and feature parsing
academic paper recommender: naive approach
second pass: more engineering and a smarter model


