Preprocessing data¶
hep_ml.preprocessing contains useful operations with data. Algorithms implemented here follow sklearn conventions for transformers and inherit from BaseEstimator and TransformerMixin.
A minor difference from sklearn is that transformations preserve feature names in DataFrames (when possible).
See also: sklearn.preprocessing for other useful data transformations.
Examples¶
Transformers may be used like any other sklearn transformer, by training and applying manually:
>>> from hep_ml.preprocessing import IronTransformer
>>> transformer = IronTransformer().fit(trainX)
>>> new_trainX = transformer.transform(trainX)
>>> new_testX = transformer.transform(testX)
Apart from this, transformers may be plugged as part of sklearn.Pipeline:
>>> from sklearn.pipeline import Pipeline
>>> from hep_ml.nnet import SimpleNeuralNetwork
>>> clf = Pipeline([('pre', IronTransformer()),
>>>                 ('nnet', SimpleNeuralNetwork())])
Also, neural networks support a special argument scaler; you can pass any transformer there:
>>> clf = SimpleNeuralNetwork(layers=[10, 8], scaler=IronTransformer())
- class hep_ml.preprocessing.BinTransformer(max_bins=128)[source]¶
Bases: sklearn.base.BaseEstimator, sklearn.base.TransformerMixin
BinTransformer transforms all features (which are expected to be numerical) to small integers.
This simple transformation, while losing part of the information, can increase the speed of some algorithms.
- Parameters
max_bins (int) – maximal number of bins along each axis.
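The idea behind this transformation can be sketched with plain numpy: split the feature range into bins using percentiles of the training data, then replace each value by its bin index. This is only an illustrative sketch of the concept (the function name `bin_sketch` and the percentile-based binning are assumptions for illustration, not the library's actual implementation):

```python
import numpy


def bin_sketch(train_column, test_column, max_bins=128):
    # Hypothetical sketch of the idea behind BinTransformer:
    # choose bin edges from percentiles of the training data, then
    # encode each value as the index of its bin (a small integer).
    percentiles = numpy.linspace(0, 100, max_bins + 1)[1:-1]
    edges = numpy.unique(numpy.percentile(train_column, percentiles))
    return numpy.searchsorted(edges, test_column)


rng = numpy.random.RandomState(0)
train = rng.normal(size=5000)
bins = bin_sketch(train, train, max_bins=64)
# bins now contains integers in [0, 64), regardless of the original scale
```

Percentile-based edges put roughly the same number of training points into each bin, so no bin is wasted on sparse tails of the distribution.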
- class hep_ml.preprocessing.IronTransformer(max_points=10000, symmetrize=False)[source]¶
Bases: sklearn.base.BaseEstimator, sklearn.base.TransformerMixin
IronTransformer fits a one-dimensional transformation for each feature.
After applying this transformation, the distribution of each feature becomes uniform. This is very handy when working with features of different scales and with complex distributions.
The name of transformer comes from https://en.wikipedia.org/wiki/Clothes_iron, which makes anything flat, being applied with enough pressure :)
Recommended for neural networks and other algorithms sensitive to the scale of features.
- Parameters
max_points (int) – keep at most this many points in the monotonic transformation.
symmetrize (bool) – if True, the resulting distribution is uniform on [-1, 1]; otherwise on [0, 1].
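The flattening idea can be sketched with plain numpy: map each value through the empirical CDF of the training data, which makes the transformed training distribution (approximately) uniform. The function name `iron_sketch` below is a hypothetical illustration of the concept, not the library's actual implementation:

```python
import numpy


def iron_sketch(train_column, test_column, symmetrize=False):
    # Hypothetical sketch of the idea behind IronTransformer:
    # interpolate each value through the empirical CDF of the
    # training data, producing a monotonic map to [0, 1].
    order = numpy.argsort(train_column)
    sorted_values = train_column[order]
    cdf = numpy.arange(1, len(train_column) + 1) / float(len(train_column))
    transformed = numpy.interp(test_column, sorted_values, cdf)
    if symmetrize:
        transformed = 2 * transformed - 1  # map [0, 1] onto [-1, 1]
    return transformed


rng = numpy.random.RandomState(42)
train = rng.exponential(scale=10., size=10000)
flat = iron_sketch(train, train)
# the transformed training column is close to uniform on [0, 1]
```

Because the map is monotonic, the order of values within each feature is preserved; only the spacing between them changes.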