Preprocessing data

hep_ml.preprocessing contains useful operations with data. Algorithms implemented here follow sklearn conventions for transformers and inherit from BaseEstimator and TransformerMixin.

A minor difference from sklearn is that these transformations preserve the names of features in DataFrames (where possible).

See also: sklearn.preprocessing for other useful data transformations.

Examples

Transformers may be used like any other sklearn transformer, by training and applying manually:

>>> from hep_ml.preprocessing import IronTransformer
>>> transformer = IronTransformer().fit(trainX)
>>> new_trainX = transformer.transform(trainX)
>>> new_testX = transformer.transform(testX)

Apart from this, transformers may be plugged in as part of a sklearn Pipeline:

>>> from sklearn.pipeline import Pipeline
>>> from hep_ml.nnet import SimpleNeuralNetwork
>>> clf = Pipeline([('pre', IronTransformer()),
...                 ('nnet', SimpleNeuralNetwork())])

Also, neural networks support a special argument 'scaler'. You can pass any transformer there:

>>> clf = SimpleNeuralNetwork(layers=[10, 8], scaler=IronTransformer())
class hep_ml.preprocessing.BinTransformer(max_bins=128)[source]

Bases: sklearn.base.BaseEstimator, sklearn.base.TransformerMixin

Bin transformer transforms all features (which are expected to be numerical) to small integers.

This simple transformation, while losing part of the information, can increase the speed of some algorithms.

Parameters

max_bins (int) – maximal number of bins along each axis.

fit(X, y=None, sample_weight=None)[source]

Prepare the transformation rule: compute bin edges.

Parameters
  • X – pandas.DataFrame or numpy.array with data

  • y – labels, ignored

  • sample_weight – weights, ignored

Returns

self

transform(X, extend_to=1)[source]
Parameters
  • X – pandas.DataFrame or numpy.array with data

  • extend_to (int) – extends the number of samples so that it is divisible by extend_to

Returns

numpy.array with transformed features (names of columns are not preserved); dtype is 'int8' for space efficiency.
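To illustrate the idea behind such a binning (a minimal numpy sketch, not hep_ml's actual implementation), one feature can be mapped to small integers via quantile-based edges; `quantile_bin` below is a hypothetical helper, not part of the library:

```python
import numpy as np

def quantile_bin(column, max_bins=128):
    """Map a numeric column to small integer bin indices.
    A sketch of the binning idea, not BinTransformer's real code."""
    # Up to max_bins - 1 interior edges taken from quantiles of the data.
    quantiles = np.linspace(0, 1, max_bins + 1)[1:-1]
    edges = np.unique(np.quantile(column, quantiles))
    # searchsorted assigns each value the index of the bin it falls into;
    # int8 keeps the result compact, matching the transform's output dtype.
    return np.searchsorted(edges, column, side='left').astype('int8')

column = np.array([0.1, 3.5, 2.2, 0.7, 9.9, 2.2])
bins = quantile_bin(column, max_bins=4)  # indices in range [0, 3]
```

Equal values always land in the same bin, so order-based algorithms see the same ranking at a fraction of the memory cost.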

class hep_ml.preprocessing.IronTransformer(max_points=10000, symmetrize=False)[source]

Bases: sklearn.base.BaseEstimator, sklearn.base.TransformerMixin

IronTransformer fits one-dimensional transformation for each feature.

After applying this transformation, the distribution of each feature becomes uniform. This is very handy for working with features that have different scales and complex distributions.

The name of transformer comes from https://en.wikipedia.org/wiki/Clothes_iron, which makes anything flat, being applied with enough pressure :)

Recommended for use with neural networks and other algorithms sensitive to the scale of features.

Parameters
  • symmetrize – if True, the resulting distribution is uniform on [-1, 1]; otherwise on [0, 1]

  • max_points (int) – keep at most this many points in the monotonic transformation.

fit(X, y=None, sample_weight=None)[source]

Fit the transformer: compute the set of one-dimensional transformations.

Parameters
  • X – pandas.DataFrame with data

  • y – ignored

  • sample_weight – ignored

Returns

self

transform(X)[source]

Transform data.

Parameters

X – pandas.DataFrame with data

Returns

pandas.DataFrame with transformed features
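As a rough illustration of the underlying idea (mapping each feature through its empirical CDF, which makes the training distribution uniform), here is a minimal numpy sketch; `iron_1d` is a hypothetical helper for a single feature, not IronTransformer's actual algorithm:

```python
import numpy as np

def iron_1d(train_column, column, symmetrize=False):
    """Map values through the empirical CDF of train_column so that
    the transformed training distribution is (approximately) uniform.
    A sketch of the flattening idea, not IronTransformer's real code."""
    xs = np.sort(train_column)
    # Empirical CDF values at the sorted training points.
    cdf = np.arange(1, len(xs) + 1) / len(xs)
    # Monotonic piecewise-linear interpolation applies the same rule
    # to unseen (test) values.
    result = np.interp(column, xs, cdf)
    if symmetrize:
        result = 2 * result - 1  # stretch [0, 1] onto [-1, 1]
    return result

train = np.array([5.0, 1.0, 3.0, 2.0, 4.0])
flat = iron_1d(train, train)  # uniformly spaced values in (0, 1]
```

Because the mapping is monotonic per feature, the ranking of samples within each feature is preserved; only the scale and shape of the distribution change.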