snowflake.ml.modeling.model_selection.RandomizedSearchCV

class snowflake.ml.modeling.model_selection.RandomizedSearchCV(*, estimator, param_distributions, n_iter=10, scoring=None, n_jobs=None, refit=True, cv=None, verbose=0, pre_dispatch='2*n_jobs', random_state=None, error_score=nan, return_train_score=False, input_cols: Optional[Union[str, Iterable[str]]] = None, output_cols: Optional[Union[str, Iterable[str]]] = None, label_cols: Optional[Union[str, Iterable[str]]] = None, passthrough_cols: Optional[Union[str, Iterable[str]]] = None, drop_input_cols: Optional[bool] = False, sample_weight_col: Optional[str] = None)

Bases: BaseTransformer

Randomized search on hyperparameters. For more details on this class, see sklearn.model_selection.RandomizedSearchCV (https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.RandomizedSearchCV.html)

Parameters:
  • estimator (estimator object) – An object of that type is instantiated for each grid point. This is assumed to implement the scikit-learn estimator interface. Either estimator needs to provide a score function, or scoring must be passed.

  • param_distributions (dict or list of dicts) – Dictionary with parameter names (str) as keys and distributions or lists of parameters to try. Distributions must provide an rvs method for sampling (such as those from scipy.stats.distributions). If a list is given, it is sampled uniformly. If a list of dicts is given, first a dict is sampled uniformly, and then a parameter is sampled using that dict as above.

  • input_cols (Optional[Union[str, List[str]]]) – A string or list of strings representing column names that contain features. If this parameter is not specified, all columns in the input DataFrame except the columns specified by the label_cols and sample_weight_col parameters are considered input columns.

  • label_cols (Optional[Union[str, List[str]]]) – A string or list of strings representing column names that contain labels. This is a required parameter for estimators, as there is no way to infer these columns. If this parameter is not specified, then the object is fitted without labels (like a transformer).

  • output_cols (Optional[Union[str, List[str]]]) – A string or list of strings representing column names that will store the output of predict and transform operations. The length of output_cols must match the expected number of output columns from the specific estimator or transformer class used. If this parameter is not specified, output column names are derived by adding an OUTPUT_ prefix to the label column names. These inferred output column names work for the estimator's predict() method, but output_cols must be set explicitly for transformers.

  • passthrough_cols (Optional[Union[str, List[str]]]) – A string or a list of strings indicating column names to be excluded from any operations (such as train, transform, or inference). These specified column(s) will remain untouched throughout the process. This option is helpful in scenarios requiring automatic input_cols inference, but where specific columns, such as index columns, should not be used during training or inference.

  • sample_weight_col (Optional[str]) – A string representing the column name containing the examples’ weights. This argument is only required when working with weighted datasets.

  • drop_input_cols (Optional[bool], default=False) – If set, the output of the predict() and transform() methods will not contain the input columns.

  • n_iter (int, default=10) – Number of parameter settings that are sampled. n_iter trades off runtime vs quality of the solution.

  • scoring (str, callable, list, tuple or dict, default=None) –

    Strategy to evaluate the performance of the cross-validated model on the test set.

    If scoring represents a single score, one can use:

    • a single string (see the scikit-learn documentation on the scoring parameter);

    • a callable (see the scikit-learn documentation on defining scoring strategies) that returns a single value.

    If scoring represents multiple scores, one can use:

    • a list or tuple of unique strings;

    • a callable returning a dictionary where the keys are the metric names and the values are the metric scores;

    • a dictionary with metric names as keys and callables as values.

    See the scikit-learn documentation on multimetric grid search for an example.

    If None, the estimator’s score method is used.

  • n_jobs (int, default=None) – Number of jobs to run in parallel. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See the scikit-learn Glossary for more details.

  • refit (bool, str, or callable, default=True) –

    Refit an estimator using the best found parameters on the whole dataset.

    For multiple metric evaluation, this needs to be a str denoting the scorer that would be used to find the best parameters for refitting the estimator at the end.

    Where there are considerations other than maximum score in choosing a best estimator, refit can be set to a function that returns the selected best_index_ given cv_results_. In that case, the best_estimator_ and best_params_ will be set according to the returned best_index_, while the best_score_ attribute will not be available.

    The refitted estimator is made available at the best_estimator_ attribute and permits using predict directly on this RandomizedSearchCV instance.

    Also for multiple metric evaluation, the attributes best_index_, best_score_ and best_params_ will only be available if refit is set and all of them will be determined w.r.t this specific scorer.

    See the scoring parameter to learn more about multiple metric evaluation.

  • cv (int, cross-validation generator or an iterable, default=None) –

    Determines the cross-validation splitting strategy. Possible inputs for cv are:

    • None, to use the default 5-fold cross validation,

    • integer, to specify the number of folds in a (Stratified)KFold,

    • CV splitter,

    • An iterable yielding (train, test) splits as arrays of indices.

    For integer/None inputs, if the estimator is a classifier and y is either binary or multiclass, StratifiedKFold is used. In all other cases, KFold is used. These splitters are instantiated with shuffle=False so the splits will be the same across calls.

    Refer to the scikit-learn User Guide for the various cross-validation strategies that can be used here.

  • verbose (int) –

    Controls the verbosity: the higher, the more messages.

    • >1 : the computation time for each fold and parameter candidate is displayed;

    • >2 : the score is also displayed;

    • >3 : the fold and candidate parameter indexes are also displayed together with the starting time of the computation.

  • pre_dispatch (int, or str, default='2*n_jobs') –

    Controls the number of jobs that get dispatched during parallel execution. Reducing this number can be useful to avoid an explosion of memory consumption when more jobs get dispatched than CPUs can process. This parameter can be:

    • None, in which case all the jobs are immediately created and spawned. Use this for lightweight and fast-running jobs, to avoid delays due to on-demand spawning of the jobs

    • An int, giving the exact number of total jobs that are spawned

    • A str, giving an expression as a function of n_jobs, as in ‘2*n_jobs’

  • random_state (int, RandomState instance or None, default=None) – Pseudo random number generator state used for random uniform sampling from lists of possible values instead of scipy.stats distributions. Pass an int for reproducible output across multiple function calls. See Glossary.

  • error_score ('raise' or numeric, default=np.nan) – Value to assign to the score if an error occurs in estimator fitting. If set to ‘raise’, the error is raised. If a numeric value is given, FitFailedWarning is raised. This parameter does not affect the refit step, which will always raise the error.

  • return_train_score (bool, default=False) – If False, the cv_results_ attribute will not include training scores. Computing training scores is used to get insights on how different parameter settings impact the overfitting/underfitting trade-off. However, computing the scores on the training set can be computationally expensive and is not strictly required to select the parameters that yield the best generalization performance.
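
As a rough illustration of how these constructor parameters fit together, the sketch below builds a RandomizedSearchCV over a scikit-learn estimator, combining a scipy.stats distribution and a plain list in param_distributions. The estimator, column names, and toy data are illustrative assumptions, not part of this reference.

    import pandas as pd
    from scipy.stats import randint
    from sklearn.ensemble import RandomForestClassifier
    from snowflake.ml.modeling.model_selection import RandomizedSearchCV

    # Toy pandas DataFrame standing in for a Snowpark or pandas dataset.
    df = pd.DataFrame({
        "FEATURE_1": [0.1, 0.4, 0.5, 0.9, 0.2, 0.8],
        "FEATURE_2": [1.0, 0.7, 0.3, 0.2, 0.9, 0.1],
        "TARGET":    [0, 0, 1, 1, 0, 1],
    })

    search = RandomizedSearchCV(
        estimator=RandomForestClassifier(),    # any estimator implementing the scikit-learn interface
        param_distributions={
            "n_estimators": randint(10, 200),  # distribution providing an rvs method
            "max_depth": [2, 4, 8, None],      # plain list, sampled uniformly
        },
        n_iter=5,
        cv=3,
        input_cols=["FEATURE_1", "FEATURE_2"],
        label_cols="TARGET",
        output_cols="PREDICTED_TARGET",
        random_state=42,
    )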


Methods

decision_function(dataset: Union[DataFrame, DataFrame], output_cols_prefix: str = 'decision_function_') Union[DataFrame, DataFrame]

Call decision_function on the estimator with the best found parameters. For more details on this function, see sklearn.model_selection.RandomizedSearchCV.decision_function (https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.RandomizedSearchCV.html#sklearn.model_selection.RandomizedSearchCV.decision_function)

Parameters:
  • dataset – Union[snowflake.snowpark.DataFrame, pandas.DataFrame] Snowpark or Pandas DataFrame.

  • output_cols_prefix – str Prefix for the response columns

Returns:

Output dataset with results of the decision function for the samples in input dataset.

fit(dataset: Union[DataFrame, DataFrame]) RandomizedSearchCV

Run fit with all sets of parameters. For more details on this function, see sklearn.model_selection.RandomizedSearchCV.fit (https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.RandomizedSearchCV.html#sklearn.model_selection.RandomizedSearchCV.fit)

Parameters:

dataset – Union[snowflake.snowpark.DataFrame, pandas.DataFrame] Snowpark or Pandas DataFrame.

Returns:

self
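
Continuing the construction sketch above (df and search are assumptions from that example), calling fit runs the randomized search and returns the same, now fitted, instance. Depending on your environment, fitting may require an active Snowflake session; that setup is omitted here.

    fitted = search.fit(df)   # returns self; the best estimator is refit on the full dataset when refit=True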

get_input_cols() List[str]

Input columns getter.

Returns:

Input columns.

get_label_cols() List[str]

Label column getter.

Returns:

Label column(s).

get_output_cols() List[str]

Output columns getter.

Returns:

Output columns.

get_params(deep: bool = True) Dict[str, Any]

Get parameters for this transformer.

Parameters:

deep – If True, will return the parameters for this transformer and contained subobjects that are transformers.

Returns:

Parameter names mapped to their values.
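
For instance, continuing the sketch above, the parameter names known to the wrapper (and, with deep=True, those of nested sub-objects) can be listed as follows; the exact set of keys depends on the wrapper version.

    params = search.get_params(deep=True)
    print(sorted(params))   # parameter names such as 'n_iter' and 'cv'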

get_passthrough_cols() List[str]

Passthrough columns getter.

Returns:

Passthrough column(s).

get_sample_weight_col() Optional[str]

Sample weight column getter.

Returns:

Sample weight column.

get_sklearn_args(default_sklearn_obj: Optional[object] = None, sklearn_initial_keywords: Optional[Union[str, Iterable[str]]] = None, sklearn_unused_keywords: Optional[Union[str, Iterable[str]]] = None, snowml_only_keywords: Optional[Union[str, Iterable[str]]] = None, sklearn_added_keyword_to_version_dict: Optional[Dict[str, str]] = None, sklearn_added_kwarg_value_to_version_dict: Optional[Dict[str, Dict[str, str]]] = None, sklearn_deprecated_keyword_to_version_dict: Optional[Dict[str, str]] = None, sklearn_removed_keyword_to_version_dict: Optional[Dict[str, str]] = None) Dict[str, Any]

Get sklearn keyword arguments.

This method enables modifying object parameters for special cases.

Parameters:
  • default_sklearn_obj – Sklearn object used to get default parameter values. Necessary when sklearn_added_keyword_to_version_dict is provided.

  • sklearn_initial_keywords – Initial keywords in sklearn.

  • sklearn_unused_keywords – Sklearn keywords that are unused in snowml.

  • snowml_only_keywords – snowml only keywords not present in sklearn.

  • sklearn_added_keyword_to_version_dict – Added keywords mapped to the sklearn versions in which they were added.

  • sklearn_added_kwarg_value_to_version_dict – Added keyword argument values mapped to the sklearn versions in which they were added.

  • sklearn_deprecated_keyword_to_version_dict – Deprecated keywords mapped to the sklearn versions in which they were deprecated.

  • sklearn_removed_keyword_to_version_dict – Removed keywords mapped to the sklearn versions in which they were removed.

Returns:

Sklearn parameter names mapped to their values.

predict(dataset: Union[DataFrame, DataFrame]) Union[DataFrame, DataFrame]

Call predict on the estimator with the best found parameters. For more details on this function, see sklearn.model_selection.RandomizedSearchCV.predict (https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.RandomizedSearchCV.html#sklearn.model_selection.RandomizedSearchCV.predict)

Parameters:

dataset – Union[snowflake.snowpark.DataFrame, pandas.DataFrame] Snowpark or Pandas DataFrame.

Returns:

Transformed dataset.

Raises:

SnowflakeMLException – If the output column(s) do not exist in the model signature.
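
Continuing the earlier sketch, a fitted search can generate predictions whose output column follows the output_cols setting assumed there ("PREDICTED_TARGET"):

    predictions = search.predict(df)
    print(predictions["PREDICTED_TARGET"])   # predictions returned alongside the input columns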

predict_log_proba(dataset: Union[DataFrame, DataFrame], output_cols_prefix: str = 'predict_log_proba_') Union[DataFrame, DataFrame]

Call predict_log_proba on the estimator with the best found parameters. For more details on this function, see sklearn.model_selection.RandomizedSearchCV.predict_log_proba (https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.RandomizedSearchCV.html#sklearn.model_selection.RandomizedSearchCV.predict_log_proba)

Parameters:
  • dataset – Union[snowflake.snowpark.DataFrame, pandas.DataFrame] Snowpark or Pandas DataFrame.

  • output_cols_prefix – str Prefix for the response columns

Returns:

Output dataset with log probability of the sample for each class in the model.

predict_proba(dataset: Union[DataFrame, DataFrame], output_cols_prefix: str = 'predict_proba_') Union[DataFrame, DataFrame]

Call predict_proba on the estimator with the best found parameters. For more details on this function, see sklearn.model_selection.RandomizedSearchCV.predict_proba (https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.RandomizedSearchCV.html#sklearn.model_selection.RandomizedSearchCV.predict_proba)

Parameters:
  • dataset – Union[snowflake.snowpark.DataFrame, pandas.DataFrame] Snowpark or Pandas DataFrame.

  • output_cols_prefix – Prefix for the response columns

Returns:

Output dataset with probability of the sample for each class in the model.
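
Continuing the earlier sketch, class probabilities come back in columns carrying the output_cols_prefix; the exact column suffixes depend on the underlying estimator's classes, so the sketch simply prints the returned column names.

    proba = search.predict_proba(df, output_cols_prefix="predict_proba_")
    print(proba.columns)   # probability columns prefixed with "predict_proba_"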

score(dataset: Union[DataFrame, DataFrame]) float

If implemented by the original estimator, return the score for the dataset.

Parameters:

dataset – Union[snowflake.snowpark.DataFrame, pandas.DataFrame] Snowpark or Pandas DataFrame.

Returns:

Score.
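
Continuing the earlier sketch, scoring the fitted search against a labeled dataset looks like this (df is assumed to carry the TARGET label column):

    test_score = search.score(df)   # single float, computed by the best estimator's score method or the configured scoring
    print(test_score)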

score_samples(dataset: Union[DataFrame, DataFrame], output_cols_prefix: str = 'score_samples_') Union[DataFrame, DataFrame]

Call score_samples on the estimator with the best found parameters. Only available if refit=True and the underlying estimator supports score_samples.

Parameters:
  • dataset (Union[DataFrame, pd.DataFrame]) – Snowpark or Pandas DataFrame.

  • output_cols_prefix (str) – Prefix for the response columns. Defaults to “score_samples_”.

Returns:

Output dataset with the scores of the samples in the input dataset.

Return type:

Union[DataFrame, pd.DataFrame]

set_drop_input_cols(drop_input_cols: Optional[bool] = False) None
set_input_cols(input_cols: Optional[Union[str, Iterable[str]]]) Base

Input columns setter.

Parameters:

input_cols – A single input column or multiple input columns.

Returns:

self

set_label_cols(label_cols: Optional[Union[str, Iterable[str]]]) Base

Label column setter.

Parameters:

label_cols – A single label column or multiple label columns if multi task learning.

Returns:

self

set_output_cols(output_cols: Optional[Union[str, Iterable[str]]]) Base

Output columns setter.

Parameters:

output_cols – A single output column or multiple output columns.

Returns:

self
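
Because each setter above returns the instance, the column configuration can be chained before fitting; the column names here are the assumptions from the earlier sketch.

    search.set_input_cols(["FEATURE_1", "FEATURE_2"]) \
          .set_label_cols("TARGET") \
          .set_output_cols("PREDICTED_TARGET")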

set_params(**params: Dict[str, Any]) None

Set the parameters of this transformer.

The method works on simple transformers as well as on nested objects. The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Parameters:

**params – Transformer parameter names mapped to their values.

Raises:

SnowflakeMLException – Invalid parameter keys.
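
A minimal sketch of updating parameters in place; only top-level keyword names from the constructor are used here, and whether nested <component>__<parameter> keys reach into the wrapped scikit-learn estimator should be verified against your version.

    search.set_params(n_iter=20, random_state=0)   # invalid parameter keys raise SnowflakeMLException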

set_passthrough_cols(passthrough_cols: Optional[Union[str, Iterable[str]]]) Base

Passthrough columns setter.

Parameters:

passthrough_cols – Column(s) that should not be used or modified by the estimator/transformer. The estimator/transformer simply passes these columns through without modification.

Returns:

self

set_sample_weight_col(sample_weight_col: Optional[str]) Base

Sample weight column setter.

Parameters:

sample_weight_col – A single column that represents sample weight.

Returns:

self

to_lightgbm() Any
to_sklearn() RandomizedSearchCV

Get sklearn.model_selection.RandomizedSearchCV object.
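
A common pattern, sketched under the assumption that the search has been fitted with refit=True, is to pull out the underlying scikit-learn object to inspect attributes such as best_params_ or cv_results_ that the wrapper does not surface directly:

    skl_search = search.to_sklearn()   # underlying sklearn.model_selection.RandomizedSearchCV
    print(skl_search.best_params_)
    print(skl_search.best_score_)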

to_xgboost() Any
transform(dataset: Union[DataFrame, DataFrame]) Union[DataFrame, DataFrame]

Call transform on the estimator with the best found parameters. For more details on this function, see sklearn.model_selection.RandomizedSearchCV.transform (https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.RandomizedSearchCV.html#sklearn.model_selection.RandomizedSearchCV.transform)

Parameters:

dataset – Union[snowflake.snowpark.DataFrame, pandas.DataFrame] Snowpark or Pandas DataFrame.

Returns:

Transformed dataset.

Attributes

model_signatures

Returns the model signatures of the current class.

Raises:

SnowflakeMLException – If the estimator is not fitted, the model signature cannot be inferred.

Returns:

Each method and its input/output signature.
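
For example, after fitting the sketch above, the inferred signatures can be inspected directly; printing the raw mapping avoids assumptions about its exact structure.

    print(search.model_signatures)   # raises SnowflakeMLException if called before fit()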
