6. Classifier

  1. Click on the Classifier under the Machine Learning category.

  1. Allocate to: Specify the variable name to assign to the model.

  2. Code View: Preview the generated code.

  3. Run: Execute the code.
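
The app generates plain Python for the selected classifier. As a rough sketch of the pattern (the classifier, the option values, and the variable name `model` here are example choices, not fixed output):

```python
from sklearn.linear_model import LogisticRegression  # the classifier picked in the UI

# "Allocate to" sets the variable name on the left-hand side;
# the constructor arguments come from the options panel.
model = LogisticRegression(C=1.0, random_state=42)
```

Pressing Run executes this code in the notebook, so the allocated variable is available in later cells.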


Logistic Regression

  1. Penalty: Specify the regularization method for the model. (l2 / l1 / elasticnet / none)

  2. C: Adjust the regularization strength. C is the inverse of the regularization strength, so smaller values apply stronger regularization.

  3. Random State: Set the seed value for the random number generator.
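
The options above map onto scikit-learn's `LogisticRegression` parameters. A minimal sketch (the dataset and values are illustrative):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)

# penalty="l2" with C=1.0 is the default regularization;
# smaller C means stronger regularization.
clf = LogisticRegression(penalty="l2", C=1.0, random_state=42, max_iter=500)
clf.fit(X, y)
```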


SupportVectorMachine Classifier

  1. C: C controls how strongly the model is regularized. A higher C value weakens the regularization, allowing a more complex model that fits the training data more closely.

  2. Kernel: A function that maps data into higher dimensions. You can control the complexity of the model by selecting the kernel type.

  3. Degree (Poly): Degree determines the degree of the polynomial. A higher degree increases the complexity of the model.

  4. Gamma (Poly, rbf, sigmoid): Gamma adjusts the curvature of the decision boundary. A higher value makes the model fit the training data more closely.

  5. Coef0 (Poly, sigmoid): An additional parameter for the kernel, controlling the offset of the kernel. A higher value makes the model fit the training data more closely.

  6. Random State: Set the seed value for the random number generator.
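
In scikit-learn these options correspond to the `SVC` parameters. A sketch with illustrative values (note that Degree and Coef0 only take effect for the kernels listed next to them):

```python
from sklearn.datasets import load_iris
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# kernel="poly" activates degree and coef0; gamma applies to
# the poly, rbf, and sigmoid kernels.
clf = SVC(C=1.0, kernel="poly", degree=3, gamma="scale", coef0=0.0,
          random_state=42)
clf.fit(X, y)
```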


DecisionTree Classifier

  1. Criterion: Specify the metric used to select the node split. (gini / entropy / log_loss)

  2. Max Depth: Specify the maximum depth of the tree.

  3. Min Samples Split: Specify the minimum number of samples required to split a node to prevent excessive splitting.

  4. Random State: Set the seed value for the random number generator.
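
These options correspond to scikit-learn's `DecisionTreeClassifier` parameters. A sketch with illustrative values:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# max_depth and min_samples_split limit tree growth to curb overfitting
clf = DecisionTreeClassifier(criterion="gini", max_depth=3,
                             min_samples_split=2, random_state=42)
clf.fit(X, y)
```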


RandomForest Classifier

  1. N estimators: Specify the number of trees to include in the ensemble.

  2. Criterion: Specify the metric used to select the node split. Options include gini / entropy.

  3. Max Depth: Specify the maximum depth of the trees.

  4. Min Samples Split: Specify the minimum number of samples required to split a node to prevent excessive splitting.

  5. N jobs: Specify the number of CPU cores or threads to use during model training for parallel processing.

  6. Random State: Set the seed value for the random number generator.
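
These options correspond to scikit-learn's `RandomForestClassifier` parameters. A sketch with illustrative values:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)

# n_jobs=-1 uses all available CPU cores to train trees in parallel
clf = RandomForestClassifier(n_estimators=100, criterion="gini",
                             max_depth=5, min_samples_split=2,
                             n_jobs=-1, random_state=42)
clf.fit(X, y)
```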


GradientBoosting Classifier

  1. Loss: Specify the loss function to be used. Options include deviance / exponential.

  2. Learning rate: Adjust the contribution of each tree and the degree to which the errors of previous trees are corrected. A large value may lead to non-convergence or overfitting, while a small value may increase training time.

  3. N estimators: Specify the number of trees to include in the ensemble.

  4. Criterion: Specify the metric used to select the node split. (friedman_mse / squared_error / mse / mae)

  5. Random State: Set the seed value for the random number generator.
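
These options correspond to scikit-learn's `GradientBoostingClassifier` parameters. A sketch with illustrative values (Loss is left at its default here, since its accepted names vary across scikit-learn versions):

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import GradientBoostingClassifier

X, y = load_iris(return_X_y=True)

# a small learning_rate usually needs a larger n_estimators to converge
clf = GradientBoostingClassifier(learning_rate=0.1, n_estimators=100,
                                 criterion="friedman_mse", random_state=42)
clf.fit(X, y)
```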


XGB Classifier

  1. N estimators: Specify the number of trees to include in the ensemble.

  2. Max Depth: Specify the maximum depth of the trees.

  3. Learning Rate: Adjust the contribution of each tree and the degree to which the errors of previous trees are corrected.

  4. Gamma: Specify the minimum loss reduction required to make a further split on a leaf node. A higher value makes the model more conservative and less prone to overfitting.

  5. Random State: Set the seed value for the random number generator.


LGBM Classifier

  1. Boosting type: Specify the boosting method used internally in the algorithm. (gbdt / dart / goss / rf (Random Forest))

  2. Max Depth: Specify the maximum depth of the trees.

  3. Learning rate: Adjust the contribution of each tree and the degree to which the errors of previous trees are corrected.

  4. N estimators: Specify the number of trees to include in the ensemble.

  5. Random State: Set the seed value for the random number generator.


CatBoost Classifier

  1. Learning rate: Adjust the contribution of each tree and the degree to which the errors of previous trees are corrected.

  2. Loss function: Specify the loss function to be used. (Logloss / CrossEntropy / MultiClass)

  3. Task type: Specify the hardware used for data processing. (CPU / GPU)

  4. Max depth: Specify the maximum depth of the trees.

  5. N estimators: Specify the number of trees to include in the ensemble.

  6. Random state: Set the seed value for the random number generator.
