Gradient Boosting Ensembles: Building Strong Models with scikit-learn

Gradient Boosting

Learn the basic idea of gradient boosting: building a strong model by adding many weak learners, each one correcting the errors of those before it.

What is Gradient Boosting?

Gradient boosting builds a model stage by stage. At each stage, a new weak learner (often a shallow decision tree) is fit to the residual errors of the current ensemble, so the combined model gradually improves.

  • Uses many shallow trees (weak learners).
  • Each new tree focuses on examples where the model is wrong.
  • Libraries such as XGBoost and LightGBM implement advanced gradient boosting variants.
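The idea above can be sketched from scratch in plain Python. This is an illustrative toy, not scikit-learn's implementation: each weak learner is a one-split decision stump fit to the residuals of the current model, and its contribution is scaled by a learning rate. All names (`fit_stump`, `boost`) are made up for this sketch.

```python
# Minimal gradient boosting for regression with squared-error loss.
# Weak learner: a decision stump (single threshold split) fit to residuals.

def fit_stump(xs, residuals):
    """Find the threshold split on xs that best fits the residuals."""
    best = None
    for t in sorted(set(xs)):
        left = [r for x, r in zip(xs, residuals) if x <= t]
        right = [r for x, r in zip(xs, residuals) if x > t]
        if not left or not right:
            continue
        lmean = sum(left) / len(left)
        rmean = sum(right) / len(right)
        sse = (sum((r - lmean) ** 2 for r in left)
               + sum((r - rmean) ** 2 for r in right))
        if best is None or sse < best[0]:
            best = (sse, t, lmean, rmean)
    _, t, lmean, rmean = best
    return lambda x: lmean if x <= t else rmean

def boost(xs, ys, n_rounds=20, learning_rate=0.3):
    pred0 = sum(ys) / len(ys)            # stage 0: predict the mean
    stumps = []
    preds = [pred0] * len(xs)
    for _ in range(n_rounds):
        # each new stump is trained on the errors of the current model
        residuals = [y - p for y, p in zip(ys, preds)]
        stump = fit_stump(xs, residuals)
        stumps.append(stump)
        preds = [p + learning_rate * stump(x) for p, x in zip(preds, xs)]
    return lambda x: pred0 + learning_rate * sum(s(x) for s in stumps)

xs = [1, 2, 3, 4, 5, 6]
ys = [1.2, 1.0, 1.1, 3.9, 4.2, 4.0]      # a step-shaped target
model = boost(xs, ys)
```

After a few rounds the ensemble's predictions approach the two plateaus of the target, which is exactly the "new learners correct previous errors" behaviour described above.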

Example: GradientBoostingClassifier

# Gradient boosting on the Iris dataset
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score, classification_report

iris = load_iris()
X, y = iris.data, iris.target

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

gb_clf = GradientBoostingClassifier(
    n_estimators=100,      # number of weak learners
    learning_rate=0.1,    # step size for each learner
    max_depth=3,          # depth of each tree
    random_state=42
)

gb_clf.fit(X_train, y_train)
y_pred = gb_clf.predict(X_test)

print("Accuracy:", accuracy_score(y_test, y_pred))
print("\nReport:\n", classification_report(y_test, y_pred, target_names=iris.target_names))
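Because the model is built one stage at a time, scikit-learn's `staged_predict` can replay the ensemble's predictions after each boosting stage. The sketch below (reusing the same Iris setup and hyperparameters as the example above) shows how test accuracy evolves as trees are added:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

gb_clf = GradientBoostingClassifier(
    n_estimators=100, learning_rate=0.1, max_depth=3, random_state=42
).fit(X_train, y_train)

# Test accuracy after 1, 2, ..., 100 trees
staged_acc = [
    accuracy_score(y_test, y_pred)
    for y_pred in gb_clf.staged_predict(X_test)
]
print("After 1 tree:   ", staged_acc[0])
print("After 100 trees:", staged_acc[-1])
```

Plotting `staged_acc` is a common way to choose `n_estimators`: once the curve flattens, additional trees add cost without improving accuracy.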