A decision tree creates a hierarchical segmentation of the input data based on a series of rules applied to each observation. Each rule assigns an observation to a segment based on the value of one predictor. Rules are applied sequentially, which results in a hierarchy of segments within segments. The hierarchy is called a tree, and each segment is called a node. The original segment contains the entire data set and is called the root node. A node and all of its successors form a branch. The final nodes are called leaves. For each leaf, a decision is made about the response variable and applied to all observations in that leaf. The exact decision depends on the response variable: for a category response, it is typically the most frequent level among the observations in the leaf; for a measure response, it is typically their average value.
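The node hierarchy described above can be sketched in code. The following is a minimal illustration using scikit-learn as a stand-in; the library, data set, and settings are assumptions for the sketch, not part of the decision tree described here.

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)

# Fit a shallow tree: the root node holds the entire data set, each
# split rule tests the value of one predictor, and each leaf assigns
# one decision (here, the most frequent class among its observations).
model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Print the rule hierarchy: root, branches, and leaves.
print(export_text(model, feature_names=load_iris().feature_names))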
The decision tree requires a measure or category response variable and at least one predictor. A predictor can be a category variable or a measure variable, but not an interaction term.
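As a rough sketch of these role requirements (again assuming scikit-learn and a made-up table; the column names are hypothetical), a category response corresponds to a classification tree, a measure response to a regression tree, and a category predictor must be encoded numerically before fitting:

import pandas as pd
from sklearn.preprocessing import OrdinalEncoder
from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor

df = pd.DataFrame({
    "region":  ["east", "west", "east", "south"],  # category predictor
    "visits":  [3, 8, 2, 5],                       # measure predictor
    "churned": ["yes", "no", "yes", "no"],         # category response
    "spend":   [52.0, 61.5, 48.2, 70.1],           # measure response
})

# Predictors can mix category and measure variables, but a category
# predictor needs a numeric encoding first.
X = df[["region", "visits"]].copy()
X["region"] = OrdinalEncoder().fit_transform(X[["region"]]).ravel()

DecisionTreeClassifier().fit(X, df["churned"])  # category response
DecisionTreeRegressor().fit(X, df["spend"])     # measure response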
The decision tree enables you to manually train and prune nodes by entering interactive mode. In interactive mode, you cannot modify the response variable, growth properties are locked, and you cannot export model score code. Certain modifications to predictors are allowed, such as converting a measure variable to a category variable. If you modify a predictor while in interactive mode, the decision tree remains in interactive mode but attempts to rebuild the splits and prunings using the same rules.
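Pruning in the product happens interactively in the Tree window. As a rough programmatic analogue (an assumption for illustration, not this product's mechanism), cost-complexity pruning in scikit-learn collapses branches back into leaves as its ccp_alpha penalty grows:

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

full   = DecisionTreeClassifier(random_state=0).fit(X, y)
pruned = DecisionTreeClassifier(random_state=0, ccp_alpha=0.02).fit(X, y)

# Pruning removes splits, so the pruned tree has fewer nodes.
print(full.tree_.node_count, "->", pruned.tree_.node_count)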
To enter interactive mode, either start making changes to the decision tree in the Tree window or click Use Interactive Mode on the Roles tab in the right pane. To leave interactive mode, click Use Non-Interactive Mode on the Roles tab.
Note: When you leave interactive mode, you lose all of your changes.