In deep learning, the term "depth" refers to the number of layers between the input and the output. These intermediate layers are called hidden layers because their outputs are not directly observed in the training data.
Hidden layers are where the network learns hierarchical features. As more hidden layers are added, the model becomes deeper, allowing it to learn more complex patterns and representations from the data.
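The idea can be sketched in a few lines of numpy. This is a minimal illustration (the layer sizes, ReLU activation, and random weights are arbitrary assumptions, not from the study guide): a feedforward network whose depth comes from the three hidden layers between input and output.

```python
import numpy as np

rng = np.random.default_rng(0)

# Input layer (4 units), 3 hidden layers (8 units each), output layer (2 units).
# The 3 hidden layers are what give this network its depth.
layer_sizes = [4, 8, 8, 8, 2]
weights = [rng.standard_normal((n_in, n_out))
           for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    """Pass x through each hidden layer (ReLU), then a linear output layer."""
    for w in weights[:-1]:          # each hidden layer builds more abstract features
        x = np.maximum(x @ w, 0.0)  # ReLU activation
    return x @ weights[-1]          # output layer

n_hidden = len(layer_sizes) - 2     # exclude input and output layers
x = rng.standard_normal((1, layer_sizes[0]))
out = forward(x)
```

Adding more entries to the middle of `layer_sizes` makes the model deeper, which is exactly the sense of "depth" the question is testing.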
Why the other options are incorrect:
A. Convolution: This is a specific type of operation applied in convolutional neural networks (CNNs) but is not the general source of model depth.
B. Dropout: A regularization technique used to prevent overfitting; it doesn’t contribute to the model’s depth.
C. Pooling: Reduces the dimensionality of feature maps; not responsible for the depth of the network.
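The distinction above can be made concrete with a toy layer list (the layer names here are purely illustrative, not a real framework API): dropout and pooling transform activations in place, so only the weight-bearing hidden layers count toward depth.

```python
# Hypothetical layer sequence for illustration only.
model = ["input", "dense", "dropout", "dense", "pooling", "dense", "output"]

# Dropout and pooling add no learnable layers of representation,
# so the depth comes from the dense hidden layers alone.
depth = sum(1 for layer in model if layer == "dense")
```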
Exact Extract and Official References:
CompTIA DataX (DY0-001) Official Study Guide, Domain: Machine Learning
“In deep neural networks, hidden layers represent the model’s depth. Each hidden layer allows the network to learn more abstract and high-level features.” (Section 4.3, Deep Learning Fundamentals)
Deep Learning Textbook by Ian Goodfellow, Yoshua Bengio, and Aaron Courville:
"Depth in deep learning refers to the number of hidden layers in the network. Each hidden layer extracts increasingly abstract features of the input data." (Chapter 6, Deep Feedforward Networks)