
AlexNet Architecture (2012)

Duke_Ryan 2025. 6. 27. 11:22

 

 

1. Deep Architecture 

5 convolutional layers + 3 fully connected layers, totaling roughly 60 million parameters.
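A minimal PyTorch sketch of that layout (single-GPU variant: the channel counts merge the paper's original two-GPU split, and the padding values follow common reimplementations rather than the paper's text):

```python
import torch
import torch.nn as nn

class AlexNet(nn.Module):
    """Sketch of the 5-conv + 3-FC AlexNet layout."""
    def __init__(self, num_classes: int = 1000):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 96, kernel_size=11, stride=4, padding=2),   # conv1
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(96, 256, kernel_size=5, padding=2),            # conv2
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(256, 384, kernel_size=3, padding=1),           # conv3
            nn.ReLU(inplace=True),
            nn.Conv2d(384, 384, kernel_size=3, padding=1),           # conv4
            nn.ReLU(inplace=True),
            nn.Conv2d(384, 256, kernel_size=3, padding=1),           # conv5
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
        )
        self.classifier = nn.Sequential(
            nn.Dropout(p=0.5),
            nn.Linear(256 * 6 * 6, 4096),                            # fc6
            nn.ReLU(inplace=True),
            nn.Dropout(p=0.5),
            nn.Linear(4096, 4096),                                   # fc7
            nn.ReLU(inplace=True),
            nn.Linear(4096, num_classes),                            # fc8
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        x = torch.flatten(x, 1)
        return self.classifier(x)

model = AlexNet()
print(sum(p.numel() for p in model.parameters()))  # ~62M, close to the paper's 60M
```

Most of those parameters sit in the first two fully connected layers, which is where dropout (point 3) is applied.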

 

2. ReLU Activation

First major CNN to use ReLU instead of tanh/sigmoid; the non-saturating activation made training substantially faster.
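A quick illustration of the difference, as a minimal PyTorch sketch:

```python
import torch

x = torch.tensor([-3.0, -0.5, 0.0, 0.5, 3.0])

print(torch.relu(x))  # tensor([0., 0., 0., 0.5, 3.]) - linear for x > 0, never saturates
print(torch.tanh(x))  # squashes toward ±1, so gradients vanish for large |x|
```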

 

3. Dropout Regularization

50% dropout in the first two fully connected layers to prevent overfitting.
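In a modern framework this is one line. Note that PyTorch uses "inverted" dropout (survivors are rescaled during training), whereas the original paper instead multiplied the outputs by 0.5 at test time; the two are equivalent in expectation:

```python
import torch
import torch.nn as nn

drop = nn.Dropout(p=0.5)   # each unit zeroed independently with probability 0.5

x = torch.ones(1, 8)
drop.train()
print(drop(x))  # about half the entries zeroed; the rest scaled by 1/(1-p) = 2
drop.eval()
print(drop(x))  # identity at inference time
```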

 

4. GPU Acceleration

Trained with the model split across two GTX 580 GPUs (3 GB each), since the network did not fit in one GPU's memory; the GPUs communicate only at certain layers. A toy version of the split is sketched below.
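A toy model-parallel sketch in PyTorch (assumes two CUDA devices are available; the real AlexNet split is more involved, with cross-GPU connections only at some layers):

```python
import torch
import torch.nn as nn

# Half of conv1's 96 feature maps on each GPU, in the spirit of the original split.
conv1_a = nn.Conv2d(3, 48, kernel_size=11, stride=4, padding=2).to("cuda:0")
conv1_b = nn.Conv2d(3, 48, kernel_size=11, stride=4, padding=2).to("cuda:1")

x = torch.randn(8, 3, 224, 224)
y_a = conv1_a(x.to("cuda:0"))                   # 48 channels computed on GPU 0
y_b = conv1_b(x.to("cuda:1"))                   # 48 channels computed on GPU 1
y = torch.cat([y_a, y_b.to("cuda:0")], dim=1)   # merged 96 channels on GPU 0
```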

 

5. Data Augmentation

Image translations, horizontal reflections, and PCA color augmentation.
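The crops and flips map directly onto torchvision transforms; the PCA color ("fancy PCA") jitter needs a small custom function. The eigenvalue/eigenvector constants below are the commonly quoted ImageNet RGB statistics, included as assumed placeholders, not values from this post:

```python
import torch
from torchvision import transforms

def pca_color_jitter(img, eigval, eigvec, sigma=0.1):
    """Fancy PCA: shift every RGB pixel along the principal components
    of the training set's pixel color distribution."""
    alpha = torch.randn(3) * sigma          # one random draw per image
    rgb_shift = eigvec @ (alpha * eigval)   # 3-vector added to all pixels
    return img + rgb_shift.view(3, 1, 1)

# Commonly quoted ImageNet RGB eigen-statistics (assumed; recompute for your data).
eigval = torch.tensor([0.2175, 0.0188, 0.0045])
eigvec = torch.tensor([[-0.5675,  0.7192,  0.4009],
                       [-0.5808, -0.0045, -0.8140],
                       [-0.5836, -0.6948,  0.4203]])

train_tf = transforms.Compose([
    transforms.Resize(256),                     # shorter side to 256, as in the paper
    transforms.RandomCrop(224),                 # random 224x224 patch (translation)
    transforms.RandomHorizontalFlip(),          # mirror with probability 0.5
    transforms.ToTensor(),
    transforms.Lambda(lambda t: pca_color_jitter(t, eigval, eigvec)),
])
```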

 

6. Local Response Normalization

A channel-wise normalization applied after ReLU in the early convolutional layers (later superseded by Batch Normalization).
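PyTorch ships this as nn.LocalResponseNorm; the hyperparameters below are the ones reported in the AlexNet paper (n=5, k=2, alpha=1e-4, beta=0.75):

```python
import torch
import torch.nn as nn

lrn = nn.LocalResponseNorm(size=5, alpha=1e-4, beta=0.75, k=2.0)

x = torch.relu(torch.randn(1, 96, 55, 55))  # e.g. conv1 activations after ReLU
y = lrn(x)  # each activation divided by a term summed over 5 adjacent channels
```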

 

 

 

 

 
