Wiredwisdom
Why MSE
In MSE, we square the differences (hence the "Square" in Mean Squared Error).
For a classification task, the number of classes determines the dimensionality of the vector space in which predictions and labels live.
In this n-dimensional space, MSE measures the squared Euclidean distance between the prediction vector and the label vector, averaged over the dimensions.
This is simply the Pythagorean theorem extended to n dimensions.
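A minimal sketch of this geometric view, with hypothetical numbers: a 3-class problem gives a 3-dimensional space, the one-hot label and the prediction are points in that space, and MSE is the squared distance between them divided by the number of dimensions.

```python
# 3 classes -> vectors live in a 3-dimensional space
label = [1.0, 0.0, 0.0]   # one-hot label vector
pred  = [0.7, 0.2, 0.1]   # model's prediction vector (hypothetical values)

# Squared Euclidean distance: Pythagorean theorem in n dimensions
sq_dist = sum((p - y) ** 2 for p, y in zip(pred, label))

# MSE is that squared distance averaged over the dimensions
mse = sq_dist / len(label)

print(round(sq_dist, 4))  # 0.14
print(round(mse, 4))      # 0.0467
```

Because MSE only rescales the squared distance by a constant factor, minimizing MSE and minimizing the Euclidean distance between prediction and label are the same objective.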