Contents
1. Result images
2. Code (2.1 scaling, 2.2 translation, 2.3 rotate, 2.4 affine transformation, 2.5 perspective transformation)
3. Transforming a set of point coordinates
2.1 scaling
import numpy as np
import cv2 as cv
"""
https://docs.opencv.org/master/da/d6e/tutorial_py_geometric_transformations.html
"""
img = cv.imread('C:/Users/Administrator/Desktop/messi.png', cv.IMREAD_GRAYSCALE)
rows, cols = img.shape
# (1) scaling
scaled_img1 = cv.resize(img, None, fx=2, fy=2, interpolation=cv.INTER_CUBIC)
# OR
height, width = img.shape[:2]
scaled_img2 = cv.resize(img, (2 * width, 2 * height), interpolation=cv.INTER_CUBIC)
2.2 translation
# (2) translation: shift by 100 px in x and 50 px in y
# cv.warpAffine: 2x3 transformation matrix
M = np.float32([[1, 0, 100], [0, 1, 50]])
translated_img = cv.warpAffine(img, M, (cols, rows))
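The 2x3 matrix acts on homogeneous coordinates [x, y, 1]^T, so every pixel is shifted by (tx, ty) = (100, 50). A quick numpy check on a single point (no image needed):

```python
import numpy as np

M = np.float32([[1, 0, 100], [0, 1, 50]])  # tx=100, ty=50

p = np.array([30.0, 40.0, 1.0])  # point (30, 40) in homogeneous form
print(M @ p)                     # [130.  90.] -> shifted by (100, 50)
```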
2.3 rotate
# (3) rotate
# cols-1 and rows-1 are the coordinate limits.
M = cv.getRotationMatrix2D(center=((cols - 1) / 2.0, (rows - 1) / 2.0), angle=45, scale=1)
rotated_img = cv.warpAffine(img, M, (cols, rows))
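Per the OpenCV docs, getRotationMatrix2D builds M = [[α, β, (1−α)·cx − β·cy], [−β, α, β·cx + (1−α)·cy]] with α = scale·cos θ and β = scale·sin θ, so the chosen center is a fixed point of the transform. A numpy sketch reproducing the matrix and checking that fixed point:

```python
import numpy as np

def rotation_matrix_2d(center, angle_deg, scale=1.0):
    """Build the same 2x3 matrix that cv.getRotationMatrix2D returns."""
    cx, cy = center
    a = scale * np.cos(np.radians(angle_deg))  # alpha
    b = scale * np.sin(np.radians(angle_deg))  # beta
    return np.array([[a,  b, (1 - a) * cx - b * cy],
                     [-b, a, b * cx + (1 - a) * cy]])

M = rotation_matrix_2d((50.0, 50.0), angle_deg=45)
# the rotation center maps to itself
print(M @ np.array([50.0, 50.0, 1.0]))  # [50. 50.]
```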
2.4 affine transformation
# (4) affine transformation
"""
https://blog.csdn.net/Caesar6666/article/details/104158047
仿射变换的原则是原图上是平行直线,变换后也是平行直线,仿射 = 线性映射 + 平移
为了找到2x3变换矩阵,需要原图的3个点和变换后的3个点;
cv.warpAffine
"""
pts1 = np.float32([[50, 50], [200, 50], [50, 200]])
pts2 = np.float32([[10, 100], [200, 50], [100, 250]])
M = cv.getAffineTransform(src=pts1, dst=pts2)
affine_img = cv.warpAffine(img, M, dsize=(cols, rows))
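Under the hood, getAffineTransform solves six linear equations built from the three point pairs. The same 2x3 matrix can be recovered with plain numpy, which also verifies that M maps pts1 exactly onto pts2:

```python
import numpy as np

pts1 = np.float32([[50, 50], [200, 50], [50, 200]])
pts2 = np.float32([[10, 100], [200, 50], [100, 250]])

# rows [x, y, 1]; solve A @ M.T = pts2 for the 2x3 matrix M
A = np.hstack([pts1, np.ones((3, 1), dtype=np.float32)])
M = np.linalg.solve(A, pts2).T          # same matrix as cv.getAffineTransform

# applying M to the source points reproduces the destination points
mapped = (M @ A.T).T
print(np.allclose(mapped, pts2))        # True
```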
2.5 perspective transformation
A perspective transformation lifts 2D coordinates into a 3D coordinate system and then projects them onto a new 2D plane.
Straight lines remain straight, but parallelism is not preserved: a parallelogram stays a parallelogram under an affine transform, yet becomes just a general quadrilateral (sides no longer parallel) under a perspective transform.
To find the 3x3 transformation matrix, you need 4 points in the source image and their 4 transformed positions.
# (5) perspective transformation
pts1 = np.float32([[56, 65], [368, 52], [28, 387], [389, 390]])
pts2 = np.float32([[0, 0], [300, 0], [0, 300], [300, 300]])
M = cv.getPerspectiveTransform(src=pts1, dst=pts2)
perspective_img = cv.warpPerspective(img, M, dsize=(300, 300))
3. Transforming a set of point coordinates
To warp a point set (rather than an image), combine getPerspectiveTransform with perspectiveTransform:
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

void test()
{
    // source point set pts1 and destination point set pts2
    std::vector<cv::Point2f> pts1, pts2;
    pts1.push_back(cv::Point2f(56, 65));
    pts1.push_back(cv::Point2f(368, 52));
    pts1.push_back(cv::Point2f(28, 387));
    pts1.push_back(cv::Point2f(389, 390));
    pts2.push_back(cv::Point2f(0, 0));
    pts2.push_back(cv::Point2f(300, 0));
    pts2.push_back(cv::Point2f(0, 300));
    pts2.push_back(cv::Point2f(300, 300));
    // compute the perspective transformation matrix
    cv::Mat matrix = cv::getPerspectiveTransform(pts1, pts2);
    // output point set
    std::vector<cv::Point2f> pts2_new;
    // apply the matrix to the source points
    cv::perspectiveTransform(pts1, pts2_new, matrix);
    // print the transformed points: they should coincide with pts2
    for (size_t i = 0; i < pts2_new.size(); i++) {
        std::cout << "Transformed: (" << pts2_new[i].x << ", " << pts2_new[i].y << ")" << std::endl;
        std::cout << "Expected:    (" << pts2[i].x << ", " << pts2[i].y << ")" << std::endl;
    }
}
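What cv::perspectiveTransform does under the hood is a multiply by the 3x3 matrix followed by division by the third (homogeneous) coordinate. The numpy sketch below also rebuilds the matrix itself by solving the eight-unknown linear system (fixing h33 = 1), which is essentially what getPerspectiveTransform computes:

```python
import numpy as np

pts1 = np.float64([[56, 65], [368, 52], [28, 387], [389, 390]])
pts2 = np.float64([[0, 0], [300, 0], [0, 300], [300, 300]])

# Each pair (x, y) -> (u, v) yields two equations in the 8 unknowns
# h11..h32 (h33 fixed to 1):
#   u = (h11 x + h12 y + h13) / (h31 x + h32 y + 1)
#   v = (h21 x + h22 y + h23) / (h31 x + h32 y + 1)
A, b = [], []
for (x, y), (u, v) in zip(pts1, pts2):
    A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
    A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
M = np.append(np.linalg.solve(np.array(A), np.array(b)), 1).reshape(3, 3)

# apply M and divide by w -- this is what cv.perspectiveTransform does
src_h = np.hstack([pts1, np.ones((4, 1))])       # homogeneous [x, y, 1]
dst_h = (M @ src_h.T).T
mapped = dst_h[:, :2] / dst_h[:, 2:3]
print(np.allclose(mapped, pts2))                 # True
```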