Implementing Jacobi Iteration in Python

Given a coefficient matrix mx, a right-hand-side matrix mr, a maximum number of iterations n, and a target error e, the code below returns the solution of the linear system. On top of the original author's code, a convergence check on the coefficient matrix has been added. Original post: https://blog.csdn.net/wushaoji321/article/details/106800464/
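As a brief refresher on the method itself: the Jacobi method splits the coefficient matrix A into its diagonal part D and its strictly lower and upper triangular parts L and U, so that A = D + L + U, and then repeats

x^{(k+1)} = D^{-1}\left(b - (L + U)\,x^{(k)}\right)

starting from an arbitrary initial guess (here the zero vector). The iteration converges for every initial guess if and only if the spectral radius of the iteration matrix

G = -D^{-1}(L + U)

is strictly less than 1, i.e. every eigenvalue of G has magnitude below 1. This is exactly the condition tested by the convergence check below.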

import numpy as np


def Jacobi_astringency(mx):  # check whether Jacobi iteration converges for the coefficient matrix
    L, D, U = [], [], []  # initialize the L, D, U matrices
    for i in range(len(mx)):
        L.append([]), D.append([]), U.append([])
        for j in range(len(mx)):
            if i > j:  # strictly lower-triangular part
                L[i].append(mx[i][j]), D[i].append(0), U[i].append(0)
            if i == j:  # diagonal part
                L[i].append(0), D[i].append(mx[i][j]), U[i].append(0)
            if i < j:  # strictly upper-triangular part
                L[i].append(0), D[i].append(0), U[i].append(mx[i][j])
    # build the L + U matrix (in a new list, so L itself is left untouched)
    lu = [[L[i][k] + U[i][k] for k in range(len(mx))] for i in range(len(mx))]
    G = np.dot(-np.linalg.inv(D), lu)  # iteration matrix G = -D^{-1}(L + U)
    e, v = np.linalg.eig(G)
    count = 0  # number of eigenvalues with magnitude >= 1 (initialized once, outside the loop)
    for i in range(len(e)):
        if abs(e[i]) >= 1:
            count = count + 1
    if count == 0:  # spectral radius < 1, so the iteration converges
        return True
    else:
        print("Jacobi iteration does not converge")
        return False
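
# A more compact version of the same convergence check, sketched here as an
# optional addition (not part of the original post); jacobi_converges is a
# hypothetical helper that computes the spectral radius of G directly with NumPy.
def jacobi_converges(mx):
    A = np.array(mx, dtype=float)
    D = np.diag(np.diag(A))                    # diagonal part D
    LU = A - D                                 # L + U (all off-diagonal entries)
    G = -np.linalg.inv(D) @ LU                 # iteration matrix G = -D^{-1}(L + U)
    return max(abs(np.linalg.eigvals(G))) < 1  # spectral radius < 1  =>  convergence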


def Jacobi(mx, mr, n=100, e=0.0001):  # mx: coefficient matrix, mr: right-hand-side matrix, n: max iterations, e: target error
    if len(mx) == len(mr):  # the dimensions of mx and mr must match before iterating
        if Jacobi_astringency(mx):  # check whether Jacobi iteration converges for mx
            x = []
            for i in range(len(mr)):
                x.append([0])  # initial guess with the same length as mr, set to all zeros
            count = 0  # iteration counter
            while count < n:  # keep iterating until the maximum number of iterations is reached
                tempx = []  # temporary storage for the iterate produced in this pass
                for i in range(len(x)):
                    ri = mr[i][0]
                    for k in range(len(mx[i])):
                        if k != i:
                            ri = ri - mx[i][k] * x[k][0]
                    ri = ri / mx[i][i]
                    tempx.append([ri])  # store one component of the new x
                print("Iteration {}: {}".format(count + 1, tempx))
                ee = []  # componentwise differences between two consecutive iterates
                for i in range(len(x)):
                    ee.append(abs(x[i][0] - tempx[i][0]))
                em = max(ee)  # largest componentwise difference
                print("Max error between iterations {} and {}: {}".format(count, count + 1, em))
                if em < e:
                    return tempx  # the change between iterates is small enough, return the result
                x = tempx
                count += 1
            return False  # the required accuracy was not reached within the maximum number of iterations
        else:
            print("Jacobi iteration does not converge for this coefficient matrix")
    else:
        print("Dimension mismatch between mx and mr")

# Example
mx = [[5, 2, 1], [-1, 4, 2], [2, -5, 10]]
mr = [[-12], [10], [1]]
print(Jacobi(mx, mr, 100, 0.0001))

The output is as follows:

Iteration 1: [[-2.4], [2.5], [0.1]]
Max error between iterations 0 and 1: 2.5
Iteration 2: [[-3.4200000000000004], [1.8499999999999999], [1.83]]
Max error between iterations 1 and 2: 1.73
Iteration 3: [[-3.5060000000000002], [0.73], [1.709]]
Max error between iterations 2 and 3: 1.1199999999999999
Iteration 4: [[-3.0338000000000003], [0.7689999999999999], [1.1662000000000001]]
Max error between iterations 3 and 4: 0.5428
Iteration 5: [[-2.94084], [1.1584499999999998], [1.0912600000000001]]
Max error between iterations 4 and 5: 0.38944999999999985
Iteration 6: [[-3.081632], [1.21916], [1.2673929999999998]]
Max error between iterations 5 and 6: 0.17613299999999996
Iteration 7: [[-3.1411426000000002], [1.0958955000000001], [1.3259064]]
Max error between iterations 6 and 7: 0.12326449999999989
Iteration 8: [[-3.10353948], [1.05176115], [1.2761762700000001]]
Max error between iterations 7 and 8: 0.04973012999999993
Iteration 9: [[-3.075939714], [1.0860269949999999], [1.246588471]]
Max error between iterations 8 and 9: 0.03426584499999996
Iteration 10: [[-3.0837284922], [1.107720836], [1.2582014403]]
Max error between iterations 9 and 10: 0.021693841000000047
Iteration 11: [[-3.0947286224600004], [1.0999671568], [1.2706061164400002]]
Max error between iterations 10 and 11: 0.012404676140000293
Iteration 12: [[-3.094108086008], [1.091014786165], [1.2689293028920001]]
Max error between iterations 11 and 12: 0.008952370635000095
Iteration 13: [[-3.0901917750444], [1.092008327052], [1.2643290102841]]
Max error between iterations 12 and 13: 0.0046002926079000694
Iteration 14: [[-3.08966913287762], [1.0952875510968498], [1.26404251853488]]
Max error between iterations 13 and 14: 0.0032792240448498156
Iteration 15: [[-3.0909235241457163], [1.095561457513155], [1.2655776021239489]]
Max error between iterations 14 and 15: 0.001535083589068842
Iteration 16: [[-3.091340103430052], [1.0944803179015965], [1.2659654335857207]]
Max error between iterations 15 and 16: 0.0010811396115584238
Iteration 17: [[-3.090985213877783], [1.0941822573496267], [1.2655081796368086]]
Max error between iterations 16 and 17: 0.00045725394891205084
Iteration 18: [[-3.0907745388672128], [1.09449960671215], [1.2652881714503699]]
Max error between iterations 17 and 18: 0.00031734936252325596
Iteration 19: [[-3.090857476974934], [1.0946622795580119], [1.2654047111295175]]
Max error between iterations 18 and 19: 0.00016267284586191089
Iteration 20: [[-3.0909458540491084], [1.0945832751915079], [1.2655026351739926]]
Max error between iterations 19 and 20: 9.792404447517278e-05
[[-3.0909458540491084], [1.0945832751915079], [1.2655026351739926]]
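
As a quick sanity check (an addition to the original post, not part of it), the converged values can be compared with NumPy's direct solver; the snippet below reuses the example matrices mx and mr from above.

import numpy as np

# Direct solve of the same system, for comparison with the Jacobi result.
mx = [[5, 2, 1], [-1, 4, 2], [2, -5, 10]]
mr = [[-12], [10], [1]]
print(np.linalg.solve(np.array(mx, dtype=float), np.array(mr, dtype=float)))
# This should agree closely with the final Jacobi iterate; note that the stopping
# rule bounds the change between successive iterates, not the true error, so a
# small discrepancy on the order of the tolerance is expected.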