How to compute a separate inner product for each matrix–vector pair in Python?

Let's say I have a bunch of matrices As and vectors bs.

As = array([[[1, 7], [3, 8]],
            [[2, 1], [5, 9]],
            [[7, 2], [8, 3]]])

bs = array([[8, 0], [8, 8], [7, 3]])

When I do np.inner(As, bs), I get:

array([[[  8,  64,  28], [ 24,  88,  45]],
       [[ 16,  24,  17], [ 40, 112,  62]],
       [[ 56,  72,  55], [ 64,  88,  65]]])

But I do not need all inner products. What I want is, to calculate each matrix with each vector once.

I can do something like this:

np.array([np.inner(a, b) for a, b in zip(As, bs)])

(The equivalent map(lambda (a, b): np.inner(a, b), zip(As, bs)) form only works in Python 2; tuple-unpacking lambdas were removed in Python 3.)

Then I get the expected matrix:

array([[  8,  24],
       [ 24, 112],
       [ 55,  65]])

Now I do not want to use zip, map, etc., because I need this operation more than 10**6 times (for image processing, specifically for GMM).

Is there any way to do this with numpy, scipy, etc. that is fast and efficient?

Solution

You can use np.einsum -

np.einsum('ijk,ik->ij', As, bs)
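As a runnable check on the question's data (a minimal sketch; array names are as in the question):

```python
import numpy as np

As = np.array([[[1, 7], [3, 8]],
               [[2, 1], [5, 9]],
               [[7, 2], [8, 3]]])
bs = np.array([[8, 0], [8, 8], [7, 3]])

# 'ijk,ik->ij': for each i, out[i, j] = sum over k of As[i, j, k] * bs[i, k]
out = np.einsum('ijk,ik->ij', As, bs)
print(out)
# [[  8  24]
#  [ 24 112]
#  [ 55  65]]
```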

Explanation

In the zip-based expression, at each step we take one element off As as a and the matching element off bs as b, and compute their inner product. Thus, we are doing:

In [19]: np.inner(As[0],bs[0])

Out[19]: array([ 8, 24])

In [20]: np.inner(As[1],bs[1])

Out[20]: array([ 24, 112])

In [21]: np.inner(As[2],bs[2])

Out[21]: array([55, 65])

Think of it as a loop: we iterate 3 times, corresponding to the length of the first axis of As, which matches the length of the first axis of bs. At each iteration we have a = As[0] and b = bs[0], then a = As[1] and b = bs[1], and so on.
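That loop reading can be sketched explicitly; this is just the zip-based expression unrolled (a sketch for illustration, not part of the original answer):

```python
import numpy as np

As = np.array([[[1, 7], [3, 8]],
               [[2, 1], [5, 9]],
               [[7, 2], [8, 3]]])
bs = np.array([[8, 0], [8, 8], [7, 3]])

# One inner product per (matrix, vector) pair, iterating the first axis
out = np.empty(As.shape[:2], dtype=As.dtype)
for i in range(As.shape[0]):
    out[i] = np.inner(As[i], bs[i])  # same as As[i] @ bs[i]

print(out.tolist())  # [[8, 24], [24, 112], [55, 65]]
```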

As is 3D and bs is 2D, so let's label their axes with iterators and imagine the inner product in our minds. At each iteration, a has iterators j,k and b has a single iterator. The inner product between a and b sum-reduces the second axis of a against the (only) axis of b, so those two axes must be aligned; we can therefore give b the same iterator k. Lifting this back from a to As and from b to bs: the inner product/sum-reduction removes the third axis of As and the second axis of bs. And iterating along the first axis of As and bs in lockstep means those first axes must stay aligned under the sum-reduction.

Let's summarize.

We have the iterators involved for the input arrays like so -

As : i x j x k

bs : i x k

Steps involved in the intended operation:

Keep the first axis of As aligned with the first axis of bs.

Sum-reduce the third axis of As against the second axis of bs.

Thus, we are left with the iterators i,j for the output, which gives the subscript string 'ijk,ik->ij'.
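The same two steps can also be written without einsum. The equivalents below (my additions, not from the original answer) use batched matmul and explicit broadcasting plus sum-reduction:

```python
import numpy as np

As = np.array([[[1, 7], [3, 8]],
               [[2, 1], [5, 9]],
               [[7, 2], [8, 3]]])
bs = np.array([[8, 0], [8, 8], [7, 3]])

out_einsum = np.einsum('ijk,ik->ij', As, bs)

# Batched matrix-vector product: treat bs as a stack of column vectors,
# (3, 2, 2) @ (3, 2, 1) -> (3, 2, 1), then drop the trailing axis
out_matmul = (As @ bs[:, :, None])[:, :, 0]

# Broadcast bs against the rows of As, then sum-reduce the last axis (k)
out_bcast = (As * bs[:, None, :]).sum(axis=-1)

assert (out_einsum == out_matmul).all() and (out_einsum == out_bcast).all()
```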

np.einsum is a pretty efficient implementation and is especially handy when we need to keep one or more axes of the input arrays aligned against each other.

For more info on einsum, I would suggest the NumPy np.einsum documentation.
