Python numpy.finfo(): Usage and Code Examples

The numpy.finfo() function displays the machine limits of a floating point type.

Syntax: numpy.finfo(dtype)

Parameters:

dtype: [float, dtype, or instance] The kind of floating point data type to get information about.

Returns: The machine parameters for the floating point type.
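The returned object exposes each machine parameter as an attribute, so you can read individual limits instead of printing the whole table. A minimal sketch using the documented attributes eps, max, and tiny:

```python
import numpy as np

info = np.finfo(np.float32)

# Individual machine parameters are available as attributes:
print(info.eps)   # smallest positive number x such that 1.0 + x != 1.0
print(info.max)   # largest representable finite value
print(info.tiny)  # smallest positive normal number
print(info.bits)  # number of bits occupied by the type
```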

Code #1:

# Python program explaining
# numpy.finfo() function

# importing numpy as geek
import numpy as geek

gfg = geek.finfo(geek.float32)

print(gfg)

Output:

Machine parameters for float32
---------------------------------------------------------------
precision =   6   resolution = 1.0000000e-06
machep =    -23   eps =        1.1920929e-07
negep =     -24   epsneg =     5.9604645e-08
minexp =   -126   tiny =       1.1754944e-38
maxexp =    128   max =        3.4028235e+38
nexp =        8   min =        -max
---------------------------------------------------------------
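A common practical use of these limits is picking a tolerance for floating point comparison, since direct equality often fails due to rounding. A small sketch (the factor of 4 is an illustrative choice, not a NumPy convention):

```python
import numpy as np

eps = np.finfo(np.float64).eps  # about 2.22e-16

a = 0.1 + 0.2
b = 0.3
# Direct equality fails because of rounding error:
print(a == b)                # False
# Comparing within a few machine epsilons succeeds:
print(abs(a - b) < 4 * eps)  # True
```

For real code, numpy.isclose() wraps this idea with configurable relative and absolute tolerances.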

Code #2:

# Python program explaining
# numpy.finfo() function

# importing numpy as geek
import numpy as geek

gfg = geek.finfo(geek.float64)

print(gfg)

Output:

Machine parameters for float64
---------------------------------------------------------------
precision =  15   resolution = 1.0000000000000001e-15
machep =    -52   eps =        2.2204460492503131e-16
negep =     -53   epsneg =     1.1102230246251565e-16
minexp =  -1022   tiny =       2.2250738585072014e-308
maxexp =   1024   max =        1.7976931348623157e+308
nexp =       11   min =        -max
---------------------------------------------------------------
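As the parameter description notes, finfo accepts more than NumPy scalar types: it also takes a dtype object or the built-in float type (which corresponds to float64). A short sketch:

```python
import numpy as np

# The built-in float maps to float64:
print(np.finfo(float).eps == np.finfo(np.float64).eps)  # True

# A dtype taken from an existing array also works:
arr = np.zeros(3, dtype=np.float16)
print(np.finfo(arr.dtype).bits)  # 16
```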
