2991: [2012 Peking University Campus Contest] Do Not Gamble

A rigged-coin ("mercury coin") gambling problem: the probability that heads appears fewer than a given number of times is computed with a probability DP, and the computation is accelerated with the fast Fourier transform (FFT).


2991: [2012 Peking University Campus Contest] Do Not Gamble

Time Limit: 10 Sec   Memory Limit: 128 MB
Submit: 8   Solved: 3

Description

Have you ever gambled? I hope you have not. Having been a professional gambler for three years, I learned every trick they play. Gambling has nothing to do with luck. It is all about cheating.

One of the most elementary tricks is the "mercury dice". Instead of being solid, the die has a hollow chamber inside, filled with mercury. This little modification makes the die asymmetric, so the probability of each outcome is not equal. Loosely speaking, which face ends up on top depends on the previous state.

To simplify the model, we use a "mercury coin" here. As mentioned, the state (head or tail) after a toss depends on the state before it. The probability that the two states are the same is p. For example, if the head side is up now, the probability that the coin is still head up after tossing is p, and the probability that the tail side comes up is 1-p. If p = 0.5, it is an ordinary coin; when p ≠ 0.5, it is a mercury coin. Assume the coin is head up now. After we toss it N times, what is the probability that the number of heads is less than a certain number K?

Input

The test file contains multiple lines, and each line is a test case. 
Each test case consists of three numbers:
N K p
N and K are integers, and p is a floating-point number.
N ≤ 30000
K ≤ N
0 ≤ p ≤ 1

Output

For each test case, output the probability that, after tossing N times, the number of heads is less than K, rounded to three digits after the decimal point.

Sample Input

171 88 0.107

Sample Output

0.83785


Problem:
There is a peculiar coin: each toss lands on the same face as the previous toss with probability p, and on the opposite face with probability 1-p. The coin starts heads up. After tossing it n times, what is the probability that heads comes up fewer than m times?
Solution:
Let F[i][j] be the probability that after i tosses, heads has come up j times and the last toss is heads, and G[i][j] the same with the last toss tails. The transitions of this DP are obvious (a sketch follows below), but with n up to 30000 the O(n^2) table is far too slow and gets TLE.
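A minimal sketch of that naive DP (my own illustration with a made-up helper name, not part of the original code); it reproduces the sample case, but it needs O(n^2) time and memory, hopeless for n = 30000:

#include <cstdio>
#include <vector>
using namespace std;

// F[i][j]: after i tosses, j heads so far, last toss heads; G[i][j]: same, last toss tails
double naiveProb(int n, int k, double p) {
    vector< vector<double> > F(n + 1, vector<double>(n + 1, 0.0)), G = F;
    F[0][0] = 1.0;                                  // coin starts heads up, nothing tossed yet
    for (int i = 0; i < n; i++)
        for (int j = 0; j <= i; j++) {
            F[i + 1][j + 1] += F[i][j] * p;         // heads stays heads (this toss counts as a head)
            G[i + 1][j]     += F[i][j] * (1.0 - p); // heads flips to tails
            G[i + 1][j]     += G[i][j] * p;         // tails stays tails
            F[i + 1][j + 1] += G[i][j] * (1.0 - p); // tails flips to heads (counts as a head)
        }
    double ans = 0.0;
    for (int j = 0; j < k; j++) ans += F[n][j] + G[n][j];
    return ans;
}

int main() {
    printf("%.5f\n", naiveProb(171, 88, 0.107));    // sample case, expected output 0.83785
    return 0;
}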
So try redefining the states:
F[i][j]: after i tosses, exactly j tosses matched the starting face, and the last toss also matches the starting face;
G[i][j]: the same, but the last toss shows the opposite face.
Two blocks of i tosses can now be merged: if the first block ends on the opposite face, the second block starts from the opposite face and, by symmetry, its "matches the starting face" count is i minus that of an ordinary block. This gives F[2i][j] = Σ_t F[i][t]·F[i][j-t] + Σ_t G[i][t]·G[i][i-j+t], and an analogous expression for G[2i][j]. Reversing F and G (written Fr and Gr) turns the second sums into ordinary convolutions. The rest is handled like fast exponentiation: square the block with FFT, and multiply it into the answer whenever the corresponding bit of n is set (the recurrences are written out below).
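Written out explicitly (my reconstruction from the definitions above, with the reversed rows defined as Fr_{i,s} = F_{i,i-s} and Gr_{i,s} = G_{i,i-s}), the doubling step is, in LaTeX:

F_{2i,j} = \sum_{t} F_{i,t}\,F_{i,j-t} + \sum_{t} G_{i,t}\,G_{i,\,i-j+t},
\qquad
G_{2i,j} = \sum_{t} F_{i,t}\,G_{i,j-t} + \sum_{t} G_{i,t}\,F_{i,\,i-j+t}.

Substituting the reversed rows makes every term an ordinary convolution:

F_{2i} = F_i * F_i + G_i * Gr_i, \qquad G_{2i} = F_i * G_i + G_i * Fr_i,

which is exactly the pointwise product the code below forms in the arrays C and D after transforming everything with FFT.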


Got bitten by precision... GG. BZOJ has no special judge, so the output needs a special case.
#include<iostream>  
#include<cstdio>  
#include<cstring>  
#include<algorithm>  
#include<cmath>  
using namespace std;  
   
const int maxn = 2E5 + 10;  
typedef double DB;  
const DB PI = acos(-1.0);  
   
struct Virt{  
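    // bare-bones complex number with just the operators the hand-rolled FFT needs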
    DB r,i;  
    Virt(){}  
    Virt(DB r,DB i): r(r),i(i){}  
    Virt operator + (const Virt &b) {return Virt(r + b.r,i + b.i);}  
    Virt operator - (const Virt &b) {return Virt(r - b.r,i - b.i);}  
    Virt operator * (const Virt &b) {return Virt(r*b.r - i*b.i,r*b.i + i*b.r);}  
    Virt operator * (const DB &t) {return Virt(r*t,i*t);}  
    Virt operator / (const DB &t) {return Virt(r/t,i/t);}  
}F[maxn],G[maxn],Fr[maxn],Gr[maxn],A[maxn],B[maxn],C[maxn],D[maxn];  
   
int n,m;  
DB p,q; 
bool flag; 
   
void Rader(Virt *a,int len)  
{  
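    // bit-reversal reordering of a[0..len-1] so the iterative FFT below can run in place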
    int j = len >> 1;  
    for (int i = 1; i < len - 1; i++) {  
        if (i < j) swap(a[i],a[j]);  
        int k = len >> 1;  
        while (j >= k) {  
            j -= k;  
            k >>= 1;  
        }  
        j += k;  
    }  
}  
   
void FFT(Virt *a,int len,int on)  
{  
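    // iterative radix-2 FFT: on = 1 gives the forward transform, on = -1 the inverse
    // (the inverse divides every coefficient by len at the end)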
    Rader(a,len);  
    DB T = 2.00*PI*(DB)(on);  
    for (int k = 2; k <= len; k <<= 1) {  
        Virt wn = Virt(cos(T/(DB)(k)),sin(T/(DB)(k)));  
        for (int i = 0; i < len; i += k) {  
            Virt w = Virt(1.00,0.00);  
            for (int j = i; j < i + (k>>1); j++) {  
                Virt u = a[j];  
                Virt t = w*a[j + (k>>1)];  
                a[j] = u + t;  
                a[j + (k>>1)] = u - t;  
                w = w*wn;  
            }  
        }  
    }  
    if (on == -1)  
        for (int i = 0; i < len; i++)  
            a[i] = a[i]/(DB)(len);  
}  
   
void Multi(int len)
{
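    // merge the current len-toss block (F, G and reversed copies Fr, Gr) into the running
    // answer (A, B); the very first merge simply copies the block, later ones convolve via FFT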
    if (!flag) {
        for (int i = 0; i <= len; i++)
            A[i] = F[i],B[i] = G[i];
        flag = 1;
        return;
    }
    int N = len << 2;
    FFT(A,N,1); FFT(B,N,1);
    FFT(F,N,1); FFT(G,N,1);
    FFT(Fr,N,1); FFT(Gr,N,1);
    for (int i = 0; i < N; i++) {
        Virt a = A[i],b = B[i];
        A[i] = a*F[i] + b*Gr[i];
        B[i] = a*G[i] + b*Fr[i];
    }
    FFT(A,N,-1); FFT(B,N,-1);
    FFT(F,N,-1); FFT(G,N,-1);
    FFT(Fr,N,-1); FFT(Gr,N,-1);
}
   
void Solve(int y)  
{  
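    // binary exponentiation over the tosses: F/G (with reversed copies Fr/Gr) describe a
    // block of N tosses; each round squares the block via FFT convolution (into C, D) and,
    // when the corresponding bit of y is set, multiplies it into the running answer via Multi()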
    F[1] = Fr[0] = Virt(p,0.00);  
    G[0] = Gr[1] = Virt(q,0.00);  
    int N = 1;  
    for (; y; y >>= 1) {
        if (y & 1) Multi(N);
        int Now = N << 2;
        FFT(F,Now,1); FFT(Fr,Now,1);  
        FFT(G,Now,1); FFT(Gr,Now,1);
        for (int i = 0; i < Now; i++) {
            C[i] = F[i]*F[i] + G[i]*Gr[i];
            D[i] = F[i]*G[i] + G[i]*Fr[i];
        }
        FFT(C,Now,-1); FFT(D,Now,-1);
        N <<= 1;
        for (int i = 0; i <= N; i++) {  
            Fr[N - i] = F[i] = C[i];  
            Gr[N - i] = G[i] = D[i];  
        }  
        for (int i = N + 1; i < Now; i++) {
            F[i] = Fr[i] = Virt(0.00,0.00);
            G[i] = Gr[i] = Virt(0.00,0.00);
        }
    }
}  
   
void Clear(int len)  
{  
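    // zero every polynomial buffer so the next test case starts from a clean state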
    int N = len << 2;
    for (int i = 0; i < N; i++) {  
        F[i] = Fr[i] = G[i] = Gr[i] = Virt(0.00,0.00);  
        A[i] = B[i] = C[i] = D[i] = Virt(0.00,0.00);  
    }  
}  
   
int main()  
{  
    #ifdef DMC  
        freopen("DMC.txt","r",stdin);  
    #endif  
       
    while (scanf("%d%d%lf",&n,&m,&p) != EOF) {  
        q = 1.00 - p;  
        Solve(n);  
        DB ans = 0;  
        flag = 0;
        for (int i = 0; i < m; i++)  
            ans += A[i].r + B[i].r;  
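        // the special case mentioned above: BZOJ has no special judge, so an essentially-zero
        // sum is printed as a literal 0.00000 rather than whatever the FFT rounding noise gives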
        if (fabs(ans) < 1e-6) puts("0.00000");
        else printf("%.5f\n",ans);
        Clear(n);  
    }  
    return 0;  
}  
