HDU2767 Proving Equivalences

Problem link
Problem Description
Consider the following exercise, found in a generic linear algebra textbook.
Let A be an n × n matrix. Prove that the following statements are equivalent:
(a) A is invertible.
(b) Ax = b has exactly one solution for every n × 1 matrix b.
(c) Ax = b is consistent for every n × 1 matrix b.
(d) Ax = 0 has only the trivial solution x = 0.
The typical way to solve such an exercise is to show a series of implications. For instance, one can proceed by showing that (a) implies (b), that (b) implies (c), that (c) implies (d), and finally that (d) implies (a). These four implications show that the four statements are equivalent.
Another way would be to show that (a) is equivalent to (b) (by proving that (a) implies (b) and that (b) implies (a)), that (b) is equivalent to (c), and that (c) is equivalent to (d). However, this way requires proving six implications, which is clearly a lot more work than just proving four implications!
I have been given some similar tasks, and have already started proving some implications. Now I wonder, how many more implications do I have to prove? Can you help me determine this?
Input
On the first line one positive number: the number of testcases, at most 100. After that per testcase:
* One line containing two integers n (1 ≤ n ≤ 20000) and m (0 ≤ m ≤ 50000): the number of statements and the number of implications that have already been proved.
* m lines with two integers s1 and s2 (1 ≤ s1, s2 ≤ n and s1 ≠ s2) each, indicating that it has been proved that statement s1 implies statement s2.
Output
Per testcase:
* One line with the minimum number of additional implications that need to be proved in order to prove that all statements are equivalent.
Sample Input
2
4 0
3 2
1 2
1 3
Sample Output
4
2
Source
NWERC 2008

Problem Summary

Given n vertices and m directed edges, add the minimum number of edges so that the whole graph becomes strongly connected. (Multiple test cases.)

Solution

On reading the problem, one immediately suspects that some subgraphs may already be strongly connected, so we first condense the graph with Tarjan's algorithm, leaving a DAG of strongly connected components (SCCs). My first idea was to take the counts of in-degree-0 and out-degree-0 vertices, subtract 1 from each, clamp both at 0, add them and subtract 1 again, hoping to string those vertices together into a cycle; but the correct answer is simply the maximum of the number of in-degree-0 vertices and the number of out-degree-0 vertices. To make the condensed graph strongly connected it suffices that every vertex has nonzero in-degree and nonzero out-degree, and a single new edge can simultaneously give one sink its missing outgoing edge and one source its missing incoming edge, so max(#sources, #sinks) new edges are both necessary and sufficient (a suitable way of pairing sources with sinks can always be found). If the original graph is already strongly connected, output 0.
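As a sanity check, here is a minimal, self-contained sketch of just the counting step, hardcoded for the second sample (edges 1→2 and 1→3). There every vertex is its own SCC, so the condensed DAG equals the input graph; the condensation is assumed rather than computed, and the real work is done by the full program below:

#include<cstdio>
// Sketch of the counting step only, hardcoded for the second sample.
// Assumes the condensation has already been done (trivially, here).
int main()
{
    const int tot=3;                  // number of SCCs after condensation
    int edges[2][2]={{1,2},{1,3}};    // edges of the condensed DAG
    int in[4]={0},out[4]={0};
    for(int i=0;i<2;i++)
      in[edges[i][1]]++,out[edges[i][0]]++;
    int numin=0,numout=0;
    for(int i=1;i<=tot;i++)
    {
        if(!in[i]) numin++;           // source: needs an incoming edge
        if(!out[i]) numout++;         // sink: needs an outgoing edge
    }
    printf("%d\n",numin>numout?numin:numout); // prints 2, matching the sample
    return 0;
}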

CODE:

#include<cstdio>
#include<cstring>
const int N=2e4+10;
const int M=5e4+10;
struct edge
{
    int nxt,to;
}a[M];
int head[N];
int s[N],top;        // explicit stack used by Tarjan
bool instack[N];
int dfn[N],low[N];   // DFS order and low-link values
int in[N],out[N];    // degrees of the condensed vertices
int block[N];        // SCC id of each original vertex
int q,n,m,x,y,num,tot,Time,numin,numout;
inline int max(const int &a,const int &b){return a>b?a:b;}
inline int min(const int &a,const int &b){return a<b?a:b;}
inline void add(int x,int y)
{
    a[++num].nxt=head[x],a[num].to=y,head[x]=num;
}
// Tarjan's SCC algorithm: labels every vertex with its component id in block[]
void dfs(int now)
{
    dfn[now]=low[now]=++Time;
    s[++top]=now;
    instack[now]=1;
    for(int i=head[now];i;i=a[i].nxt)
      if(!dfn[a[i].to])
      {
        dfs(a[i].to);
        low[now]=min(low[now],low[a[i].to]);
      }
      else if(instack[a[i].to]) low[now]=min(low[now],dfn[a[i].to]);
    if(low[now]==dfn[now])   // now is the root of an SCC: pop it off the stack
    {
        int tmp;
        tot++;
        do
        {
            tmp=s[top--];
            instack[tmp]=0;
            block[tmp]=tot;
        }while(tmp!=now);
    }
}
int main()
{
    scanf("%d",&q);
    while(q--)
    {
        scanf("%d%d",&n,&m);
        memset(head,0,sizeof(head));
        memset(dfn,0,sizeof(dfn));
        memset(in,0,sizeof(in));
        memset(out,0,sizeof(out));
        num=tot=top=Time=numin=numout=0;
        for(int i=1;i<=m;i++)
          scanf("%d%d",&x,&y),add(x,y);
        for(int i=1;i<=n;i++)
          if(!dfn[i]) dfs(i);
        if(tot==1) printf("0\n");   // a single SCC: already strongly connected
        else
        {
            // count in/out degrees of the condensed DAG (inter-SCC edges only)
            for(int j=1;j<=n;j++)
              for(int i=head[j];i;i=a[i].nxt)
                if(block[j]!=block[a[i].to]) in[block[a[i].to]]++,out[block[j]]++;
            for(int i=1;i<=tot;i++)
            {
                if(!in[i]) numin++;   // source SCC: no incoming edge
                if(!out[i]) numout++; // sink SCC: no outgoing edge
            }
            printf("%d\n",max(numin,numout));
        }
    }
    return 0;
}
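On the sample input this prints 4 and 2. Each test case costs O(n + m): one Tarjan pass plus one scan over all edges to build the degrees of the condensed DAG. Note that dfs is recursive, so the stack depth can reach n = 20000 in the worst case; on judges with a tight stack limit the recursion may need to be rewritten with an explicit stack.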