ZOJ-3794 Greedy Driver (Shortest Path)

Problem

You are given a directed graph with n nodes and m edges. Your car starts at node 1 with a full tank of capacity C and has to drive to node n. Some nodes offer unlimited free refueling, and at some nodes you may sell any amount of fuel at that node's local price. Find the maximum profit that can be earned from a single fuel sale on the way from 1 to n.
1 ≤ n ≤ 1000
1 ≤ m ≤ 100000

Approach

Fuel is sold at only one node, and only once, so that node can simply be enumerated. Build the forward graph and compute, for every node i, the maximum amount of fuel we can still hold on arrival at i from node 1; then build the reverse graph and compute, starting from n, the minimum amount of fuel needed to reach each node i (which is exactly the minimum fuel cost from i to n on the forward graph). The difference of these two values is the surplus fuel at node i that can be sold.
Neither of these computations has the monotonicity that Dijkstra relies on: the fuel on board jumps back up (instead of only decreasing) whenever the car reaches a free refueling station. Shortest/longest-path problems of this kind are usually easy to handle with SPFA, but SPFA's worst-case complexity is O(nm), so when nm is large one should look for another method and avoid SPFA whenever possible.
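Putting the two passes together, with dis1[i] and dis2[i] denoting the two quantities above (the array names used in the code below), the answer is

ans = max over all i with dis1[i] ≥ dis2[i] of (dis1[i] − dis2[i]) · sell[i],

and ans stays at −1 when no such i exists, i.e. when node n cannot be reached. As a small made-up example: with C = 10, edges 1→2 of cost 3 and 2→3 of cost 4 (n = 3), no free refueling stations, and a selling price of 5 at node 2, we get dis1[2] = 7 and dis2[2] = 4, so at most 3 units of fuel can be sold at node 2, for a profit of 15.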

Code

#include<iostream>
#include<cmath>
#include<cstdio>
#include<cstdlib>
#include<cstring>
#include<algorithm>
#include<queue>
#define FOR(i,x,y) for(int i=(x);i<=(y);i++)
#define DOR(i,x,y) for(int i=(x);i>=(y);i--)
#define N 1003
#define M 100003
typedef long long LL;
using namespace std;
int dis1[N],dis2[N],sell[N];    // dis1[i]: max fuel on arrival at i from 1; dis2[i]: min fuel needed from i to n; sell[i]: selling price at i
// Adjacency list stored in arrays; head[u] is the index of the first edge out of u, nxt chains the rest.
template<const int maxn,const int maxm>struct Linked_list
{
    int head[maxn],to[maxm],cost[maxm],nxt[maxm],tot;
    void clear(){memset(head,-1,sizeof(head));tot=0;}
    void add(int u,int v,int w){to[++tot]=v,cost[tot]=w,nxt[tot]=head[u];head[u]=tot;}
    #define EOR(i,G,u) for(int i=G.head[u];~i;i=G.nxt[i])
};
Linked_list<N,M>G;              // forward graph
Linked_list<N,M>R;              // reverse graph
bool addable[N],vis[N];         // addable[i]: free refueling at i; vis[i]: node currently in the SPFA queue
int n,m,C;
// SPFA on the forward graph: dis1[i] = maximum fuel left when reaching i from s.
void solve1(int s)
{
    queue<int>q;
    while(!q.empty())q.pop();
    FOR(i,1,n)dis1[i]=-1;
    memset(vis,0,sizeof(vis));
    dis1[s]=C;vis[s]=1;
    q.push(s);
    while(!q.empty())
    {
        int u=q.front();q.pop();vis[u]=0;
        if(addable[u])dis1[u]=C;                // free refueling: top the tank back up to C
        EOR(i,G,u)
        {
            int v=G.to[i],w=G.cost[i];
            if(dis1[u]-w>dis1[v]&&dis1[u]-w>=0) // relax only if the edge can be driven with the fuel on board
            {
                dis1[v]=dis1[u]-w;
                if(!vis[v])
                {
                    vis[v]=1;
                    q.push(v);
                }
            }
        }
    }
}
// SPFA on the reverse graph: dis2[i] = minimum fuel needed to drive from i to n.
void solve2(int s)
{
    queue<int>q;
    while(!q.empty())q.pop();
    FOR(i,1,n)dis2[i]=C+1;
    memset(vis,0,sizeof(vis));
    dis2[s]=0;vis[s]=1;
    q.push(s);
    while(!q.empty())
    {
        int u=q.front();q.pop();vis[u]=0;
        if(addable[u])dis2[u]=0;                // free refueling here, so no fuel needs to be carried into u
        EOR(i,R,u)
        {
            int v=R.to[i],w=R.cost[i];
            if(dis2[u]+w<dis2[v])
            {
                dis2[v]=dis2[u]+w;
                if(!vis[v])
                {
                    vis[v]=1;
                    q.push(v);
                }
            }
        }
    }
}

int main()
{
    while(~scanf("%d%d%d",&n,&m,&C))
    {
        LL ans=-1;
        G.clear();R.clear();
        memset(sell,0,sizeof(sell));
        FOR(i,1,m)
        {
            int u,v,w;
            scanf("%d%d%d",&u,&v,&w);
            G.add(u,v,w);
            R.add(v,u,w);
        }
        memset(addable,0,sizeof(addable));
        int t,u,x;
        scanf("%d",&t);
        while(t--)
        {
            scanf("%d",&u);
            addable[u]=1;
        }
        solve1(1);
        solve2(n);
        scanf("%d",&t);
        while(t--)
        {
            scanf("%d%d",&u,&x);
            sell[u]=x;
        }
        // Enumerate the selling node: sell only if the fuel we can bring to i covers what the trip i -> n still needs.
        FOR(i,1,n)
            if(dis1[i]>=dis2[i])
                ans=max(ans,1LL*(dis1[i]-dis2[i])*sell[i]);
        printf("%lld\n",ans);
    }
    return 0;
}
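The program keeps reading test cases until EOF. Judging from the scanf calls, each case supplies n, m and C, then m edge lines u v w, then the number of free refueling stations followed by their node indices, and finally the number of selling points followed by (node, price) pairs; for each case one line is printed with the maximum profit, or -1 if node n is unreachable.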