B. Building Company

Problem - B - Codeforces

 

Idea: This is similar in spirit to a scheduling problem in operating systems. We first complete every project that can currently be completed, add the workers gained from those projects' rewards, and then check whether any new projects have become completable, repeating until no new project can be finished. The key observation is that once one of a project's requirements is satisfied, it stays satisfied (worker counts never decrease), so we never need to re-check it. Based on this, we build one min-heap per skill storing, for each project that needs this skill, how many workers it requires; we also keep, for each project, a counter of how many of its requirements are still unmet. Whenever the available count of a skill covers the smallest requirement in that skill's heap, we pop it and decrement the corresponding project's counter. When a project's counter reaches zero, we add its reward workers and re-check whether the skills that just grew now satisfy some other project's requirement, repeating until nothing changes. Finally the answer is simply the number of projects whose requirements were all satisfied.

// Problem: B. Building Company
// Contest: Codeforces - The 13th Shandong ICPC Provincial Collegiate Programming Contest
// URL: https://codeforces.com/gym/104417/problem/B
// Memory Limit: 1024 MB
// Time Limit: 2000 ms

#include<iostream>
#include<cstring>
#include<string>
#include<sstream>
#include<cmath>
#include<cstdio>
#include<algorithm>
#include<queue>
#include<map>
#include<stack>
#include<vector> 
#include<set>
#include<unordered_map>
#include<ctime>
#include<cstdlib>
#define fi first
#define se second
#define i128 __int128
using namespace std;
typedef long long ll;
typedef double db;
typedef pair<int,int> PII;
typedef pair<int,pair<int,int> > PIII;
const double eps=1e-7;
const int N=2e5+7 ,M=5e5+7, INF=0x3f3f3f3f,mod=1e9+7,mod1=998244353;
const long long int llINF=0x3f3f3f3f3f3f3f3f;
inline ll read() {ll x=0,f=1;char c=getchar();while(c<'0'||c>'9') {if(c=='-') f=-1;c=getchar();}
while(c>='0'&&c<='9') {x=(ll)x*10+c-'0';c=getchar();} return x*f;}
inline void write(ll x) {if(x < 0) {putchar('-'); x = -x;}if(x >= 10) write(x / 10);putchar(x % 10 + '0');}
inline void write(ll x,char ch) {write(x);putchar(ch);}
void stin() {freopen("in_put.txt","r",stdin);freopen("my_out_put.txt","w",stdout);}
bool cmp0(int a,int b) {return a>b;}
template<typename T> T gcd(T a,T b) {return b==0?a:gcd(b,a%b);}
template<typename T> T lcm(T a,T b) {return a*b/gcd(a,b);}
void hack() {printf("\n----------------------------------\n");}

int T,hackT;
int n,m,k;
int g;
ll vis[N];                 // vis[s]: workers currently available for (compressed) skill s
vector<PII> s[N];          // s[i]: reward list of project i as (skill, count) pairs
int po[N];                 // po[i]: number of requirements of project i not yet met
priority_queue<PII,vector<PII>,greater<PII> > q[N];  // q[s]: min-heap of (needed count, project) for skill s

void solve() {
	g=read();
	
	// Read the initial workforce; compress skill ids with a map.
	int timestemp=0;
	map<int,int> st;
	for(int i=1;i<=g;i++) {
		int a=read(),b=read();
		if(st[a]==0) st[a]=++timestemp;
		vis[st[a]]+=b;
	}
	
	n=read();
	
	// Read each project: push its requirements into the per-skill heaps
	// and store its rewards.
	for(int i=1;i<=n;i++) {
		int mi=read();
		po[i]=mi;
		for(int j=1;j<=mi;j++) {
			int a=read(),b=read();
			if(st[a]==0) st[a]=++timestemp;
			q[st[a]].push({b,i});
		}
		int ki=read();
		for(int j=1;j<=ki;j++) {
			int a=read(),b=read();
			s[i].push_back({a,b});
		}
	}
	
	queue<int> sy,ti;   // sy: skills whose count just grew; ti: projects just completed
	for(int i=1;i<=n;i++) if(po[i]==0) ti.push(i);
	
	// Initial relaxation: pop every requirement already covered by the
	// starting workforce.
	for(int i=1;i<=200000;i++) {
		while(q[i].size()!=0&&q[i].top().fi<=vis[i]) {
			po[q[i].top().se]--;
			if(po[q[i].top().se]==0) ti.push(q[i].top().se);
			q[i].pop();
		}
	}
	
	// Chain reaction: completed projects add reward workers, and the skills
	// that grew may in turn complete more projects.
	while(sy.size()!=0||ti.size()!=0) {
		while(ti.size()!=0) {
			int t=ti.front();
			ti.pop();
			for(int j=0;j<s[t].size();j++) {
				int a=s[t][j].fi,b=s[t][j].se;
				if(st[a]==0) st[a]=++timestemp;
				vis[st[a]]+=b;
				sy.push(st[a]);
			}
		}
		while(sy.size()!=0) {
			int t=sy.front();
			sy.pop();
			
			while(q[t].size()!=0&&q[t].top().fi<=vis[t]) {
				po[q[t].top().se]--;
				if(po[q[t].top().se]==0) ti.push(q[t].top().se);
				q[t].pop();
			}
		}
	}
	
	int res=0;
	for(int i=1;i<=n;i++) if(po[i]==0) res++;   // count projects with every requirement met
	printf("%d\n",res);
}   

int main() {
    // init();
    // stin();

    // scanf("%d",&T);
    T=1; 
    while(T--) hackT++,solve();
    
    return 0;       
}          
