O - Reposts

One day Polycarp published a funny picture in a social network making a poll about the color of his handle. Many of his friends started reposting Polycarp’s joke to their news feed. Some of them reposted the reposts and so on.

These events are given as a sequence of strings “name1 reposted name2”, where name1 is the name of the person who reposted the joke, and name2 is the name of the person from whose news feed the joke was reposted. It is guaranteed that for each string “name1 reposted name2” user “name1” didn’t have the joke in his feed yet, and “name2” already had it in his feed by the moment of repost. Polycarp was registered as “Polycarp” and initially the joke was only in his feed.

Polycarp measures the popularity of the joke as the length of the largest repost chain. A chain is counted in people and includes Polycarp himself, so a single repost already gives a chain of length 2. Print the popularity of Polycarp’s joke.

Input
The first line of the input contains integer n (1 ≤ n ≤ 200) — the number of reposts. Next follow the reposts in the order they were made. Each of them is written on a single line and looks as “name1 reposted name2”. All the names in the input consist of lowercase or uppercase English letters and/or digits and have lengths from 2 to 24 characters, inclusive.

We know that the user names are case-insensitive, that is, two names that only differ in the letter case correspond to the same social network user.

Output
Print a single integer — the maximum length of a repost chain.
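
For example (an illustrative test of my own, not an official sample): given the input

2
vasya reposted Polycarp
petya reposted vasya

the longest chain is Polycarp → vasya → petya, so the answer is 3.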

Solution

Since n ≤ 200, the constraints are tiny and a direct search is enough: lowercase every name, start a chain from each repost taken directly from Polycarp's feed, and recursively follow reposts whose source matches the current reposter, tracking the longest chain seen.

#include <iostream>
#include <algorithm>
#include <string>
#include <cctype>
using namespace std;

pair<string, string> reposts[250];  // reposts[i] = {name1, name2}, lowercased
bool used[250];                     // whether repost i is already on the current chain
int n, best;                        // best = longest chain found so far

// User names are case-insensitive, so normalize everything to lowercase.
string toLowerCase(string str)
{
    for (char &c : str)
        c = (char)tolower((unsigned char)c);
    return str;
}

// Depth-first search: repost s is the last link of the current chain,
// and `len` people (including Polycarp) are on the chain so far.
void dfs(int s, int len)
{
    best = max(best, len);
    for (int i = 0; i < n; i++)
        if (!used[i] && reposts[i].second == reposts[s].first)
        {
            // Repost i extends the chain: its source is the reposter of s.
            used[i] = true;
            dfs(i, len + 1);
            used[i] = false;  // backtrack
        }
}

int main()
{
    cin >> n;
    string name1, verb, name2;
    for (int i = 0; i < n; i++)
    {
        cin >> name1 >> verb >> name2;  // verb is always the word "reposted"
        reposts[i].first = toLowerCase(name1);
        reposts[i].second = toLowerCase(name2);
    }
    // Each repost taken directly from Polycarp's feed starts a chain of
    // length 2: Polycarp plus the first reposter.
    for (int i = 0; i < n; i++)
        if (reposts[i].second == "polycarp")
        {
            used[i] = true;
            dfs(i, 2);
            used[i] = false;
        }
    cout << best << endl;
}
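
Incidentally, the search is not even necessary. The reposts are given in the order they were made, and name2 is guaranteed to have the joke already, so name2's chain depth is always known by the time its line is read. One pass with a map from user to chain depth therefore solves the problem. This is an alternative sketch of my own, not part of the solution above:

#include <iostream>
#include <string>
#include <map>
#include <algorithm>
#include <cctype>
using namespace std;

int main()
{
    int n;
    cin >> n;
    map<string, int> depth;  // longest chain ending at each user, counted in people
    depth["polycarp"] = 1;   // Polycarp alone is a chain of length 1
    int best = 1;
    for (int i = 0; i < n; i++)
    {
        string name1, verb, name2;
        cin >> name1 >> verb >> name2;
        // Names are case-insensitive, so normalize before the lookup.
        transform(name1.begin(), name1.end(), name1.begin(), ::tolower);
        transform(name2.begin(), name2.end(), name2.begin(), ::tolower);
        depth[name1] = depth[name2] + 1;  // name2's depth is already known
        best = max(best, depth[name1]);
    }
    cout << best << endl;
}

Each repost is processed once, so this runs in O(n log n) including the map lookups and needs no recursion; for n ≤ 200 either approach finishes instantly.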
