A Noob's ACM Algorithms (5): The KMP Algorithm


I'm still a beginner with shallow knowledge; corrections from the experts are very welcome.

KMP: From Getting Started to Giving Up

Watch 左神's lecture ("why he wants to kill someone") for the full explanation.
Listen especially carefully to the part at 1h12m20s.
KMP has two parts: one compares the two strings against each other, and the other compares the pattern against itself (building the next array).
Roughly, this splits into the two diagrams below; for a detailed understanding, see 左神's emotionally charged explanation.
(figures 1 and 2 omitted)
That said, I think the part at 1:21:04 is easier to follow if the example is changed to "ababcababak".
(figure 3 omitted)

Code

#include<iostream>
#include<cstring>
using namespace std;

char t[1000100], s[1000100];   // s: text, t: pattern
int len1, len2, n[1000100];    // len1 = strlen(s), len2 = strlen(t), n: next array
// Scan text s with pattern t, printing the 1-indexed start of every match.
void KMP(char *s, char *t)
{
	for (int i = 0, j = -1; i < len1; i++) {
		while (j != -1 && t[j + 1] != s[i])  // mismatch: fall back along next
			j = n[j];
		if (t[j + 1] == s[i])                // match: extend by one character
			j++;
		if (j == len2 - 1) {                 // whole pattern matched
			cout << i - len2 + 2 << endl;    // 1-indexed start position
			j = n[j];                        // fall back to allow overlapping matches
		}
	}
}
// Build the next array: n[i] is the end index of the longest proper border
// of t[0..i], or -1 if there is none (i.e. border length minus one).
void getnext(char *t)
{
	n[0] = -1;
	for (int i = 1, j = -1; i < len2; i++) {
		while (j != -1 && t[i] != t[j + 1])
			j = n[j];
		if (t[i] == t[j + 1])
			j++;
		n[i] = j;
	}
}
int main()
{
	cin >> s >> t;   // text first, then pattern
	len1 = strlen(s);
	len2 = strlen(t);
	getnext(t);
	KMP(s, t);
	for (int i = 0; i < len2; i++)
		cout << n[i] + 1 << ' ';   // border length of each pattern prefix
	return 0;
}

Example Problem

G Oulipo
The French author Georges Perec (1936–1982) once wrote a book, La disparition, without the letter ‘e’. He was a member of the Oulipo group. A quote from the book:

Tout avait Pair normal, mais tout s’affirmait faux. Tout avait Fair normal, d’abord, puis surgissait l’inhumain, l’affolant. Il aurait voulu savoir où s’articulait l’association qui l’unissait au roman : stir son tapis, assaillant à tout instant son imagination, l’intuition d’un tabou, la vision d’un mal obscur, d’un quoi vacant, d’un non-dit : la vision, l’avision d’un oubli commandant tout, où s’abolissait la raison : tout avait l’air normal mais…

Perec would probably have scored high (or rather, low) in the following contest. People are asked to write a perhaps even meaningful text on some subject with as few occurrences of a given “word” as possible. Our task is to provide the jury with a program that counts these occurrences, in order to obtain a ranking of the competitors. These competitors often write very long texts with nonsense meaning; a sequence of 500,000 consecutive 'T’s is not unusual. And they never use spaces.

So we want to quickly find out how often a word, i.e., a given string, occurs in a text. More formally: given the alphabet {‘A’, ‘B’, ‘C’, …, ‘Z’} and two finite strings over that alphabet, a word W and a text T, count the number of occurrences of W in T. All the consecutive characters of W must exactly match consecutive characters of T. Occurrences may overlap.

Input
The first line of the input file contains a single number: the number of test cases to follow. Each test case has the following format:
One line with the word W, a string over {‘A’, ‘B’, ‘C’, …, ‘Z’}, with 1 ≤ |W| ≤ 10,000 (here |W| denotes the length of the string W).
One line with the text T, a string over {‘A’, ‘B’, ‘C’, …, ‘Z’}, with |W| ≤ |T| ≤ 1,000,000.

Output
For every test case in the input file, the output should contain a single number, on a single line: the number of occurrences of the word W in the text T.
Sample Input
3
BAPC
BAPC
AZA
AZAZAZA
VERDI
AVERDXIVYERDIAN
Sample Output
1
3
0

Solution
A KMP template problem. Don't read the input with cin, or it will TLE; use scanf for the strings (and a hand-rolled fast reader for the integer) instead.

#include<iostream>
#include<cstring>
#include<cstdio>
using namespace std;

typedef long long ll;
char t[1000100], s[1000100];   // t: word W (pattern), s: text T
int len1, len2, n[1000100];
// Fast integer input; cin is too slow for input of this size.
ll read()
{
	ll f = 1, x = 0;
	char c = getchar();
	while (c < '0' || c > '9') {
		if (c == '-')
			f = -1;
		c = getchar();
	}
	while (c >= '0' && c <= '9') {
		x = x * 10 + c - '0';
		c = getchar();
	}
	return x * f;
}
// Count the (possibly overlapping) occurrences of pattern t in text s.
int KMP(char *s, char *t)
{
	int ans = 0;
	for (int i = 0, j = -1; i < len1; i++) {
		while (j != -1 && t[j + 1] != s[i])
			j = n[j];
		if (t[j + 1] == s[i])
			j++;
		if (j == len2 - 1) {
			ans++;
			j = n[j];   // fall back so overlapping occurrences are counted
		}
	}
	return ans;
}
// Build the next array for pattern t (same as in the template above).
void getnext(char *t)
{
	n[0] = -1;
	for (int i = 1, j = -1; i < len2; i++) {
		while (j != -1 && t[i] != t[j + 1])
			j = n[j];
		if (t[i] == t[j + 1])
			j++;
		n[i] = j;
	}
}
int main()
{
	int m = read();
	while (m--) {
		scanf("%s%s", t, s);   // word W first, then text T (pass t/s, not &t/&s)
		len1 = strlen(s);
		len2 = strlen(t);
		getnext(t);
		printf("%d\n", KMP(s, t));
	}
	return 0;
}