Building a Data Analysis Platform with Streamlit

In today's data-driven world, the ability to quickly build intuitive, interactive data analysis tools matters more and more. Streamlit, a Python library with a transformative approach, is changing how data scientists and analysts present their work. This article walks through building a feature-rich, professional-grade data analysis platform with Streamlit.

1. Introduction to Streamlit

Streamlit is an open-source Python library designed for data science and machine learning projects. Its core idea is to simplify the development of data apps so that data professionals can focus on the data itself rather than on the plumbing of web development.

Key features (a minimal example follows the list):

  • A concise API
  • Hot reloading while you develop
  • A rich set of built-in components
  • Support for custom components
  • Easy deployment
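To make that API concrete, here is a minimal sketch of a complete app (the file name app.py is just a suggestion); save it and launch it with streamlit run app.py:

import streamlit as st
import numpy as np
import pandas as pd

st.title('Hello, Streamlit')
# Every widget interaction re-runs the script from top to bottom
n = st.slider('Number of points', 10, 200, 50)
# st.line_chart is one of the built-in charting components
st.line_chart(pd.DataFrame(np.random.randn(n, 2), columns=['a', 'b']))
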
2. Building a Basic Data Analysis Platform

Let's start with a basic data analysis platform and build up more advanced functionality step by step. Save the code as a script (for example app.py) and run it with streamlit run app.py.

import streamlit as st
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns

st.title('数据探索与分析平台')

# File upload
uploaded_file = st.file_uploader("选择CSV文件", type="csv")
if uploaded_file is not None:
    data = pd.read_csv(uploaded_file)
    st.write("数据预览:")
    st.write(data.head())
    
    # Basic summary statistics
    st.subheader("基本统计信息")
    st.write(data.describe())
    
    # Choose a column to analyse
    numeric_columns = data.select_dtypes(include=[np.number]).columns
    selected_column = st.selectbox("选择要分析的列", numeric_columns)
    
    # Plot a histogram with a KDE overlay
    st.subheader(f"{selected_column} 的分布")
    fig, ax = plt.subplots()
    sns.histplot(data[selected_column], kde=True, ax=ax)
    st.pyplot(fig)
    
    # Correlation heatmap
    st.subheader("相关性热力图")
    corr = data[numeric_columns].corr()
    fig, ax = plt.subplots(figsize=(10, 8))
    sns.heatmap(corr, annot=True, cmap='coolwarm', ax=ax)
    st.pyplot(fig)

3. Adding Advanced Analysis Features

Next, we can add some more advanced analyses, such as time series decomposition and forecasting.

import plotly.express as px
from statsmodels.tsa.seasonal import seasonal_decompose
from prophet import Prophet

# Time series analysis
if 'date' in data.columns:
    st.subheader("时间序列分析")
    data['date'] = pd.to_datetime(data['date'])
    time_column = st.selectbox("选择时间序列数据列", numeric_columns)
    
    fig = px.line(data, x='date', y=time_column, title=f'{time_column}随时间的变化')
    st.plotly_chart(fig)
    
    # Seasonal decomposition (assumes a roughly 30-step seasonal period)
    decomposition = seasonal_decompose(data[time_column], model='additive', period=30)
    fig, (ax1, ax2, ax3, ax4) = plt.subplots(4, 1, figsize=(12, 16))
    decomposition.observed.plot(ax=ax1)
    ax1.set_title('Observed')
    decomposition.trend.plot(ax=ax2)
    ax2.set_title('Trend')
    decomposition.seasonal.plot(ax=ax3)
    ax3.set_title('Seasonal')
    decomposition.resid.plot(ax=ax4)
    ax4.set_title('Residual')
    st.pyplot(fig)
    
    # Forecast with Prophet
    st.subheader("时间序列预测 (Prophet)")
    forecast_days = st.slider("预测天数", 1, 365, 30)
    df_prophet = data[['date', time_column]].rename(columns={'date': 'ds', time_column: 'y'})
    m = Prophet()
    m.fit(df_prophet)
    future = m.make_future_dataframe(periods=forecast_days)
    forecast = m.predict(future)
    fig = m.plot(forecast)
    st.pyplot(fig)
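Fitting Prophet is the slowest step here, and Streamlit re-runs the whole script on every interaction. One way to keep the app responsive is to memoize the fit; a minimal sketch using st.cache_resource (the helper name fit_prophet is illustrative, not part of the code above):

@st.cache_resource
def fit_prophet(df_prophet):
    # Re-fits only when the input data changes; otherwise the cached model is reused
    m = Prophet()
    m.fit(df_prophet)
    return m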

4. Adding a Machine Learning Model

We can also integrate a simple machine learning model, such as a classifier or a regressor.

from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, classification_report

st.subheader("机器学习模型 - 随机森林分类器")

target_column = st.selectbox("选择目标变量", data.columns)
feature_columns = st.multiselect("选择特征变量", [col for col in data.columns if col != target_column])

if len(feature_columns) > 0:
    X = data[feature_columns]
    y = data[target_column]
    
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
    
    model = RandomForestClassifier(n_estimators=100, random_state=42)
    model.fit(X_train, y_train)
    
    y_pred = model.predict(X_test)
    
    accuracy = accuracy_score(y_test, y_pred)
    st.write(f"模型准确率: {accuracy:.2f}")
    
    st.write("分类报告:")
    st.code(classification_report(y_test, y_pred))
    
    # Feature importances
    feature_importance = pd.DataFrame({'feature': feature_columns, 'importance': model.feature_importances_})
    feature_importance = feature_importance.sort_values('importance', ascending=False)
    
    fig, ax = plt.subplots()
    sns.barplot(x='importance', y='feature', data=feature_importance, ax=ax)
    ax.set_title("特征重要性")
    st.pyplot(fig)

5. Improving the User Experience

Finally, we can add a few features to polish the user experience:

# Add a sidebar
st.sidebar.title("配置选项")
chart_type = st.sidebar.selectbox("选择图表类型", ["折线图", "柱状图", "散点图"])

# Add an interactive data table (highlight each numeric column's maximum)
st.subheader("交互式数据表格")
st.dataframe(data.style.highlight_max(axis=0, subset=numeric_columns))

# Add a download button (st.cache is deprecated; st.cache_data is the current API)
@st.cache_data
def convert_df(df):
    return df.to_csv().encode('utf-8')

csv = convert_df(data)
st.download_button(
    label="下载数据为CSV",
    data=csv,
    file_name='data.csv',
    mime='text/csv',
)

# Add an About section
if st.sidebar.checkbox("显示About信息"):
    st.sidebar.info("这是一个使用Streamlit构建的数据分析平台演示。")
    st.sidebar.info("作者: Your Name")
    st.sidebar.info("版本: 1.0")

With Streamlit, we can quickly build a feature-rich data analysis platform. From basic data exploration to advanced time series analysis and machine learning models, Streamlit handles it all with ease. A platform like this not only helps data scientists analyse data more effectively, it also makes it easy to share findings with teammates and stakeholders.

As Streamlit continues to evolve, we can expect to see ever more inventive data applications. Whether you are new to data science or a seasoned analyst, Streamlit is a powerful tool worth learning.

Start building your own data analysis platform with Streamlit, and see its appeal for yourself!

Notes:

  1. Make sure all required libraries are installed (streamlit, pandas, numpy, matplotlib, seaborn, plotly, statsmodels, prophet, scikit-learn).
  2. This example assumes your dataset contains a date column and numeric columns. You may need to adapt the code to your actual data.
  3. The Prophet model can take a while to fit and predict, especially on large datasets.
  4. Always handle potential errors and exceptions to keep the app stable; a small sketch follows this list.
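As one hedged illustration of point 4, file loading can be wrapped in a small helper (load_csv is a hypothetical name) so that a malformed upload shows a friendly message instead of a stack trace:

import pandas as pd
import streamlit as st

def load_csv(uploaded_file):
    # Return a DataFrame, or None when the upload cannot be parsed
    try:
        return pd.read_csv(uploaded_file)
    except (pd.errors.ParserError, UnicodeDecodeError) as exc:
        st.error(f"无法解析上传的文件: {exc}")
        return None

uploaded_file = st.file_uploader("选择CSV文件", type="csv")
if uploaded_file is not None:
    data = load_csv(uploaded_file)
    if data is not None:
        st.write(data.head())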


The complete code is as follows:

import base64
import networkx as nx
from scipy.cluster import hierarchy
from sklearn.impute import KNNImputer
from sklearn.preprocessing import MinMaxScaler, RobustScaler
from sklearn.experimental import enable_iterative_imputer
from sklearn.impute import IterativeImputer
import re
from sklearn.cluster import KMeans, DBSCAN, AgglomerativeClustering
from sklearn.metrics import silhouette_score, calinski_harabasz_score, davies_bouldin_score
from io import BytesIO
from statsmodels.tsa.seasonal import seasonal_decompose
import statsmodels.api as sm
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression, Ridge, Lasso, ElasticNet
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import mean_squared_error, r2_score
from scipy import stats
import streamlit as st
import pandas as pd
import numpy as np
import plotly.express as px
import plotly.graph_objects as go
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA, KernelPCA, TruncatedSVD
from sklearn.manifold import TSNE, MDS, Isomap
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from umap import UMAP
import seaborn as sns
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import dendrogram, linkage
from langchain_community.llms import Tongyi
# Page configuration
st.set_page_config(page_title="Excel数据分析器", layout="wide")


def create_tongyi_chat_module():
    # 初始化Tongyi模型
    tongyi = Tongyi()

    # 初始化对话历史
    if 'conversation_history' not in st.session_state:
        st.session_state.conversation_history = []

    def chat_with_tongyi(user_input):
        # 将用户输入添加到对话历史中
        st.session_state.conversation_history.append(f"用户: {user_input}")

        # 创建对话上下文
        context = "\n".join(st.session_state.conversation_history)

        # 添加模型提示
        prompt = f"{context}\n系统:"

        # 使用Tongyi生成回复
        result = tongyi.generate([prompt])
        response = result.generations[0][0].text.strip()

        # 将系统回复添加到对话历史中
        st.session_state.conversation_history.append(f"系统: {response}")

        return response

    # Streamlit界面
    st.title("Tongyi 聊天助手")

    # 显示对话历史
    for message in st.session_state.conversation_history:
        st.text(message)

    # 用户输入
    user_input = st.text_input("请输入您的问题:")

    if st.button("发送"):
        if user_input:
            response = chat_with_tongyi(user_input)
            st.text(f"系统: {response}")
        else:
            st.warning("请输入问题后再发送。")

    # 清空对话历史的按钮
    if st.button("清空对话历史"):
        st.session_state.conversation_history = []
        st.success("对话历史已清空!")
# Define the main function
def main():
    st.title("数据分析和知识问答系统设计")
    create_tongyi_chat_module()
    # File upload
    uploaded_file = st.file_uploader("选择一个Excel文件", type=["xlsx", "xls"])
    if uploaded_file is not None:
        # Read the Excel file
        df = pd.read_excel(uploaded_file)
        # Show a data preview
        st.subheader("数据预览")
        st.dataframe(df.head())

        # Show basic summary statistics
        st.subheader("基本统计信息")
        st.write(df.describe())

        # Visualization options
        st.sidebar.subheader("数据可视化选项")
        visualize_option = st.sidebar.selectbox("选择可视化类型",
                                                ["柱状图", "散点图", "折线图", "饼图", "热力图", "箱线图", "小提琴图", "3D散点图"])
        if visualize_option == "柱状图":
            bar_chart(df)
        elif visualize_option == "散点图":
            scatter_plot(df)
        elif visualize_option == "折线图":
            line_chart(df)
        elif visualize_option == "饼图":
            pie_chart(df)
        elif visualize_option == "热力图":
            heatmap(df)
        elif visualize_option == "箱线图":
            box_plot(df)
        elif visualize_option == "小提琴图":
            violin_plot(df)
        elif visualize_option == "3D散点图":
            scatter_3d(df)

        # Analysis options
        st.sidebar.subheader("数据分析选项")
        analysis_option = st.sidebar.selectbox("选择分析类型",
                                               ["列分布", "相关性分析", "分组统计", "时间序列分析", "高级过滤", "数据清洗",
                                                "假设检验", "聚类分析", "主成分分析", "回归分析"])

        if analysis_option == "列分布":
            column_distribution(df)
        elif analysis_option == "相关性分析":
            correlation_analysis(df)
        elif analysis_option == "分组统计":
            group_statistics(df)
        elif analysis_option == "时间序列分析":
            time_series_analysis(df)
        elif analysis_option == "高级过滤":
            advanced_filter(df)
        elif analysis_option == "数据清洗":
            data_cleaning(df)
        elif analysis_option == "假设检验":
            hypothesis_testing(df)
        elif analysis_option == "聚类分析":
            cluster_analysis(df)
        elif analysis_option == "主成分分析":
            pca_analysis(df)
        elif analysis_option == "回归分析":
            regression_analysis(df)

        # Transformation options
        st.sidebar.subheader("数据转换选项")
        transform_option = st.sidebar.selectbox("选择转换类型",
                                                ["标准化", "对数转换", "一热编码", "二值化"])

        if transform_option == "标准化":
            df = standardize_data(df)
        elif transform_option == "对数转换":
            df = log_transform(df)
        elif transform_option == "一热编码":
            df = one_hot_encode(df)
        elif transform_option == "二值化":
            df = binarize_data(df)
        # Download options
        st.subheader("下载处理后的数据")
        download_processed_data(df)
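# The transform helpers called in main() (standardize_data, log_transform,
# one_hot_encode, binarize_data, download_processed_data) are not shown in
# the original listing; the following are minimal sketches of what they
# might look like, assuming transforms apply to numeric columns only:
def standardize_data(df):
    numeric_cols = df.select_dtypes(include=[np.number]).columns
    df[numeric_cols] = StandardScaler().fit_transform(df[numeric_cols])
    return df

def log_transform(df):
    numeric_cols = df.select_dtypes(include=[np.number]).columns
    # log1p tolerates zeros; negative values will produce NaN
    df[numeric_cols] = np.log1p(df[numeric_cols])
    return df

def one_hot_encode(df):
    return pd.get_dummies(df, columns=df.select_dtypes(include=['object']).columns)

def binarize_data(df):
    numeric_cols = df.select_dtypes(include=[np.number]).columns
    # Assumed behaviour: threshold each numeric column at its median
    df[numeric_cols] = (df[numeric_cols] > df[numeric_cols].median()).astype(int)
    return df

def download_processed_data(df):
    csv = df.to_csv(index=False)
    st.download_button("下载CSV文件", data=csv, file_name="processed_data.csv", mime="text/csv")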
# Bar chart
def bar_chart(df):
    st.subheader("柱状图分析")
    column = st.selectbox("选择要分析的列", df.columns.tolist(), key="bar_chart_column_select")
    # 获取值计数并排序
    value_counts = df[column].value_counts().reset_index()
    value_counts.columns = ['category', 'count']
    value_counts['percentage'] = value_counts['count'] / value_counts['count'].sum() * 100
    # 限制显示的类别数量
    top_n = st.slider("显示前N个类别", min_value=5, max_value=min(20, len(value_counts)), value=10, key="bar_chart_top_n")
    value_counts = value_counts.head(top_n)
    # 创建柱状图
    fig = go.Figure()
    fig.add_trace(go.Bar(
        x=value_counts['category'],
        y=value_counts['count'],
        text=value_counts['percentage'].round(2).astype(str) + '%',
        textposition='outside',
        hovertemplate='类别: %{x}<br>数量: %{y}<br>百分比: %{text}<extra></extra>',
        marker_color='skyblue',
        marker_line_color='rgb(8,48,107)',
        marker_line_width=1.5,
        opacity=0.6
    ))
    # 更新布局
    fig.update_layout(
        title={
            'text': f"{column}的分布",
            'y': 0.95,
            'x': 0.5,
            'xanchor': 'center',
            'yanchor': 'top'
        },
        xaxis_title="类别",
        yaxis_title="数量",
        bargap=0.2,
        bargroupgap=0.1,
        plot_bgcolor='rgba(0,0,0,0)',
        yaxis=dict(gridcolor='lightgrey')
    )
    # 根据类别数量调整x轴标签
    if len(value_counts) > 10:
        fig.update_xaxes(tickangle=45)
    # 显示图表
    st.plotly_chart(fig, use_container_width=True)
    # 显示数据表格
    st.write("数据明细:")
    st.dataframe(value_counts)
# Scatter plot
def scatter_plot(df):
    st.subheader("散点图")
    x_axis = st.selectbox("选择X轴列", df.columns.tolist())
    y_axis = st.selectbox("选择Y轴列", df.columns.tolist())
    fig = px.scatter(df, x=x_axis, y=y_axis)
    st.plotly_chart(fig)


# Line chart
def line_chart(df):
    st.subheader("折线图")
    x_axis = st.selectbox("选择X轴列", df.columns.tolist())
    y_axis = st.selectbox("选择Y轴列", df.columns.tolist())
    fig = px.line(df, x=x_axis, y=y_axis)
    st.plotly_chart(fig)
# Pie chart
def pie_chart(df):
    st.subheader("饼图")
    column = st.selectbox("选择要分析的列", df.columns.tolist())
    fig = px.pie(df, names=column)
    st.plotly_chart(fig)


# Heatmap
def heatmap(df):
    st.subheader("热力图")
    numeric_df = df.select_dtypes(include=[np.number])  # corr() needs numeric columns
    fig = px.imshow(numeric_df.corr(), text_auto=True, aspect="auto")
    st.plotly_chart(fig)


# Box plot
def box_plot(df):
    st.subheader("箱线图")
    column = st.selectbox("选择要分析的列", df.select_dtypes(include=['float64', 'int64']).columns)
    fig = px.box(df, y=column)
    st.plotly_chart(fig)


# Violin plot
def violin_plot(df):
    st.subheader("小提琴图")
    column = st.selectbox("选择要分析的列", df.select_dtypes(include=['float64', 'int64']).columns)
    fig = px.violin(df, y=column)
    st.plotly_chart(fig)


# 3D scatter plot
def scatter_3d(df):
    st.subheader("3D散点图")
    x_axis = st.selectbox("选择X轴列", df.select_dtypes(include=['float64', 'int64']).columns)
    y_axis = st.selectbox("选择Y轴列", df.select_dtypes(include=['float64', 'int64']).columns)
    z_axis = st.selectbox("选择Z轴列", df.select_dtypes(include=['float64', 'int64']).columns)
    fig = px.scatter_3d(df, x=x_axis, y=y_axis, z=z_axis)
    st.plotly_chart(fig)


# Column distribution analysis
def column_distribution(df):
    st.subheader("列分布分析")
    column = st.selectbox("选择要分析的列", df.columns.tolist())
    if df[column].dtype in ['float64', 'int64']:
        fig = px.histogram(df, x=column, nbins=30, marginal="box")
        st.plotly_chart(fig)
        st.write(f"{column}的统计摘要:")
        st.write(df[column].describe())
    else:
        counts = df[column].value_counts().reset_index()
        counts.columns = [column, 'count']  # explicit names work across pandas versions
        fig = px.bar(counts, x=column, y='count')
        st.plotly_chart(fig)
        st.write(f"{column}的值计数:")
        st.write(df[column].value_counts())


# Correlation analysis
def correlation_analysis(df):
    df = df.replace([np.inf, -np.inf], np.nan)
    st.subheader("高级相关性分析")

    # 数据预处理
    def preprocess_data(data):
        # 移除非数值列
        numeric_df = data.select_dtypes(include=[np.number])
        # 处理缺失值
        numeric_df = numeric_df.fillna(numeric_df.mean())
        return numeric_df

    # 计算相关性矩阵
    def compute_correlation(data, method='pearson'):
        return data.corr(method=method)

    # 绘制热力图
    def plot_heatmap(corr_matrix, colorscale='Viridis'):
        fig = px.imshow(corr_matrix,
                        text_auto=True,
                        aspect="auto",
                        color_continuous_scale=colorscale)
        fig.update_layout(title="相关性热力图")
        return fig

    # 绘制散点图
    def plot_scatter(data, x, y):
        fig = px.scatter(data, x=x, y=y, trendline="ols")
        fig.update_layout(title=f"{x} vs {y} 散点图")
        return fig

    # 绘制配对图
    def plot_pairplot(data):
        pair_plot = sns.pairplot(data)
        st.pyplot(pair_plot.fig)

    # 计算和显示描述性统计
    def show_descriptive_stats(data):
        st.write("描述性统计:")
        st.write(data.describe())

    # 计算和显示偏度和峰度
    def show_skewness_kurtosis(data):
        st.write("偏度和峰度:")
        skew = data.skew()
        kurt = data.kurtosis()
        stats_df = pd.DataFrame({'偏度': skew, '峰度': kurt})
        st.write(stats_df)

    # 执行假设检验
    def perform_hypothesis_test(data, x, y):
        corr, p_value = stats.pearsonr(data[x], data[y])
        st.write(f"{x} 和 {y} 之间的皮尔逊相关系数: {corr:.4f}")
        st.write(f"P值: {p_value:.4f}")
        if p_value < 0.05:
            st.write("在0.05的显著性水平下,存在显著相关性。")
        else:
            st.write("在0.05的显著性水平下,不存在显著相关性。")

    # 主要分析流程
    preprocessed_df = preprocess_data(df)

    # 选择相关性计算方法
    correlation_method = st.selectbox(
        "选择相关性计算方法",
        ["pearson", "spearman", "kendall"]
    )

    # 计算相关性矩阵
    correlation_matrix = compute_correlation(preprocessed_df, method=correlation_method)

    # 相关性热力图
    st.subheader("相关性热力图")
    colorscale = st.selectbox(
        "选择颜色主题",
        ["Viridis", "Plasma", "Inferno", "Magma", "Cividis"]
    )
    heatmap_fig = plot_heatmap(correlation_matrix, colorscale)
    st.plotly_chart(heatmap_fig)

    # 散点图
    st.subheader("散点图分析")
    x_column = st.selectbox("选择 X 轴变量", preprocessed_df.columns)
    y_column = st.selectbox("选择 Y 轴变量", preprocessed_df.columns)
    scatter_fig = plot_scatter(preprocessed_df, x_column, y_column)
    st.plotly_chart(scatter_fig)


    # 描述性统计
    if st.checkbox("显示描述性统计"):
        show_descriptive_stats(preprocessed_df)

    # 偏度和峰度
    if st.checkbox("显示偏度和峰度"):
        show_skewness_kurtosis(preprocessed_df)

    # 假设检验
    if st.checkbox("执行假设检验"):
        st.subheader("假设检验")
        perform_hypothesis_test(preprocessed_df, x_column, y_column)

    # 高相关性对的识别
    st.subheader("高相关性变量对")
    correlation_threshold = st.slider("选择相关性阈值", 0.0, 1.0, 0.8, 0.05)
    high_corr_pairs = []
    for i in range(len(correlation_matrix.columns)):
        for j in range(i):
            if abs(correlation_matrix.iloc[i, j]) > correlation_threshold:
                high_corr_pairs.append(
                    (correlation_matrix.columns[i], correlation_matrix.columns[j], correlation_matrix.iloc[i, j]))

    if high_corr_pairs:
        st.write(f"相关性大于 {correlation_threshold} 的变量对:")
        for pair in high_corr_pairs:
            st.write(f"{pair[0]} - {pair[1]}: {pair[2]:.4f}")
    else:
        st.write(f"没有找到相关性大于 {correlation_threshold} 的变量对。")

    # 相关性网络图
    if st.checkbox("显示相关性网络图"):
        st.subheader("相关性网络图")
        network_threshold = st.slider("选择网络图相关性阈值", 0.0, 1.0, 0.5, 0.05)

        G = nx.Graph()
        for i in range(len(correlation_matrix.columns)):
            for j in range(i):
                if abs(correlation_matrix.iloc[i, j]) > network_threshold:
                    G.add_edge(correlation_matrix.columns[i], correlation_matrix.columns[j],
                               weight=abs(correlation_matrix.iloc[i, j]))

        pos = nx.spring_layout(G)
        edge_x = []
        edge_y = []
        for edge in G.edges():
            x0, y0 = pos[edge[0]]
            x1, y1 = pos[edge[1]]
            edge_x.extend([x0, x1, None])
            edge_y.extend([y0, y1, None])

        edge_trace = go.Scatter(
            x=edge_x, y=edge_y,
            line=dict(width=0.5, color='#888'),
            hoverinfo='none',
            mode='lines')

        node_x = []
        node_y = []
        for node in G.nodes():
            x, y = pos[node]
            node_x.append(x)
            node_y.append(y)

        node_trace = go.Scatter(
            x=node_x, y=node_y,
            mode='markers',
            hoverinfo='text',
            marker=dict(
                showscale=True,
                colorscale='YlGnBu',
                reversescale=True,
                color=[],
                size=10,
                colorbar=dict(
                    thickness=15,
                    title=dict(text='节点连接数', side='right'),
                    xanchor='left'
                ),
                line_width=2))

        node_adjacencies = []
        node_text = []
        for node, adjacencies in enumerate(G.adjacency()):
            node_adjacencies.append(len(adjacencies[1]))
            node_text.append(f'{adjacencies[0]}<br># of connections: {len(adjacencies[1])}')

        node_trace.marker.color = node_adjacencies
        node_trace.text = node_text

        fig = go.Figure(data=[edge_trace, node_trace],
                        layout=go.Layout(
                            title='相关性网络图',
                            title_font_size=16,
                            showlegend=False,
                            hovermode='closest',
                            margin=dict(b=20, l=5, r=5, t=40),
                            annotations=[dict(
                                text="Python code: <a href='https://plotly.com/ipython-notebooks/network-graphs/'> https://plotly.com/ipython-notebooks/network-graphs/</a>",
                                showarrow=False,
                                xref="paper", yref="paper",
                                x=0.005, y=-0.002)],
                            xaxis=dict(showgrid=False, zeroline=False, showticklabels=False),
                            yaxis=dict(showgrid=False, zeroline=False, showticklabels=False))
                        )
        st.plotly_chart(fig)

    # 相关性随时间变化分析(假设数据集中有时间列)
    if 'date' in df.columns or 'time' in df.columns:
        st.subheader("相关性随时间变化分析")
        time_column = 'date' if 'date' in df.columns else 'time'
        df[time_column] = pd.to_datetime(df[time_column])

        time_window = st.selectbox("选择时间窗口", ["天", "周", "月", "季度", "年"])
        feature1 = st.selectbox("选择第一个特征", preprocessed_df.columns)
        feature2 = st.selectbox("选择第二个特征", preprocessed_df.columns)

        if time_window == "天":
            df_grouped = df.groupby(df[time_column].dt.date)
        elif time_window == "周":
            df_grouped = df.groupby(df[time_column].dt.to_period('W'))
        elif time_window == "月":
            df_grouped = df.groupby(df[time_column].dt.to_period('M'))
        elif time_window == "季度":
            df_grouped = df.groupby(df[time_column].dt.to_period('Q'))
        else:
            df_grouped = df.groupby(df[time_column].dt.year)

        correlations = df_grouped.apply(lambda x: x[feature1].corr(x[feature2]))

        fig = px.line(x=correlations.index, y=correlations.values,
                      labels={'x': '时间', 'y': '相关系数'},
                      title=f'{feature1} 和 {feature2} 的相关性随{time_window}变化')
        st.plotly_chart(fig)

    # 相关性矩阵的层次聚类
    if st.checkbox("显示相关性矩阵的层次聚类"):
        st.subheader("相关性矩阵的层次聚类")

        linkage = hierarchy.linkage(correlation_matrix, method='ward')

        fig, ax = plt.subplots(figsize=(10, 10))
        dendro = hierarchy.dendrogram(linkage, labels=correlation_matrix.columns, ax=ax)
        ax.set_title("特征的层次聚类树状图")
        st.pyplot(fig)

        # 根据聚类结果重新排序相关性矩阵
        clustered_corr = correlation_matrix.iloc[dendro['leaves'], dendro['leaves']]
        fig = px.imshow(clustered_corr, text_auto=True, aspect="auto")
        fig.update_layout(title="基于聚类的相关性热力图")
        st.plotly_chart(fig)

    # 主成分分析 (PCA)
    if st.checkbox("执行主成分分析 (PCA)"):
        st.subheader("主成分分析 (PCA)")

        from sklearn.preprocessing import StandardScaler
        from sklearn.decomposition import PCA

        scaler = StandardScaler()
        scaled_data = scaler.fit_transform(preprocessed_df)

        pca = PCA()
        pca_result = pca.fit_transform(scaled_data)

        explained_variance_ratio = pca.explained_variance_ratio_
        cumulative_variance_ratio = np.cumsum(explained_variance_ratio)

        fig = px.line(x=range(1, len(cumulative_variance_ratio) + 1), y=cumulative_variance_ratio,
                      labels={'x': '主成分数量', 'y': '累积解释方差比'},
                      title='PCA 累积解释方差比')
        st.plotly_chart(fig)

        num_components = st.slider("选择主成分数量", 2, len(preprocessed_df.columns), 2)
        pca = PCA(n_components=num_components)
        pca_result = pca.fit_transform(scaled_data)

        pca_df = pd.DataFrame(data=pca_result, columns=[f'PC{i + 1}' for i in range(num_components)])
        fig = px.scatter(pca_df, x='PC1', y='PC2', title='PCA 结果可视化')
        st.plotly_chart(fig)

    # 相关性与因果关系的讨论
    st.subheader("相关性与因果关系讨论")
    st.write("""
    重要提示:相关性不等同于因果关系。虽然相关性分析可以揭示变量之间的关系强度,
    但它并不能确定变量之间的因果关系。要建立因果关系,通常需要进行更深入的研究,
    如随机对照试验或纵向研究。在解释相关性结果时,请务必考虑以下几点:

    1. 潜在的混淆变量:可能存在影响两个变量关系的其他因素。
    2. 反向因果:相关性不能告诉我们哪个变量是原因,哪个是结果。
    3. 非线性关系:某些变量可能存在非线性关系,这在常规相关性分析中可能不会被捕捉到。
    4. 样本代表性:确保你的样本能够代表整个人群。
    5. 统计显著性:相关系数的统计显著性同样重要,不仅仅是相关系数的大小。

    在得出任何结论之前,建议结合领域知识和其他分析方法来解释这些相关性结果。
    """)
def download_link(object_to_download, download_filename, download_link_text):
    """
    Generate a link for downloading a DataFrame as a CSV file.
    """
    if isinstance(object_to_download, pd.DataFrame):
        object_to_download = object_to_download.to_csv(index=False)

    b64 = base64.b64encode(object_to_download.encode()).decode()
    return f'<a href="data:file/csv;base64,{b64}" download="{download_filename}">{download_link_text}</a>'
# Grouped statistics
def group_statistics(df):
    st.subheader("分组统计")
    # 确保所有列名都是字符串
    df.columns = df.columns.astype(str)
    # 选择分组列
    group_columns = st.multiselect("选择分组列", df.select_dtypes(include=['object', 'category']).columns)
    if not group_columns:
        st.warning("请至少选择一个分组列")
        return
    # 选择聚合列和聚合函数
    agg_columns = st.multiselect("选择聚合列", df.select_dtypes(include=['float64', 'int64']).columns)
    if not agg_columns:
        st.warning("请至少选择一个聚合列")
        return
    agg_functions = st.multiselect("选择聚合函数", ["mean", "sum", "count", "min", "max", "median", "std"])
    if not agg_functions:
        st.warning("请至少选择一个聚合函数")
        return
    # 创建聚合字典
    agg_dict = {col: agg_functions for col in agg_columns}
    try:
        # 执行分组聚合
        grouped_data = df.groupby(group_columns).agg(agg_dict).reset_index()
        # 重命名列,保持原始列名
        grouped_data.columns = [col[0] if col[1] == '' else f"{col[0]}_{col[1]}" for col in grouped_data.columns]
        # 排序选项
        sort_column = st.selectbox("选择排序列", grouped_data.columns)
        sort_order = st.radio("排序顺序", ("升序", "降序"))
        grouped_data = grouped_data.sort_values(by=sort_column, ascending=(sort_order == "升序"))
        # 显示图表
        for agg_col in agg_columns:
            for func in agg_functions:
                col_name = f"{agg_col}_{func}"
                fig = px.bar(grouped_data, x=grouped_data.columns[0], y=col_name,
                             title=f"{grouped_data.columns[0]}分组的{agg_col} {func}")
                fig.update_layout(xaxis_title=grouped_data.columns[0], yaxis_title=f"{agg_col} ({func})")
                st.plotly_chart(fig)
        # 显示数据表格
        st.write("分组数据:")
        st.dataframe(grouped_data)
        # 下载链接
        if st.button('生成下载链接'):
            # download_link emits a CSV data URI, so name the file accordingly
            tmp_download_link = download_link(grouped_data, 'grouped_data.csv', '点击下载 CSV 文件')
            st.markdown(tmp_download_link, unsafe_allow_html=True)
    except Exception as e:
        st.error(f"发生错误: {str(e)}")
        st.write("请检查您的选择和数据是否兼容。")
# Time series analysis
def time_series_analysis(df):
    st.subheader("时间序列分析")
    date_columns = df.select_dtypes(include=['datetime64']).columns
    if len(date_columns) == 0:
        st.warning("没有检测到日期列。请确保你的Excel文件中至少有一列包含日期数据。")
        return
    date_column = st.selectbox("选择日期列", date_columns)
    value_column = st.selectbox("选择值列", df.select_dtypes(include=['float64', 'int64']).columns)
    df[date_column] = pd.to_datetime(df[date_column])
    df_sorted = df.sort_values(date_column)
    fig = px.line(df_sorted, x=date_column, y=value_column, title=f"{value_column}随时间的变化")
    st.plotly_chart(fig)

    # 移动平均线
    window_size = st.slider("选择移动平均窗口大小", min_value=1, max_value=30, value=7)
    df_sorted['Moving Average'] = df_sorted[value_column].rolling(window=window_size).mean()
    fig = go.Figure()
    fig.add_trace(go.Scatter(x=df_sorted[date_column], y=df_sorted[value_column], mode='lines', name='原始数据'))
    fig.add_trace(
        go.Scatter(x=df_sorted[date_column], y=df_sorted['Moving Average'], mode='lines', name=f'{window_size}天移动平均'))
    fig.update_layout(title=f"{value_column}随时间的变化(包含移动平均)")
    st.plotly_chart(fig)

    # 季节性分解
    if st.checkbox("进行季节性分解"):
        try:
            result = seasonal_decompose(df_sorted.set_index(date_column)[value_column], model='additive', period=30)
            fig, (ax1, ax2, ax3, ax4) = plt.subplots(4, 1, figsize=(10, 12))
            result.observed.plot(ax=ax1)
            ax1.set_title('Observed')
            result.trend.plot(ax=ax2)
            ax2.set_title('Trend')
            result.seasonal.plot(ax=ax3)
            ax3.set_title('Seasonal')
            result.resid.plot(ax=ax4)
            ax4.set_title('Residual')
            plt.tight_layout()
            st.pyplot(fig)
        except Exception:
            st.warning("无法进行季节性分解。请确保数据是等间隔的时间序列。")


# Advanced filtering
def advanced_filter(df):
    st.subheader("高级过滤")
    columns_to_filter = st.multiselect("选择要过滤的列", df.columns.tolist())
    for column in columns_to_filter:
        if df[column].dtype == 'object':
            unique_values = df[column].unique()
            selected_values = st.multiselect(f"选择{column}值", unique_values)
            df = df[df[column].isin(selected_values)]
        else:
            min_value = df[column].min()
            max_value = df[column].max()
            selected_range = st.slider(f"选择{column}范围", min_value=float(min_value), max_value=float(max_value),
                                       value=(float(min_value), float(max_value)))
            df = df[(df[column] >= selected_range[0]) & (df[column] <= selected_range[1])]
    st.dataframe(df)


# Data cleaning
def data_cleaning(df):
    st.subheader("高级数据清洗与预处理")
    # 数据概览
    st.write("数据概览:")
    st.write(f"行数: {df.shape[0]}, 列数: {df.shape[1]}")
    # 缺失值处理
    st.subheader("缺失值处理")
    missing_values = df.isnull().sum()
    st.write("各列缺失值数量:")
    st.write(missing_values)
    # 可视化缺失值
    fig, ax = plt.subplots(figsize=(10, 6))
    sns.heatmap(df.isnull(), yticklabels=False, cbar=False, cmap='viridis')
    plt.title('缺失值分布热力图')
    st.pyplot(fig)

    missing_options = st.multiselect("选择缺失值处理方法",
                                     ["移除缺失值", "均值/众数填充", "中位数填充", "固定值填充", "前向填充", "后向填充", "KNN填充", "多重填补"])

    if "移除缺失值" in missing_options:
        threshold = st.slider("选择删除缺失值的阈值(百分比)", 0, 100, 50)
        # dropna(thresh=k) keeps rows with at least k non-null values, so the
        # threshold must be derived from the column count, not the row count
        df.dropna(thresh=int(df.shape[1] * (100 - threshold) / 100), inplace=True)
        st.write(f"已移除缺失值超过{threshold}%的行")

    if "均值/众数填充" in missing_options:
        numeric_columns = df.select_dtypes(include=['float64', 'int64']).columns
        df[numeric_columns] = df[numeric_columns].fillna(df[numeric_columns].mean())
        categorical_columns = df.select_dtypes(include=['object']).columns
        df[categorical_columns] = df[categorical_columns].fillna(df[categorical_columns].mode().iloc[0])
        st.write("已用均值/众数填充缺失值")

    if "中位数填充" in missing_options:
        numeric_columns = df.select_dtypes(include=['float64', 'int64']).columns
        df[numeric_columns] = df[numeric_columns].fillna(df[numeric_columns].median())
        st.write("已用中位数填充数值型列的缺失值")

    if "固定值填充" in missing_options:
        fill_value = st.text_input("输入填充值")
        df.fillna(fill_value, inplace=True)
        st.write(f"已用 {fill_value} 填充所有缺失值")

    if "前向填充" in missing_options:
        df.ffill(inplace=True)  # fillna(method='ffill') is deprecated in recent pandas
        st.write("已用前向填充法填充缺失值")

    if "后向填充" in missing_options:
        df.bfill(inplace=True)  # fillna(method='bfill') is deprecated in recent pandas
        st.write("已用后向填充法填充缺失值")

    if "KNN填充" in missing_options:
        n_neighbors = st.slider("选择KNN填充的邻居数", 1, 10, 5)
        numeric_columns = df.select_dtypes(include=['float64', 'int64']).columns
        imputer = KNNImputer(n_neighbors=n_neighbors)
        df[numeric_columns] = imputer.fit_transform(df[numeric_columns])
        st.write("已用KNN方法填充数值型列的缺失值")

    if "多重填补" in missing_options:
        n_iterations = st.slider("选择多重填补的迭代次数", 1, 50, 10)
        numeric_columns = df.select_dtypes(include=['float64', 'int64']).columns
        imputer = IterativeImputer(max_iter=n_iterations, random_state=0)
        df[numeric_columns] = imputer.fit_transform(df[numeric_columns])
        st.write("已用多重填补方法填充数值型列的缺失值")

    # 重复值处理
    st.subheader("重复值处理")
    duplicate_count = df.duplicated().sum()
    st.write(f"重复行数: {duplicate_count}")

    if st.button("移除重复行"):
        df.drop_duplicates(inplace=True)
        st.write("已移除重复行")

    # 异常值处理
    st.subheader("异常值处理")
    numeric_columns = df.select_dtypes(include=['float64', 'int64']).columns
    selected_columns = st.multiselect("选择需要处理异常值的列", numeric_columns)

    for col in selected_columns:
        Q1 = df[col].quantile(0.25)
        Q3 = df[col].quantile(0.75)
        IQR = Q3 - Q1
        lower_bound = Q1 - 1.5 * IQR
        upper_bound = Q3 + 1.5 * IQR
        outliers = df[(df[col] < lower_bound) | (df[col] > upper_bound)]
        st.write(f"{col} 列中的异常值数量: {len(outliers)}")

        fig, ax = plt.subplots(figsize=(10, 6))
        sns.boxplot(x=df[col])
        plt.title(f'{col} 的箱线图')
        st.pyplot(fig)

        outlier_treatment = st.selectbox(f"选择 {col} 列的异常值处理方法", ["不处理", "删除", "截断", "平均值替换"])

        if outlier_treatment == "删除":
            df = df[(df[col] >= lower_bound) & (df[col] <= upper_bound)]
            st.write(f"已删除 {col} 列中的异常值")
        elif outlier_treatment == "截断":
            df[col] = np.clip(df[col], lower_bound, upper_bound)
            st.write(f"已将 {col} 列中的异常值截断到上下界范围内")
        elif outlier_treatment == "平均值替换":
            mean_value = df[col].mean()
            df.loc[(df[col] < lower_bound) | (df[col] > upper_bound), col] = mean_value
            st.write(f"已用平均值替换 {col} 列中的异常值")

    # 数据类型转换
    st.subheader("数据类型转换")
    for col in df.columns:
        current_type = df[col].dtype
        new_type = st.selectbox(f"选择 {col} 列的新数据类型", ["保持不变", "int64", "float64", "object", "datetime64"])
        if new_type != "保持不变":
            try:
                if new_type == "datetime64":
                    df[col] = pd.to_datetime(df[col])
                else:
                    df[col] = df[col].astype(new_type)
                st.write(f"已将 {col} 列的数据类型从 {current_type} 转换为 {new_type}")
            except Exception:
                st.write(f"无法将 {col} 列转换为 {new_type} 类型")

    # 特征工程
    st.subheader("特征工程")
    if st.checkbox("创建新特征"):
        new_feature_name = st.text_input("输入新特征的名称")
        feature_expression = st.text_input("输入特征计算表达式(例如:A + B 或 log(A))")
        if new_feature_name and feature_expression:
            try:
                # df.eval resolves bare column names (e.g. "A + B" or "log(A)"),
                # which the original lambda-based eval could not
                df[new_feature_name] = df.eval(feature_expression)
                st.write(f"已创建新特征: {new_feature_name}")
            except Exception:
                st.write("特征创建失败,请检查表达式")

    # 数据标准化/归一化
    st.subheader("数据标准化/归一化")
    numeric_columns = df.select_dtypes(include=['float64', 'int64']).columns
    columns_to_scale = st.multiselect("选择需要标准化/归一化的列", numeric_columns)
    scaling_method = st.selectbox("选择标准化/归一化方法", ["不处理", "StandardScaler", "MinMaxScaler", "RobustScaler"])

    if scaling_method != "不处理" and columns_to_scale:
        if scaling_method == "StandardScaler":
            scaler = StandardScaler()
        elif scaling_method == "MinMaxScaler":
            scaler = MinMaxScaler()
        else:
            scaler = RobustScaler()

        df[columns_to_scale] = scaler.fit_transform(df[columns_to_scale])
        st.write(f"已对选中的列进行 {scaling_method} 处理")

    # 数据编码
    st.subheader("数据编码")
    categorical_columns = df.select_dtypes(include=['object']).columns
    columns_to_encode = st.multiselect("选择需要编码的分类列", categorical_columns)
    encoding_method = st.selectbox("选择编码方法", ["不编码", "One-Hot编码", "标签编码"])

    if encoding_method != "不编码" and columns_to_encode:
        if encoding_method == "One-Hot编码":
            df = pd.get_dummies(df, columns=columns_to_encode)
            st.write("已对选中的列进行One-Hot编码")
        else:
            for col in columns_to_encode:
                df[col] = df[col].astype('category').cat.codes
            st.write("已对选中的列进行标签编码")

    # 时间特征提取
    st.subheader("时间特征提取")
    date_columns = df.select_dtypes(include=['datetime64']).columns
    if len(date_columns) > 0:
        selected_date_column = st.selectbox("选择日期列进行特征提取", date_columns)
        if st.checkbox("提取时间特征"):
            df[f'{selected_date_column}_year'] = df[selected_date_column].dt.year
            df[f'{selected_date_column}_month'] = df[selected_date_column].dt.month
            df[f'{selected_date_column}_day'] = df[selected_date_column].dt.day
            df[f'{selected_date_column}_dayofweek'] = df[selected_date_column].dt.dayofweek
            st.write(f"已从 {selected_date_column} 列提取年、月、日和星期几特征")

    # 文本数据处理
    st.subheader("文本数据处理")
    text_columns = df.select_dtypes(include=['object']).columns
    selected_text_column = st.selectbox("选择文本列进行处理", text_columns)
    if st.checkbox("进行文本清洗"):
        df[selected_text_column] = df[selected_text_column].apply(lambda x: re.sub(r'[^\w\s]', '', str(x).lower()))
        st.write(f"已对 {selected_text_column} 列进行文本清洗(转小写并移除标点符号)")

    # 数据分箱
    st.subheader("数据分箱")
    numeric_columns = df.select_dtypes(include=['float64', 'int64']).columns
    column_to_bin = st.selectbox("选择需要分箱的数值列", numeric_columns)
    n_bins = st.slider("选择分箱数量", 2, 10, 5)
    if st.button("执行分箱"):
        df[f'{column_to_bin}_binned'] = pd.cut(df[column_to_bin], bins=n_bins,
                                               labels=[f'Bin_{i + 1}' for i in range(n_bins)])
        st.write(f"已对 {column_to_bin} 列进行分箱处理")

    # 数据平衡(针对分类问题)
    st.subheader("数据平衡")
    if st.checkbox("检查目标变量分布"):
        target_column = st.selectbox("选择目标变量", df.columns)
        class_distribution = df[target_column].value_counts(normalize=True)
        st.write("目标变量分布:")
        st.write(class_distribution)

        fig, ax = plt.subplots(figsize=(10, 6))
        class_distribution.plot(kind='bar')
        plt.title('目标变量分布')
        plt.ylabel('比例')
        st.pyplot(fig)

        if st.checkbox("进行数据平衡"):
            balance_method = st.selectbox("选择平衡方法", ["随机过采样", "随机欠采样", "SMOTE"])
            if balance_method == "随机过采样":
                from imblearn.over_sampling import RandomOverSampler
                ros = RandomOverSampler(random_state=42)
                X_resampled, y_resampled = ros.fit_resample(df.drop(columns=[target_column]), df[target_column])
                df = pd.concat([X_resampled, y_resampled], axis=1)
            elif balance_method == "随机欠采样":
                from imblearn.under_sampling import RandomUnderSampler
                rus = RandomUnderSampler(random_state=42)
                X_resampled, y_resampled = rus.fit_resample(df.drop(columns=[target_column]), df[target_column])
                df = pd.concat([X_resampled, y_resampled], axis=1)
            else:  # SMOTE
                from imblearn.over_sampling import SMOTE
                smote = SMOTE(random_state=42)
                X_resampled, y_resampled = smote.fit_resample(df.drop(columns=[target_column]), df[target_column])
                df = pd.concat([X_resampled, y_resampled], axis=1)

            st.write(f"已使用 {balance_method} 方法平衡数据")
            new_class_distribution = df[target_column].value_counts(normalize=True)
            st.write("平衡后的目标变量分布:")
            st.write(new_class_distribution)
            st.subheader("特征选择")
            if st.checkbox("执行特征选择"):
                target_column = st.selectbox("选择目标变量(用于特征选择)", df.columns)
                feature_columns = df.drop(columns=[target_column]).columns
                selection_method = st.selectbox("选择特征选择方法", ["方差阈值", "互信息", "递归特征消除"])

                if selection_method == "方差阈值":
                    from sklearn.feature_selection import VarianceThreshold
                    threshold = st.slider("选择方差阈值", 0.0, 1.0, 0.1, 0.05)
                    selector = VarianceThreshold(threshold=threshold)
                    X_selected = selector.fit_transform(df[feature_columns])
                    selected_features = feature_columns[selector.get_support()]
                elif selection_method == "互信息":
                    from sklearn.feature_selection import mutual_info_classif, mutual_info_regression
                    n_features = st.slider("选择要保留的特征数量", 1, len(feature_columns), len(feature_columns) // 2)
                    if df[target_column].dtype == 'object':
                        mi_scores = mutual_info_classif(df[feature_columns], df[target_column])
                    else:
                        mi_scores = mutual_info_regression(df[feature_columns], df[target_column])
                    selected_features = feature_columns[np.argsort(mi_scores)[-n_features:]]
                else:  # 递归特征消除
                    from sklearn.feature_selection import RFE
                    from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
                    n_features = st.slider("选择要保留的特征数量", 1, len(feature_columns), len(feature_columns) // 2)
                    if df[target_column].dtype == 'object':
                        estimator = RandomForestClassifier(n_estimators=100, random_state=42)
                    else:
                        estimator = RandomForestRegressor(n_estimators=100, random_state=42)
                    selector = RFE(estimator, n_features_to_select=n_features)
                    selector = selector.fit(df[feature_columns], df[target_column])
                    selected_features = feature_columns[selector.support_]

                st.write("选中的特征:")
                st.write(selected_features)
                df = df[list(selected_features) + [target_column]]

            # 数据导出
            st.subheader("数据导出")
            if st.button("导出处理后的数据"):
                csv = df.to_csv(index=False)
                b64 = base64.b64encode(csv.encode()).decode()
                href = f'<a href="data:file/csv;base64,{b64}" download="processed_data.csv">下载CSV文件</a>'
                st.markdown(href, unsafe_allow_html=True)

            return df

# Hypothesis testing
def hypothesis_testing(df):
    st.subheader("假设检验")
    test_type = st.selectbox("选择检验类型", ["T检验", "卡方检验"])

    if test_type == "T检验":
        column = st.selectbox("选择要检验的列", df.select_dtypes(include=['float64', 'int64']).columns)
        group_column = st.selectbox("选择分组列", df.select_dtypes(include=['object']).columns)
        group1, group2 = df[group_column].unique()[:2]
        sample1 = df[df[group_column] == group1][column]
        sample2 = df[df[group_column] == group2][column]
        t_stat, p_value = stats.ttest_ind(sample1, sample2)
        st.write(f"T统计量: {t_stat}")
        st.write(f"P值: {p_value}")

    elif test_type == "卡方检验":
        column1 = st.selectbox("选择第一个类别列", df.select_dtypes(include=['object']).columns)
        column2 = st.selectbox("选择第二个类别列", df.select_dtypes(include=['object']).columns)
        contingency_table = pd.crosstab(df[column1], df[column2])
        chi2, p_value, dof, expected = stats.chi2_contingency(contingency_table)
        st.write(f"卡方统计量: {chi2}")
        st.write(f"P值: {p_value}")
        st.write("列联表:")
        st.write(contingency_table)

    st.write("解释:")
    if p_value < 0.05:
        st.write("在5%的显著性水平下,我们拒绝原假设。这意味着存在显著差异或关联。")
    else:
        st.write("在5%的显著性水平下,我们无法拒绝原假设。这意味着没有足够的证据表明存在显著差异或关联。")


# Cluster analysis
def cluster_analysis(df):
    st.subheader("聚类分析")

    # 数据预处理
    numeric_columns = df.select_dtypes(include=['float64', 'int64']).columns
    selected_columns = st.multiselect("选择用于聚类的列", numeric_columns)

    if len(selected_columns) < 2:
        st.warning("请至少选择两列进行聚类分析。")
        return

    X = df[selected_columns]

    # 数据标准化选项
    scaling_option = st.radio("选择数据标准化方法", ["StandardScaler", "MinMaxScaler", "不进行标准化"])
    if scaling_option == "StandardScaler":
        scaler = StandardScaler()
        X_scaled = scaler.fit_transform(X)
    elif scaling_option == "MinMaxScaler":
        from sklearn.preprocessing import MinMaxScaler
        scaler = MinMaxScaler()
        X_scaled = scaler.fit_transform(X)
    else:
        X_scaled = X.values

    # 聚类算法选择
    clustering_algorithm = st.selectbox("选择聚类算法", ["K-Means", "DBSCAN", "层次聚类"])

    if clustering_algorithm == "K-Means":
        n_clusters = st.slider("选择聚类数量", min_value=2, max_value=10, value=3)
        kmeans = KMeans(n_clusters=n_clusters, random_state=42)
        df['Cluster'] = kmeans.fit_predict(X_scaled)

        # 计算聚类评估指标
        silhouette = silhouette_score(X_scaled, df['Cluster'])
        calinski_harabasz = calinski_harabasz_score(X_scaled, df['Cluster'])
        davies_bouldin = davies_bouldin_score(X_scaled, df['Cluster'])

        st.write(f"Silhouette Score: {silhouette:.4f}")
        st.write(f"Calinski-Harabasz Index: {calinski_harabasz:.4f}")
        st.write(f"Davies-Bouldin Index: {davies_bouldin:.4f}")

        # 肘部法则
        if st.checkbox("显示肘部法则图"):
            inertias = []
            k_range = range(1, 11)
            for k in k_range:
                kmeans_model = KMeans(n_clusters=k, random_state=42)
                kmeans_model.fit(X_scaled)
                inertias.append(kmeans_model.inertia_)

            fig_elbow = go.Figure(data=go.Scatter(x=list(k_range), y=inertias, mode='lines+markers'))
            fig_elbow.update_layout(title='肘部法则', xaxis_title='聚类数量', yaxis_title='惯性')
            st.plotly_chart(fig_elbow)

    elif clustering_algorithm == "DBSCAN":
        eps = st.slider("选择 eps 值", min_value=0.1, max_value=2.0, value=0.5, step=0.1)
        min_samples = st.slider("选择 min_samples 值", min_value=2, max_value=10, value=5)
        dbscan = DBSCAN(eps=eps, min_samples=min_samples)
        df['Cluster'] = dbscan.fit_predict(X_scaled)

    else:  # 层次聚类
        n_clusters = st.slider("选择聚类数量", min_value=2, max_value=10, value=3)
        linkage = st.selectbox("选择连接方法", ["ward", "complete", "average", "single"])
        hierarchical = AgglomerativeClustering(n_clusters=n_clusters, linkage=linkage)
        df['Cluster'] = hierarchical.fit_predict(X_scaled)

        # 绘制树状图
        if st.checkbox("显示树状图"):
            from scipy.cluster.hierarchy import dendrogram, linkage as scipy_linkage
            plt.figure(figsize=(10, 7))
            dendrogram(scipy_linkage(X_scaled, method=linkage))
            plt.title('层次聚类树状图')
            st.pyplot(plt)

    st.write("聚类结果预览:")
    st.dataframe(df.head())

    # 可视化
    if len(selected_columns) == 2:
        fig = px.scatter(df, x=selected_columns[0], y=selected_columns[1], color='Cluster',
                         title='2D聚类结果可视化')
        st.plotly_chart(fig)
    elif len(selected_columns) >= 3:
        fig = px.scatter_3d(df, x=selected_columns[0], y=selected_columns[1], z=selected_columns[2],
                            color='Cluster', title='3D聚类结果可视化')
        st.plotly_chart(fig)

    # PCA 降维可视化
    if len(selected_columns) > 2 and st.checkbox("使用PCA进行2D可视化"):
        pca = PCA(n_components=2)
        pca_result = pca.fit_transform(X_scaled)
        df['PCA1'] = pca_result[:, 0]
        df['PCA2'] = pca_result[:, 1]

        fig_pca = px.scatter(df, x='PCA1', y='PCA2', color='Cluster',
                             title='PCA降维后的聚类结果可视化')
        st.plotly_chart(fig_pca)

        st.write(f"PCA解释的方差比例: {pca.explained_variance_ratio_}")

    # 特征重要性分析
    if clustering_algorithm == "K-Means" and st.checkbox("显示特征重要性"):
        feature_importance = pd.DataFrame({
            'feature': selected_columns,
            'importance': np.abs(kmeans.cluster_centers_).mean(axis=0)
        }).sort_values('importance', ascending=False)

        fig_importance = px.bar(feature_importance, x='feature', y='importance',
                                title='特征重要性')
        st.plotly_chart(fig_importance)

    # 聚类统计信息
    if st.checkbox("显示聚类统计信息"):
        cluster_stats = df.groupby('Cluster')[selected_columns].agg(['mean', 'std', 'min', 'max'])
        st.write(cluster_stats)

        # 热力图
        plt.figure(figsize=(12, 8))
        sns.heatmap(cluster_stats.xs('mean', axis=1, level=1), annot=True, cmap='YlGnBu')
        plt.title('聚类中心热力图')
        st.pyplot(plt)

    # 聚类结果的箱线图
    if st.checkbox("显示聚类结果的箱线图"):
        fig_box = go.Figure()
        for column in selected_columns:
            fig_box.add_trace(go.Box(y=df[column], x=df['Cluster'], name=column))
        fig_box.update_layout(title='聚类结果的箱线图', xaxis_title='聚类', yaxis_title='值')
        st.plotly_chart(fig_box)

    # 导出聚类结果
    if st.button('导出聚类结果'):
        output = BytesIO()
        with pd.ExcelWriter(output, engine='openpyxl') as writer:
            df.to_excel(writer, index=False, sheet_name='聚类结果')
            if clustering_algorithm == "K-Means":
                pd.DataFrame(kmeans.cluster_centers_, columns=selected_columns).to_excel(writer, sheet_name='聚类中心')

        output.seek(0)
        st.download_button(
            label="下载 Excel 文件",
            data=output,
            file_name="cluster_results.xlsx",
            mime="application/vnd.openxmlformats-officedocument.spreadsheetml.sheet"
        )

    # 交互式聚类探索
    if st.checkbox("启用交互式聚类探索"):
        st.write("在下面的图表中,您可以通过选择不同的特征来探索聚类结果。")
        x_axis = st.selectbox("选择 X 轴", selected_columns)
        y_axis = st.selectbox("选择 Y 轴", selected_columns)

        fig_interactive = px.scatter(df, x=x_axis, y=y_axis, color='Cluster',
                                     title='交互式聚类结果探索')
        st.plotly_chart(fig_interactive)

    # 聚类结果的单独分析
    if st.checkbox("对单个聚类进行深入分析"):
        cluster_to_analyze = st.selectbox("选择要分析的聚类", df['Cluster'].unique())
        cluster_data = df[df['Cluster'] == cluster_to_analyze]

        st.write(f"聚类 {cluster_to_analyze} 的基本统计信息:")
        st.write(cluster_data[selected_columns].describe())

        if st.checkbox("显示该聚类的相关性热力图"):
            corr = cluster_data[selected_columns].corr()
            plt.figure(figsize=(10, 8))
            sns.heatmap(corr, annot=True, cmap='coolwarm')
            plt.title(f'聚类 {cluster_to_analyze} 的相关性热力图')
            st.pyplot(plt)

    # 聚类稳定性分析
    if clustering_algorithm == "K-Means" and st.checkbox("进行聚类稳定性分析"):
        n_iterations = st.slider("选择迭代次数", min_value=5, max_value=50, value=10)
        stability_scores = []

        for _ in range(n_iterations):
            kmeans_stability = KMeans(n_clusters=n_clusters, random_state=None)
            labels_stability = kmeans_stability.fit_predict(X_scaled)
            stability_scores.append(silhouette_score(X_scaled, labels_stability))

        st.write(f"聚类稳定性分析结果 (基于 {n_iterations} 次迭代):")
        st.write(f"平均 Silhouette Score: {np.mean(stability_scores):.4f}")
        st.write(f"Silhouette Score 标准差: {np.std(stability_scores):.4f}")

        fig_stability = go.Figure(data=go.Box(y=stability_scores, name='Silhouette Scores'))
        fig_stability.update_layout(title='聚类稳定性分析', yaxis_title='Silhouette Score')
        st.plotly_chart(fig_stability)

    # 异常值检测
    if st.checkbox("执行异常值检测"):
        from sklearn.ensemble import IsolationForest
        contamination = st.slider("选择预期的异常值比例", min_value=0.01, max_value=0.5, value=0.1, step=0.01)
        iso_forest = IsolationForest(contamination=contamination, random_state=42)
        df['Is_Anomaly'] = iso_forest.fit_predict(X_scaled)
        df['Is_Anomaly'] = df['Is_Anomaly'].map({1: 'Normal', -1: 'Anomaly'})

        st.write("异常值检测结果预览:")
        st.dataframe(df[df['Is_Anomaly'] == 'Anomaly'].head())

        fig_anomaly = px.scatter(df, x=selected_columns[0], y=selected_columns[1],
                                 color='Is_Anomaly', title='异常值检测结果')
        st.plotly_chart(fig_anomaly)


# Dimensionality reduction (PCA and other methods)
def pca_analysis(df):
    st.subheader("降维分析")

    # 数据预处理
    numeric_columns = df.select_dtypes(include=['float64', 'int64']).columns
    selected_columns = st.multiselect("选择用于降维分析的列", numeric_columns)

    if len(selected_columns) < 2:
        st.warning("请至少选择两列进行降维分析。")
        return

    X = df[selected_columns]
    scaler = StandardScaler()
    X_scaled = scaler.fit_transform(X)

    # 降维方法选择
    dim_reduction_method = st.selectbox("选择降维方法",
                                        ["PCA", "Kernel PCA", "Truncated SVD", "t-SNE", "MDS", "Isomap", "LDA", "UMAP"])

    if dim_reduction_method == "PCA":
        # PCA分析
        pca = PCA()
        X_pca = pca.fit_transform(X_scaled)

        explained_variance_ratio = pca.explained_variance_ratio_
        cumulative_variance_ratio = np.cumsum(explained_variance_ratio)

        fig = go.Figure()
        fig.add_trace(go.Scatter(y=cumulative_variance_ratio, mode='lines+markers', name='累积解释方差比'))
        fig.update_layout(title='累积解释方差比', xaxis_title='主成分数量', yaxis_title='累积解释方差比')
        st.plotly_chart(fig)

        n_components = st.slider("选择保留的主成分数量", min_value=1, max_value=len(selected_columns), value=2)
        pca = PCA(n_components=n_components)
        X_reduced = pca.fit_transform(X_scaled)

        # 展示特征向量
        if st.checkbox("显示特征向量"):
            loadings = pca.components_.T * np.sqrt(pca.explained_variance_)
            loading_matrix = pd.DataFrame(loadings, columns=[f'PC{i + 1}' for i in range(n_components)],
                                          index=selected_columns)
            st.write("特征向量:")
            st.dataframe(loading_matrix)

            # 热力图可视化特征向量
            fig, ax = plt.subplots(figsize=(10, 8))
            sns.heatmap(loading_matrix, annot=True, cmap='coolwarm', ax=ax)
            plt.title("特征向量热力图")
            st.pyplot(fig)

    elif dim_reduction_method == "Kernel PCA":
        kernel = st.selectbox("选择核函数", ["rbf", "poly", "sigmoid", "cosine"])
        kpca = KernelPCA(n_components=2, kernel=kernel)
        X_reduced = kpca.fit_transform(X_scaled)

    elif dim_reduction_method == "Truncated SVD":
        n_components = st.slider("选择组件数量", min_value=1, max_value=min(X.shape) - 1, value=2)
        svd = TruncatedSVD(n_components=n_components)
        X_reduced = svd.fit_transform(X_scaled)

    elif dim_reduction_method == "t-SNE":
        perplexity = st.slider("选择困惑度", min_value=5, max_value=50, value=30)
        tsne = TSNE(n_components=2, perplexity=perplexity)
        X_reduced = tsne.fit_transform(X_scaled)

    elif dim_reduction_method == "MDS":
        n_components = st.slider("选择组件数量", min_value=2, max_value=min(X.shape) - 1, value=2)
        mds = MDS(n_components=n_components)
        X_reduced = mds.fit_transform(X_scaled)

    elif dim_reduction_method == "Isomap":
        n_neighbors = st.slider("选择邻居数量", min_value=5, max_value=50, value=15)
        isomap = Isomap(n_components=2, n_neighbors=n_neighbors)
        X_reduced = isomap.fit_transform(X_scaled)

    elif dim_reduction_method == "LDA":
        target_column = st.selectbox("选择目标变量", df.columns)
        if df[target_column].dtype == 'object':
            lda = LinearDiscriminantAnalysis(n_components=2)
            X_reduced = lda.fit_transform(X_scaled, df[target_column])
        else:
            st.warning("LDA需要分类目标变量。请选择一个分类变量作为目标。")
            return

    else:  # UMAP
        n_neighbors = st.slider("选择邻居数量", min_value=2, max_value=100, value=15)
        min_dist = st.slider("选择最小距离", min_value=0.0, max_value=0.99, value=0.1)
        umap = UMAP(n_neighbors=n_neighbors, min_dist=min_dist, n_components=2)
        X_reduced = umap.fit_transform(X_scaled)

    # 结果可视化
    reduced_df = pd.DataFrame(X_reduced, columns=[f'Component{i + 1}' for i in range(X_reduced.shape[1])])
    st.write("降维结果预览:")
    st.dataframe(reduced_df.head())

    if X_reduced.shape[1] >= 2:
        fig = px.scatter(reduced_df, x='Component1', y='Component2', title='降维结果可视化')
        st.plotly_chart(fig)

        if X_reduced.shape[1] >= 3:
            fig = px.scatter_3d(reduced_df, x='Component1', y='Component2', z='Component3', title='3D降维结果可视化')
            st.plotly_chart(fig)

    # Additional analyses
    if st.checkbox("Run additional analyses"):
        # Correlation analysis
        corr_matrix = X.corr()
        fig, ax = plt.subplots(figsize=(10, 8))
        sns.heatmap(corr_matrix, annot=True, cmap='coolwarm', ax=ax)
        plt.title("Variable correlation heatmap")
        st.pyplot(fig)

        # Hierarchical clustering
        if st.checkbox("Run hierarchical clustering"):
            linkage_matrix = linkage(X_scaled, method='ward')
            fig, ax = plt.subplots(figsize=(10, 7))
            dendrogram(linkage_matrix, ax=ax)
            plt.title("Hierarchical clustering dendrogram")
            st.pyplot(fig)

        # Outlier detection
        if st.checkbox("Run outlier detection"):
            from sklearn.ensemble import IsolationForest
            iso_forest = IsolationForest(contamination=0.1, random_state=42)
            outlier_labels = iso_forest.fit_predict(X_scaled)
            # map the integer labels to strings so Plotly treats the colors as
            # discrete categories rather than a continuous scale
            reduced_df['Outlier'] = np.where(outlier_labels == -1, 'outlier', 'normal')
            fig = px.scatter(reduced_df, x='Component1', y='Component2', color='Outlier',
                             color_discrete_map={'normal': 'blue', 'outlier': 'red'},
                             title="Outlier detection results")
            st.plotly_chart(fig)

        # Variable contribution analysis (PCA only)
        if dim_reduction_method == "PCA":
            if st.checkbox("Analyze variable contributions"):
                # a simple heuristic: absolute loadings summed over components,
                # normalized by the total explained variance
                total_variance = np.sum(pca.explained_variance_ratio_)
                variable_contributions = np.sum(np.abs(pca.components_), axis=0) / total_variance
                contrib_df = pd.DataFrame({'Variable': selected_columns, 'Contribution': variable_contributions})
                contrib_df = contrib_df.sort_values('Contribution', ascending=False)
                fig = px.bar(contrib_df, x='Variable', y='Contribution', title='Variable contributions to the principal components')
                st.plotly_chart(fig)
    st.subheader("Summary statistics of the reduced data")
    st.write(reduced_df.describe())

    # Save the results
    if st.button("Save reduced data"):
        # note: this writes to the server's working directory, not the
        # visitor's machine; see download_processed_data further below for a
        # browser-download alternative
        reduced_df_with_original = pd.concat([df, reduced_df], axis=1)
        reduced_df_with_original.to_csv("reduced_data.csv", index=False)
        st.success("Reduced data saved as 'reduced_data.csv'")

    # Conclusions and recommendations
    st.subheader("Conclusions and recommendations")
    # build the list first so the numbering stays consecutive whether or not
    # the PCA-specific points apply
    conclusions = [
        f"Used {dim_reduction_method} to reduce the data from {len(selected_columns)} to {X_reduced.shape[1]} dimensions."
    ]
    if dim_reduction_method == "PCA":
        conclusions.append(f"The first {n_components} principal components explain {cumulative_variance_ratio[n_components-1]*100:.2f}% of the total variance.")
        conclusions.append("Consider dropping variables that contribute little to the principal components to simplify the model.")
    conclusions.append("Inspect the distribution of the reduced data for possible clusters or outliers.")
    conclusions.append("Consider using the reduced data as input to other machine learning models, such as classification or regression.")
    for i, text in enumerate(conclusions, start=1):
        st.write(f"{i}. {text}")

    # Suggestions for further analysis
    st.subheader("Suggestions for further analysis")
    st.write("1. Try different dimensionality reduction methods and compare their results.")
    st.write("2. Use domain knowledge to interpret what the reduced components actually mean.")
    st.write("3. Use the reduced data for visualization to uncover latent patterns or relationships.")
    st.write("4. If clear clusters emerge, consider a follow-up cluster analysis.")
    st.write("5. Examine the outliers in depth to understand their characteristics and possible causes.")

# Regression analysis
def regression_analysis(df):
    st.subheader("Regression Analysis")

    # Preprocessing
    numeric_columns = df.select_dtypes(include=['float64', 'int64']).columns
    y_column = st.selectbox("Select the dependent variable (Y)", numeric_columns)
    x_columns = st.multiselect("Select the independent variables (X)", [col for col in numeric_columns if col != y_column])

    if len(x_columns) == 0:
        st.warning("Please select at least one independent variable.")
        return

    X = df[x_columns]
    y = df[y_column]

    # Train/test split
    test_size = st.slider("Test set fraction", 0.1, 0.5, 0.2, 0.05)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=test_size, random_state=42)

    # Standardization
    scaler = StandardScaler()
    X_train_scaled = scaler.fit_transform(X_train)
    X_test_scaled = scaler.transform(X_test)

    # Choose the regression method
    regression_method = st.selectbox("Select a regression method",
                                     ["OLS", "Ridge", "Lasso", "ElasticNet"])

    if regression_method == "OLS":
        # statsmodels OLS on the unscaled data, so the summary table reports
        # coefficients on the original scale
        X_train_sm = sm.add_constant(X_train)
        X_test_sm = sm.add_constant(X_test)
        model = sm.OLS(y_train, X_train_sm).fit()

        st.write("OLS regression results:")
        st.write(model.summary())

        y_pred = model.predict(X_test_sm)

    else:
        if regression_method == "Ridge":
            alpha = st.slider("Ridge alpha", 0.01, 10.0, 1.0, 0.01)
            model = Ridge(alpha=alpha)
        elif regression_method == "Lasso":
            alpha = st.slider("Lasso alpha", 0.01, 10.0, 1.0, 0.01)
            model = Lasso(alpha=alpha)
        else:  # ElasticNet
            alpha = st.slider("ElasticNet alpha", 0.01, 10.0, 1.0, 0.01)
            l1_ratio = st.slider("ElasticNet l1_ratio", 0.0, 1.0, 0.5, 0.01)
            model = ElasticNet(alpha=alpha, l1_ratio=l1_ratio)

        model.fit(X_train_scaled, y_train)
        y_pred = model.predict(X_test_scaled)

    # Evaluation, shared by all methods
    mse = mean_squared_error(y_test, y_pred)
    r2 = r2_score(y_test, y_pred)
    st.write(f"Mean squared error (MSE): {mse:.4f}")
    st.write(f"Coefficient of determination (R²): {r2:.4f}")

    # Residual analysis
    if st.checkbox("Show residual analysis"):
        residuals = y_test - y_pred

        fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 5))

        # Residual scatter plot
        ax1.scatter(y_pred, residuals)
        ax1.set_xlabel("Predicted values")
        ax1.set_ylabel("Residuals")
        ax1.set_title("Residuals vs. predicted values")
        ax1.axhline(y=0, color='r', linestyle='--')

        # Q-Q plot
        stats.probplot(residuals, dist="norm", plot=ax2)
        ax2.set_title("Q-Q plot")

        st.pyplot(fig)

        # Statistical tests on the residuals
        _, p_value = stats.normaltest(residuals)
        st.write(f"Residual normality test p-value: {p_value:.4f}")

        # The original used a Levene test between predictions and residuals,
        # which does not test for equal error variance; the Breusch-Pagan test
        # is the standard homoscedasticity check here.
        from statsmodels.stats.diagnostic import het_breuschpagan
        _, bp_p_value, _, _ = het_breuschpagan(residuals, sm.add_constant(y_pred))
        st.write(f"Breusch-Pagan homoscedasticity test p-value: {bp_p_value:.4f}")

    # Feature importance
    if regression_method != "OLS" and st.checkbox("Show feature importance"):
        # coefficients are on the standardized scale, so their absolute
        # values are directly comparable across features
        importance = pd.DataFrame({
            'feature': x_columns,
            'importance': np.abs(model.coef_)
        }).sort_values('importance', ascending=False)

        fig = px.bar(importance, x='feature', y='importance', title='Feature importance')
        st.plotly_chart(fig)

    # Dimensionality reduction
    if len(x_columns) > 2 and st.checkbox("Apply dimensionality reduction"):
        dimension_reduction_method = st.selectbox("Select a reduction method", ["PCA", "PLS regression"])

        if dimension_reduction_method == "PCA":
            n_components = st.slider("Number of PCA components", 1, len(x_columns), 2)
            # note: PCA on unscaled data is dominated by high-variance
            # features; standardizing first is usually advisable
            pca = PCA(n_components=n_components)
            X_pca = pca.fit_transform(X)

            explained_variance_ratio = pca.explained_variance_ratio_
            st.write(f"Explained variance ratio: {explained_variance_ratio}")

            if n_components == 2:
                fig = px.scatter(x=X_pca[:, 0], y=X_pca[:, 1], color=y,
                                 labels={'x': 'PC1', 'y': 'PC2'},
                                 title='PCA results')
                st.plotly_chart(fig)
            elif n_components == 3:
                fig = px.scatter_3d(x=X_pca[:, 0], y=X_pca[:, 1], z=X_pca[:, 2],
                                    color=y, labels={'x': 'PC1', 'y': 'PC2', 'z': 'PC3'},
                                    title='PCA results')
                st.plotly_chart(fig)

        else:  # PLS regression
            n_components = st.slider("Number of PLS components", 1, len(x_columns), 2)
            pls = PLSRegression(n_components=n_components)
            # fit_transform returns (x_scores, y_scores); keep the X scores
            X_pls = pls.fit_transform(X, y)[0]

            # rough estimate of the variance captured by each PLS component
            explained_variance = np.var(X_pls, axis=0) / np.var(X, axis=0).sum()
            st.write(f"Explained variance ratio: {explained_variance}")

            if n_components == 2:
                fig = px.scatter(x=X_pls[:, 0], y=X_pls[:, 1], color=y,
                                 labels={'x': 'PLS1', 'y': 'PLS2'},
                                 title='PLS results')
                st.plotly_chart(fig)
            elif n_components == 3:
                fig = px.scatter_3d(x=X_pls[:, 0], y=X_pls[:, 1], z=X_pls[:, 2],
                                    color=y, labels={'x': 'PLS1', 'y': 'PLS2', 'z': 'PLS3'},
                                    title='PLS results')
                st.plotly_chart(fig)

    # Interaction effects
    if len(x_columns) >= 2 and st.checkbox("Analyze interaction effects"):
        interact_var1 = st.selectbox("Select the first interaction variable", x_columns)
        interact_var2 = st.selectbox("Select the second interaction variable", [col for col in x_columns if col != interact_var1])

        X_interact = X.copy()
        X_interact['interaction'] = X_interact[interact_var1] * X_interact[interact_var2]

        X_interact_sm = sm.add_constant(X_interact)
        model_interact = sm.OLS(y, X_interact_sm).fit()

        st.write("Regression results including the interaction term:")
        st.write(model_interact.summary())

    # Exploring nonlinear relationships
    if st.checkbox("Explore nonlinear relationships"):
        nonlinear_var = st.selectbox("Select a variable to explore", x_columns)

        fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 5))

        # Original scatter plot
        sns.scatterplot(x=X[nonlinear_var], y=y, ax=ax1)
        ax1.set_title(f"{nonlinear_var} vs {y_column}")

        # Scatter plot after a log transform (+1 offset to handle zeros)
        sns.scatterplot(x=np.log(X[nonlinear_var] + 1), y=y, ax=ax2)
        ax2.set_title(f"log({nonlinear_var}) vs {y_column}")
        st.pyplot(fig)
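        # Editor's sketch (not in the original app): quantify the suspected
        # nonlinearity by adding a squared term and checking its significance;
        # the column names 'x' and 'x_sq' below are illustrative.
        X_quad = sm.add_constant(pd.DataFrame({
            'x': X[nonlinear_var],
            'x_sq': X[nonlinear_var] ** 2,
        }))
        quad_model = sm.OLS(y, X_quad).fit()
        st.write(f"p-value of the squared term for {nonlinear_var}: {quad_model.pvalues['x_sq']:.4f}")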
    # Multicollinearity diagnostics
    if len(x_columns) > 1 and st.checkbox("Run multicollinearity diagnostics"):
        correlation_matrix = X.corr()
        fig, ax = plt.subplots(figsize=(10, 8))
        sns.heatmap(correlation_matrix, annot=True, cmap='coolwarm', ax=ax)
        plt.title("Independent variable correlation heatmap")
        st.pyplot(fig)

        # Compute VIF; a constant term is needed for the usual centered-R²
        # definition, and the constant itself is skipped in the output
        from statsmodels.stats.outliers_influence import variance_inflation_factor
        X_vif = sm.add_constant(X)
        vif_data = pd.DataFrame()
        vif_data["feature"] = X.columns
        vif_data["VIF"] = [variance_inflation_factor(X_vif.values, i + 1) for i in range(len(X.columns))]
        st.write("Variance inflation factors (VIF):")
        st.write(vif_data)
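        # Editor's sketch (not in the original app): flag features above the
        # common rule-of-thumb threshold of 10; the threshold is a convention,
        # not a hard rule.
        high_vif = vif_data[vif_data["VIF"] > 10]["feature"].tolist()
        if high_vif:
            st.warning(f"High multicollinearity (VIF > 10): {', '.join(high_vif)}. "
                       "Consider dropping or combining these features.")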
    # Outlier detection
    if st.checkbox("Run outlier detection"):
        from sklearn.ensemble import IsolationForest
        contamination = st.slider("Expected outlier fraction", 0.01, 0.5, 0.1, 0.01)
        iso_forest = IsolationForest(contamination=contamination, random_state=42)
        outlier_labels = iso_forest.fit_predict(X)
        outliers = X[outlier_labels == -1]
        st.write(f"Detected {len(outliers)} outliers")
        if len(x_columns) >= 2:
            # map the integer labels to strings so the colors are treated as
            # discrete categories rather than a continuous scale
            labels = np.where(outlier_labels == -1, 'outlier', 'normal')
            fig = px.scatter(X, x=x_columns[0], y=x_columns[1], color=labels,
                             color_discrete_map={'normal': 'blue', 'outlier': 'red'},
                             title="Outlier detection results")
            st.plotly_chart(fig)
    # Prediction
    if st.checkbox("Predict on new data"):
        st.write("Enter new values for the independent variables:")
        new_data = {}
        for col in x_columns:
            new_data[col] = st.number_input(f"Value for {col}")
        new_df = pd.DataFrame([new_data])
        if regression_method == "OLS":
            # has_constant='add' is required here: with a single row every
            # column looks constant, so add_constant would otherwise skip the
            # intercept column
            new_df_sm = sm.add_constant(new_df, has_constant='add')
            prediction = model.predict(new_df_sm)
        else:
            new_df_scaled = scaler.transform(new_df)
            prediction = model.predict(new_df_scaled)

        st.write(f"Predicted value: {prediction[0]:.4f}")
    # Model comparison
    if st.checkbox("Compare regression models"):
        models = {
            "OLS": LinearRegression(),
            "Ridge": Ridge(),
            "Lasso": Lasso(),
            "ElasticNet": ElasticNet()
        }
        results = []
        # use fresh names so the fitted model and predictions above are not
        # clobbered by the comparison loop
        for name, candidate in models.items():
            candidate.fit(X_train_scaled, y_train)
            y_pred_cmp = candidate.predict(X_test_scaled)
            results.append({
                "Model": name,
                "MSE": mean_squared_error(y_test, y_pred_cmp),
                "R²": r2_score(y_test, y_pred_cmp)
            })
        results_df = pd.DataFrame(results)
        st.write("Comparison of the models:")
        st.write(results_df)
        # note: MSE and R² live on different scales, so the grouped bars are
        # only meaningful for within-metric comparison across models
        fig = go.Figure()
        fig.add_trace(go.Bar(x=results_df['Model'], y=results_df['MSE'], name='MSE'))
        fig.add_trace(go.Bar(x=results_df['Model'], y=results_df['R²'], name='R²'))
        fig.update_layout(title='Model performance comparison', barmode='group')
        st.plotly_chart(fig)
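
# Editor's sketch (not in the original app): the comparison above uses default
# hyperparameters, which can be unfair to the regularized models. A small grid
# search per model gives a fairer comparison; GridSearchCV is standard
# scikit-learn, and the grid values below are illustrative assumptions.
def tuned_score(estimator, param_grid, X_train_scaled, y_train, X_test_scaled, y_test):
    from sklearn.model_selection import GridSearchCV
    grid = GridSearchCV(estimator, param_grid, cv=5, scoring="r2")
    grid.fit(X_train_scaled, y_train)
    return grid.best_params_, r2_score(y_test, grid.predict(X_test_scaled))

# example: tuned_score(Ridge(), {"alpha": [0.01, 0.1, 1.0, 10.0]}, ...)
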
# Standardization
def standardize_data(df):
    numeric_columns = df.select_dtypes(include=['float64', 'int64']).columns
    scaler = StandardScaler()
    df[numeric_columns] = scaler.fit_transform(df[numeric_columns])
    st.write("Data standardized")
    return df
# Log transform
def log_transform(df):
    numeric_columns = df.select_dtypes(include=['float64', 'int64']).columns
    for column in numeric_columns:
        # only strictly positive columns can be log-transformed safely
        if df[column].min() > 0:
            df[column] = np.log(df[column])
    st.write("Applied log transform to strictly positive columns")
    return df
# One-hot encoding
def one_hot_encode(df):
    categorical_columns = df.select_dtypes(include=['object']).columns
    df = pd.get_dummies(df, columns=categorical_columns)
    st.write("Categorical variables one-hot encoded")
    return df
# Binarization
def binarize_data(df):
    numeric_columns = df.select_dtypes(include=['float64', 'int64']).columns
    for column in numeric_columns:
        threshold = st.number_input(f"Binarization threshold for {column}", value=df[column].mean())
        df[f"{column}_binarized"] = (df[column] > threshold).astype(int)
    st.write("Numeric columns binarized")
    return df
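# Editor's note (usage sketch, not in the original app): these helpers modify
# the DataFrame they receive and return it, so they can be chained. Order
# matters: log_transform only touches strictly positive columns, so it should
# run before standardize_data, which centers values around zero, e.g.:
#
#     df = one_hot_encode(standardize_data(log_transform(df)))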
# Download the processed data
def download_processed_data(df):
    csv = df.to_csv(index=False)
    b64 = base64.b64encode(csv.encode()).decode()
    href = f'<a href="data:file/csv;base64,{b64}" download="processed_data.csv">Download the processed CSV file</a>'
    st.markdown(href, unsafe_allow_html=True)
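
# Editor's sketch (not in the original app): newer Streamlit releases provide
# st.download_button, which avoids the hand-rolled base64 link above; the API
# is assumed to be available in the installed version.
def download_processed_data_v2(df):
    st.download_button(
        label="Download the processed CSV file",
        data=df.to_csv(index=False).encode("utf-8"),
        file_name="processed_data.csv",
        mime="text/csv",
    )
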
if __name__ == "__main__":
    main()
