Assign02: Categorical Variables

One of the main ways of working with categorical variables is 0/1 (dummy) encoding. In this technique, you create a new column for every level of the categorical variable. The advantages of this approach include:

  1. Each level can have a different influence on the response.
  2. You do not impose a rank ordering on the categories.
  3. The results are easier to interpret than with other encodings.

The main disadvantage of this approach is that you introduce a large number of effects into your model. If you have many categorical variables, or categorical variables with many levels, but not a large sample size, you might not be able to estimate the impact of each of these variables on your response variable. A common rule of thumb suggests 10 data points for each variable you add to your model, that is, 10 rows for each column. This is a reasonable lower bound, but the larger your sample (assuming it is representative), the better.
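To make the 0/1 encoding concrete before applying it to the survey data, here is a minimal sketch (the column name and values are invented for illustration):

import pandas as pd

# A three-level categorical column becomes three 0/1 columns, one per level.
# (Recent pandas versions return booleans by default; shown here as 0/1.)
toy = pd.DataFrame({'color': ['red', 'blue', 'green', 'blue']})
pd.get_dummies(toy['color'])
#    blue  green  red
# 0     0      0    1
# 1     1      0    0
# 2     0      1    0
# 3     1      0    0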

Let’s try adding dummy variables for the categorical variables to the model and compare it with the original model that used only quantitative variables.

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error
import CatVar as t
import seaborn as sns
%matplotlib inline

df = pd.read_csv('./survey_results_public.csv')
df.head()

#Only use quant variables and drop any rows with missing values
num_vars = df[['Salary', 'CareerSatisfaction', 'HoursPerWeek', 'JobSatisfaction', 'StackOverflowSatisfaction']]

#Drop the rows with missing salaries
drop_sal_df = num_vars.dropna(subset=['Salary'], axis=0)

# Mean function
fill_mean = lambda col: col.fillna(col.mean())
# Fill the mean
fill_df = drop_sal_df.apply(fill_mean, axis=0)

#Split into explanatory and response variables
X = fill_df[['CareerSatisfaction', 'HoursPerWeek', 'JobSatisfaction', 'StackOverflowSatisfaction']]
y = fill_df['Salary']

#Split into train and test
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = .30, random_state=42) 

lm_model = LinearRegression() # Instantiate (the normalize argument was removed in scikit-learn 1.2)
lm_model.fit(X_train, y_train) #Fit
        
#Predict and score the model
y_test_preds = lm_model.predict(X_test) 
"The r-squared score for the model using only quantitative variables was {} on {} values.".format(r2_score(y_test, y_test_preds), len(y_test))

Question 1

  1. Use the df dataframe. Identify the columns that are categorical in nature. How many of the columns are considered categorical? Use the reference here if you get stuck.
cat_df = df.select_dtypes(include=['object']) # Subset to a dataframe only holding the categorical columns

# Print how many categorical columns are in the dataframe - should be 147
cat_df.shape[1]


# Test your dataframe matches the solution
t.cat_df_check(cat_df)

Question 2

  1. Use cat_df and the cells below to fill in the dictionary below with the correct value for each statement.

# Cell for your work here
# isnull().sum() counts the missing values per column; dividing by the number
# of rows gives the proportion of each column that is missing.
np.sum(cat_df.isnull().sum()/cat_df.shape[0] == 0)   # columns with no missing values
np.sum(cat_df.isnull().sum()/cat_df.shape[0] > .5)   # columns with more than half missing
np.sum(cat_df.isnull().sum()/cat_df.shape[0] > .75)  # columns with more than 75% missing

# Provide the value for each key as an `integer` that answers the question

cat_df_dict = {'the number of columns with no missing values': 6, 
               'the number of columns with more than half of the column missing': 49,
               'the number of columns with more than 75% of the column missing': 13
}

# Check your dictionary results
t.cat_df_dict_check(cat_df_dict)

Question 3

3. For each of the categorical variables, we now need to create dummy columns. However, as we saw above, there are a lot of missing values in the current set of categorical columns. So, you might be wondering: what happens when you dummy a column that has missing values?

The documentation for creating dummy variables in pandas is available here, but we can also just put this into practice to see what happens.

First, run the cell below to create a dataset that you will use before moving to the full stackoverflow data.

After you have created dummy_var_df, use the additional cells to fill in the sol_3_dict with the correct variables that match each key.


dummy_var_df = pd.DataFrame({'col1': ['a', 'a', 'b', 'b', 'a', np.nan, 'b', np.nan],
                             'col2': [1, np.nan, 3, np.nan, 5, 6, 7, 8]  # a numeric column with some missing values
})
dummy_var_df


# Use this cell to write whatever code you need.
pd.get_dummies(dummy_var_df['col1'])

which produces:

   a  b
0  1  0
1  1  0
2  0  1
3  0  1
4  1  0
5  0  0
6  0  1
7  0  0

a = 1
b = 2
c = 3
d = 'col1'
e = 'col2'
f = 'the rows with NaNs are dropped by default'
g = 'the NaNs are always encoded as 0'


sol_3_dict = {'Which column should you create a dummy variable for?': d,
              'When you use the default settings for creating dummy variables, how many are created?': b,
              'What happens with the nan values?': g 
             }

# Check your dictionary against the solution
t.sol_3_dict_check(sol_3_dict)

Question 4

4. Notice, you can also use get_dummies to encode NaN values as their own dummy-coded column using the dummy_na argument. These NaN values are often informative, but you do not capture that information if they are left as 0 in every column of your model.

Create a new encoding for col1 of dummy_var_df that provides dummy columns not only for each level, but also for the missing values. Store the 3 resulting dummy columns in dummy_cols_df and check your solution against ours.

dummy_cols_df = pd.get_dummies(dummy_var_df['col1'], dummy_na=True) #Create the three dummy columns for dummy_var_df

# Look at your result
dummy_cols_df

With dummy_na=True, the missing values get their own column:

   a  b  NaN
0  1  0    0
1  1  0    0
2  0  1    0
3  0  1    0
4  1  0    0
5  0  0    1
6  0  1    0
7  0  0    1

# Check against the solution
t.dummy_cols_df_check(dummy_cols_df)

Question 5

5. We could use either of the above to begin creating an X matrix that would (potentially) allow us to predict better than just the numeric columns we have been using thus far.

First, complete create_dummy_df. Use the instructions in the docstring to assist as necessary.


def create_dummy_df(df, cat_cols, dummy_na):
    '''
    INPUT:
    df - pandas dataframe with categorical variables you want to dummy
    cat_cols - list of strings that are associated with names of the categorical columns
    dummy_na - Bool holding whether you want to dummy NA vals of categorical columns or not
    
    OUTPUT:
    df - a new dataframe that has the following characteristics:
            1. contains all columns that were not specified as categorical
            2. removes all the original columns in cat_cols
            3. dummy columns for each of the categorical columns in cat_cols
            4. if dummy_na is True - it also contains dummy columns for the NaN values
    '''
    for col in cat_cols:
        try:
            # For each categorical column, append its dummy columns and drop the original.
            # drop_first=True drops one level per column to avoid perfect collinearity.
            df = pd.concat([df.drop(col, axis=1),
                            pd.get_dummies(df[col], prefix=col, prefix_sep='_',
                                           drop_first=True, dummy_na=dummy_na)],
                           axis=1)
        except Exception:
            # Skip any column that cannot be dummied (e.g., one not present in df)
            continue
    return df
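As a quick sanity check, here is a usage sketch on the toy data from Question 3 (assuming dummy_var_df is still defined):

# col1 is replaced by its dummy columns: drop_first=True drops the 'a' level,
# keeping only 'col1_b' plus the NaN indicator, while the numeric col2 passes
# through untouched. (Values shown as 0/1; recent pandas returns booleans.)
create_dummy_df(dummy_var_df, ['col1'], dummy_na=True)
#    col2  col1_b  col1_nan
# 0   1.0       0         0
# 1   NaN       0         0
# 2   3.0       1         0
# ...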

#Dropping where the salary has missing values
df = df.dropna(subset=['Salary'], axis=0)

#Pull a list of the column names of the categorical variables
cat_df = df.select_dtypes(include=['object'])
cat_cols_lst = cat_df.columns

df_new = create_dummy_df(df, cat_cols_lst, dummy_na=False) #Use your newly created function

# Show shape to assure it has a shape of (5009, 11938)
print(df_new.shape)

df_new.head()
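Notice that dummying every categorical column produced 11,938 columns for only 5,009 rows, far short of the 10-rows-per-column rule of thumb from the start of this notebook. Keep that in mind when you look at the scores in Question 6.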

Question 6

6. Use the document string below to complete the function. Then test your function against the solution.

def clean_fit_linear_mod(df, response_col, cat_cols, dummy_na, test_size=.3, rand_state=42):
    '''
    INPUT:
    df - a dataframe holding all the variables of interest
    response_col - a string holding the name of the column 
    cat_cols - list of strings that are associated with names of the categorical columns
    dummy_na - Bool holding whether you want to dummy NA vals of categorical columns or not
    test_size - a float between [0,1] about what proportion of data should be in the test dataset
    rand_state - an int that is provided as the random state for splitting the data into training and test 
    
    OUTPUT:
    test_score - float - r2 score on the test data
    train_score - float - r2 score on the training data
    lm_model - model object from sklearn
    X_train, X_test, y_train, y_test - output from sklearn train test split used for optimal model
    
    Your function should:
    1. Drop the rows with missing response values
    2. Drop columns with NaN for all the values
    3. Use create_dummy_df to dummy categorical columns
    4. Fill the mean of the column for any missing values 
    5. Split your data into an X matrix and a response vector y
    6. Create training and test sets of data
    7. Instantiate a LinearRegression model
    8. Fit your model to the training data
    9. Predict the response for the training data and the test data
    10. Obtain an rsquared value for both the training and test data
    '''
    #Drop the rows with missing response values
    df  = df.dropna(subset=[response_col], axis=0)

    #Drop columns with all NaN values
    df = df.dropna(how='all', axis=1)

    #Dummy categorical variables
    df = create_dummy_df(df, cat_cols, dummy_na)

    # Mean function
    fill_mean = lambda col: col.fillna(col.mean())
    # Fill the mean
    df = df.apply(fill_mean, axis=0)

    #Split into explanatory and response variables
    X = df.drop(response_col, axis=1)
    y = df[response_col]

    #Split into train and test
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=test_size, random_state=rand_state)

    lm_model = LinearRegression() # Instantiate (the normalize argument was removed in scikit-learn 1.2)
    lm_model.fit(X_train, y_train) #Fit

    #Predict using your model
    y_test_preds = lm_model.predict(X_test)
    y_train_preds = lm_model.predict(X_train)

    #Score using your model
    test_score = r2_score(y_test, y_test_preds)
    train_score = r2_score(y_train, y_train_preds)

    return test_score, train_score, lm_model, X_train, X_test, y_train, y_test


#Test your function with the above dataset
test_score, train_score, lm_model, X_train, X_test, y_train, y_test = clean_fit_linear_mod(df_new, 'Salary', cat_cols_lst, dummy_na=False)

#Print training and testing score 
#your training should be about 1, 
#while the test should be about .45
print("The rsquared on the training data was {}.  The rsquared on the test data was {}.".format(train_score, test_score))

End of assignment notes.