Coursera | Applied Data Science with Python Specialization | Introduction to Data Science in Python

These are study notes for the Coursera specialization Applied Data Science with Python, offered by the University of Michigan. They record all assignment code for Course One: Introduction to Data Science in Python; every solution has passed the autograder with a score of 100/100.

Notes:

1. Since the author is still learning, some of the code may not be fully optimized;

2. The code blocks read data files via a directory + filename pattern, so that readers can download the datasets and run and verify the results in their own Jupyter Notebook;

3. The code blocks avoid re-importing modules such as Pandas, NumPy, and SciPy; after a Jupyter Notebook kernel restart, be aware that this can easily lead to undefined-variable errors (a consolidated import cell follows these notes);

4. Some outputs are too long to display in full, so only an excerpt is shown; these are marked as partial next to the output.
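To address note 3, here is a consolidated import cell covering every module used in this post; running it once after each kernel restart avoids NameError on pd, np, stats, and re:

# All imports used across the four assignments below
import re
import pandas as pd
import numpy as np
import scipy.stats as stats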

Table of Contents

Week 1 Fundamentals of Data Manipulation with Python

Assignment 1

Part A

Part B

Part C

Week 2 Basic Data Processing with Pandas

Assignment 2

Question 1

Question 2

Question 3

Question 4

Week 3 More Data Processing with Pandas

Assignment 3

Question 1

Question 2

Question 3

Question 4

Question 5

Question 6

Question 7

Question 8

Question 9

Question 10

Question 11

Question 12

Question 13

Week 4 Beyond Data Manipulation

Assignment 4

Question 1

Question 2

Question 3

Question 4

Question 5

Week 1 Fundamentals of Data Manipulation with Python

Assignment 1

For this assignment you are welcome to use other regex resources, such as regex "cheat sheets" you find on the web.

Part A

Find a list of all of the names in the following string using regex.

import re
def names():
    simple_string = """Amy is 5 years old, and her sister Mary is 2 years old. 
    Ruth and Peter, their parents, have 3 kids."""

    names = re.findall('([A-Z].+?[a-z])[, ]', simple_string)
    return names

Output:

['Amy', 'Mary', 'Ruth', 'Peter']
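As an aside, for this particular string a simpler pattern would also work, since the four names are the only capitalized words; this is a sketch and would over-match in text containing other capitalized words:

import re

simple_string = """Amy is 5 years old, and her sister Mary is 2 years old. 
Ruth and Peter, their parents, have 3 kids."""

# Match any capitalized word; sufficient here because the names are the
# only capitalized tokens in this string
re.findall(r'[A-Z][a-z]+', simple_string)  # ['Amy', 'Mary', 'Ruth', 'Peter']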

Part B

The dataset file in assets/grades.txt contains a line separated list of people with their grade in a class. Create a regex to generate a list of just those students who received a B in the course.

def grades():
    directory = 'assets/'
    filename = 'grades.txt'
    with open(directory + filename, 'r') as file:
        grades = file.read()
        names = re.findall('([A-Z].+): B', grades)
    return names

Output:

['Bell Kassulke', 'Simon Loidl', 'Elias Jovanovic', 'Hakim Botros', 'Emilie Lorentsen', 'Jake Wood', 'Fatemeh Akhtar', 'Kim Weston', 'Yasmin Dar', 'Viswamitra Upandhye', 'Killian Kaufman', 'Elwood Page', 'Elodie Booker', 'Adnan Chen', 'Hank Spinka', 'Hannah Bayer']

Part C

Consider the standard web log file in assets/logdata.txt. This file records the access a user makes when visiting a web page (like this one!). Each line of the log has the following items:

  • a host (e.g., '146.204.224.152')
  • a user_name (e.g., 'feest6811'; note: sometimes the user name is missing! In this case, use '-' as the value for the username.)
  • the time a request was made (e.g., '21/Jun/2019:15:45:24 -0700')
  • the post request type (e.g., 'POST /incentivize HTTP/1.1'; note: not everything is a POST!)

Your task is to convert this into a list of dictionaries, where each dictionary looks like the following:

example_dict = {"host":"146.204.224.152", 
                "user_name":"feest6811", 
                "time":"21/Jun/2019:15:45:24 -0700",
                "request":"POST /incentivize HTTP/1.1"}
def logs():
    directory = 'assets/'
    filename = 'logdata.txt'
    with open(directory + filename, 'r') as file:
        logdata = file.read()
    lines = logdata.split("\n")
    dataset = []
    for line in lines:
        if len(line) == 0: continue
        log = dict()
        log["host"] = re.findall('[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+', line)[0]
        log["user_name"] = re.findall('\- (.*) \[', line)[0]
        log["time"] = re.findall('\[(.*)\]', line)[0]
        log["request"] = re.findall('"([A-Z]+ /.+)"', line)[0]
        dataset.append(log)
    return dataset

Output (partial):

[{'host': '146.204.224.152', 'user_name': 'feest6811', 'time': '21/Jun/2019:15:45:24 -0700', 'request': 'POST /incentivize HTTP/1.1'}, {'host': '197.109.77.178', 'user_name': 'kertzmann3129', 'time': '21/Jun/2019:15:45:25 -0700', 'request': 'DELETE /virtual/solutions/target/web+services HTTP/2.0'}, {'host': '156.127.178.177', 'user_name': 'okuneva5222', 'time': '21/Jun/2019:15:45:27 -0700', 'request': 'DELETE /interactive/transparent/niches/revolutionize HTTP/1.1'}, {'host': '100.32.205.59', 'user_name': 'ortiz8891', 'time': '21/Jun/2019:15:45:28 -0700', 'request': 'PATCH /architectures HTTP/1.0'}, ...]
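As an aside, the four findall calls per line can be collapsed into a single compiled pattern with named groups. This is a sketch assuming the same log layout as above (host, '-', user, [time], "request"):

import re

# One pattern with named groups; groupdict() yields the same dictionary
# shape as the per-field findall approach above
LOG_PATTERN = re.compile(
    r'(?P<host>[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+) '
    r'- (?P<user_name>.*) '
    r'\[(?P<time>.*)\] '
    r'"(?P<request>[A-Z]+ /.+)"')

def logs_compact(path='assets/logdata.txt'):
    with open(path, 'r') as file:
        lines = file.read().split("\n")
    dataset = []
    for line in lines:
        match = LOG_PATTERN.search(line)
        if match:  # skip blank or malformed lines
            dataset.append(match.groupdict())
    return dataset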

Week 2 Basic Data Processing with Pandas

Assignment 2

For this assignment you'll be looking at 2017 data on immunizations from the CDC. Your datafile for this assignment is in assets/NISPUF17.csv. A data users guide for this, which you'll need to map the variables in the data to the questions being asked, is available at assets/NIS-PUF17-DUG.pdf. (Note: you may have to go to your Jupyter tree (click on the Coursera image) and navigate to the assignment 2 assets folder to see this PDF file.)

Question 1

Write a function called proportion_of_education which returns the proportion of children in the dataset who had a mother with the education levels equal to less than high school (<12), high school (12), more than high school but not a college graduate (>12) and college degree.

This function should return a dictionary in the form of (use the correct numbers, do not round numbers):

    {"less than high school":0.2,
    "high school":0.4,
    "more than high school but not college":0.2,
    "college":0.2}
import pandas as pd
def proportion_of_education():
    directory = 'assets/'
    filename = 'NISPUF17.csv'
    df = pd.read_csv(directory + filename, index_col=0)
    set1 = (df.EDUC1 == 1).sum()
    set2 = (df.EDUC1 == 2).sum()
    set3 = (df.EDUC1 == 3).sum()
    set4 = (df.EDUC1 == 4).sum()
    total = len(df.EDUC1.dropna())
        
    result = {"less than high school": set1 / total,
             "high school": set2 / total,
             "more than high school but not college": set3 / total,
             "college": set4 / total}
    return result

Output:

{'less than high school': 0.10202002459160373, 'high school': 0.172352011241876, 'more than high school but not college': 0.24588090637625154, 'college': 0.47974705779026877}

Question 2

Let's explore the relationship between being fed breastmilk as a child and getting a seasonal influenza vaccine from a healthcare provider. Return a tuple of the average number of influenza vaccines for those children we know received breastmilk as a child and those we know did not.

This function should return a tuple in the form (use the correct numbers):

(2.5, 0.1)
def average_influenza_doses():
    directory = 'assets/'
    filename = 'NISPUF17.csv'
    df = pd.read_csv(directory + filename, index_col=0)
    average1 = df[df['CBF_01']== 1]['P_NUMFLU'].dropna().mean()
    average2 = df[df['CBF_01']== 2]['P_NUMFLU'].dropna().mean()
    return (average1,average2)

Output:

(1.8799187420058687, 1.5963945918878317)

Question 3

It would be interesting to see if there is any evidence of a link between vaccine effectiveness and sex of the child. Calculate the ratio of the number of children who contracted chickenpox but were vaccinated against it (at least one varicella dose) versus those who were vaccinated but did not contract chicken pox. Return results by sex.

This function should return a dictionary in the form of (use the correct numbers):

    {"male":0.2,
    "female":0.4}

Note: To aid in verification, the chickenpox_by_sex()['female'] value the autograder is looking for starts with the digits 0.0077.

def chickenpox_by_sex():
    directory = 'assets/'
    filename = 'NISPUF17.csv'
    df = pd.read_csv(directory + filename, index_col=0)
    male_a = df[(df.SEX == 1) & (df.P_NUMVRC >= 1) & (df.HAD_CPOX == 1)]
    male_b = df[(df.SEX == 1) & (df.P_NUMVRC >= 1) & (df.HAD_CPOX == 2)]
    male = len(male_a) / len(male_b)
    
    female_a = df[(df.SEX == 2) & (df.P_NUMVRC >= 1) & (df.HAD_CPOX == 1)]
    female_b = df[(df.SEX == 2) & (df.P_NUMVRC >= 1) & (df.HAD_CPOX == 2)]
    female = len(female_a) / len(female_b)
    
    return {"male":male,"female":female}

Output:

{'male': 0.009675583380762664, 'female': 0.0077918259335489565}
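As an aside, the same two ratios can be computed with a single groupby on SEX; a sketch reusing the same columns:

import pandas as pd

# Sketch: among children with at least one varicella dose, the ratio of
# those who had chickenpox (HAD_CPOX == 1) to those who did not (== 2),
# split by SEX (1 = male, 2 = female)
df = pd.read_csv('assets/NISPUF17.csv', index_col=0)
vaccinated = df[df.P_NUMVRC >= 1]
ratios = (vaccinated.groupby("SEX")["HAD_CPOX"]
          .apply(lambda s: (s == 1).sum() / (s == 2).sum()))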

Question 4

A correlation is a statistical relationship between two variables. If we wanted to know if vaccines work, we might look at the correlation between the use of the vaccine and whether it results in prevention of the infection or disease [1]. In this question, you are to see if there is a correlation between having had the chicken pox and the number of chickenpox vaccine doses given (varicella).

Some notes on interpreting the answer. The had_chickenpox_column is either 1 (for yes) or 2 (for no), and the num_chickenpox_vaccine_column is the number of doses a child has been given of the varicella vaccine. A positive correlation (e.g., corr > 0) means that an increase in had_chickenpox_column (which means more no’s) would also increase the values of num_chickenpox_vaccine_column (which means more doses of vaccine). If there is a negative correlation (e.g., corr < 0), it indicates that having had chickenpox is related to an increase in the number of vaccine doses.

Also, pval is the probability that a correlation between had_chickenpox_column and num_chickenpox_vaccine_column at least as large as the one observed would occur by chance. A small pval means that the observed correlation is highly unlikely to have occurred by chance. In this case, pval should be very small (it will end in e-18, indicating a very small number).

[1] This isn’t really the full picture, since we are not looking at when the dose was given. It’s possible that children had chickenpox and then their parents went to get them the vaccine. Does this dataset have the data we would need to investigate the timing of the dose?

import scipy.stats as stats
import numpy as np
def corr_chickenpox():
    directory = 'assets/'
    filename = 'NISPUF17.csv'
    df = pd.read_csv(directory + filename, index_col=0)
    df = df.loc[:,["HAD_CPOX","P_NUMVRC"]]
    df["HAD_CPOX"] = df["HAD_CPOX"].replace({99: np.nan, 77: np.nan})
    df = df.dropna()
    corr, pval=stats.pearsonr(df["HAD_CPOX"], df["P_NUMVRC"])
    return corr

Output:

0.07044873460147986
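Since the prompt also discusses pval, note that the same stats.pearsonr call already returns it as the second value; a sketch repeating the cleaning steps above to print both:

import numpy as np
import pandas as pd
import scipy.stats as stats

df = pd.read_csv('assets/NISPUF17.csv', index_col=0)
df = df.loc[:, ["HAD_CPOX", "P_NUMVRC"]]
df["HAD_CPOX"] = df["HAD_CPOX"].replace({99: np.nan, 77: np.nan})
df = df.dropna()

# The prompt says pval should be on the order of 1e-18
corr, pval = stats.pearsonr(df["HAD_CPOX"], df["P_NUMVRC"])
print(corr, pval)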

Week 3 More Data Processing with Pandas

Assignment 3

This assignment requires more individual learning than the last one did - you are encouraged to check out the pandas documentation to find functions or methods you might not have used yet, or ask questions on Stack Overflow and tag them as pandas and python related. All questions are worth the same number of points except question 1, which is worth 17% of the assignment grade.

Note: Questions 3-13 rely on your question 1 answer.

import pandas as pd
import numpy as np

# Filter all warnings. If you would like to see the warnings, comment out the two lines below.
import warnings
warnings.filterwarnings('ignore')

Question 1

Load the energy data from the file assets/Energy Indicators.xls, which is a list of indicators of energy supply and renewable electricity production from the United Nations for the year 2013, and should be put into a DataFrame with the variable name of Energy.

Keep in mind that this is an Excel file, and not a comma separated values file. Also, make sure to exclude the footer and header information from the datafile. The first two columns are unnecessary, so you should get rid of them, and you should change the column labels so that the columns are:

['Country', 'Energy Supply', 'Energy Supply per Capita', '% Renewable']

Convert Energy Supply to gigajoules (Note: there are 1,000,000 gigajoules in a petajoule). For all countries which have missing data (e.g. data with "...") make sure this is reflected as np.NaN values.

Rename the following list of countries (for use in later questions):

{"Republic of Korea": "South Korea", "United States of America": "United States", "United Kingdom of Great Britain and Northern Ireland": "United Kingdom", "China, Hong Kong Special Administrative Region": "Hong Kong"}

There are also several countries with numbers and/or parentheses in their names. Be sure to remove these, e.g. 'Bolivia (Plurinational State of)' should be 'Bolivia', and 'Switzerland17' should be 'Switzerland'.

Next, load the GDP data from the file assets/world_bank.csv, which is a csv containing countries' GDP from 1960 to 2015 from World Bank. Call this DataFrame GDP.

Make sure to skip the header, and rename the following list of countries:

{"Korea, Rep.": "South Korea", "Iran, Islamic Rep.": "Iran", "Hong Kong SAR, China": "Hong Kong"}

Finally, load the Scimago Journal and Country Rank data for Energy Engineering and Power Technology from the file assets/scimagojr-3.xlsx, which ranks countries based on their journal contributions in the aforementioned area. Call this DataFrame ScimEn.

Join the three datasets: GDP, Energy, and ScimEn into a new dataset (using the intersection of country names). Use only the last 10 years (2006-2015) of GDP data and only the top 15 countries by Scimagojr 'Rank' (Rank 1 through 15).

The index of this DataFrame should be the name of the country, and the columns should be ['Rank', 'Documents', 'Citable documents', 'Citations', 'Self-citations', 'Citations per document', 'H index', 'Energy Supply', 'Energy Supply per Capita', '% Renewable', '2006', '2007', '2008', '2009', '2010', '2011', '2012', '2013', '2014', '2015'].

This function should return a DataFrame with 20 columns and 15 entries, and the rows of the DataFrame should be sorted by "Rank".

def answer_one():
    # YOUR CODE HERE
    directory = 'assets/'
    filename1 = 'Energy Indicators.xls'
    filename2 = 'world_bank.csv'
    filename3 ='scimagojr-3.xlsx'
    # Processing DataFrame Energy
    # Step 1: Extract Raw DataFrame Energy from .xls file
    Energy = pd.read_excel(directory + filename1, "Energy", header=17, 
                       usecols=[2, 3, 4, 5], names=['Country', 'Energy Supply', 'Energy Supply per Capita', '% Renewable'], 
                       index_col=0).dropna()
    # Step 2: Reset the data of Energy
    Energy[list(Energy.columns)] = Energy[list(Energy.columns)].replace({"...": np.nan})
    Energy['Energy Supply'] = Energy['Energy Supply'] * 1000000
    # Step 3: Clean the index column
    Energy.index = Energy.index.str.replace(r"\(.+\)|[0-9]+", "", regex=True)
    Energy.index = [country.strip() for country in list(Energy.index)]
    Energy = Energy.rename({"Republic of Korea": "South Korea",
               "United States of America": "United States",
               "United Kingdom of Great Britain and Northern Ireland": "United Kingdom",
               "China, Hong Kong Special Administrative Region": "Hong Kong"}, 
              axis="index")
    # Processing DataFrame GDP
    # Step 1: Extract Raw DataFrame GDP from .csv file (exclude the last blank column)
    GDP = pd.read_csv(directory + filename2, header=4, index_col=0)
    # Step 2: Reset the index column
    GDP = GDP.rename({"Korea, Rep.": "South Korea", 
            "Iran, Islamic Rep.": "Iran",
            "Hong Kong SAR, China": "Hong Kong"},
           axis="index")
    # Processing DataFrame ScimEn
    ScimEn = pd.read_excel(directory + filename3, index_col=1)
    
    # Merging three DataFrame
    df = pd.merge(Energy, GDP, how='inner', left_index=True, right_index=True)
    df = pd.merge(df, ScimEn, how='inner', left_index=True, right_index=True)
    
    # Extract specific columns from df
    columns = ['Rank', 'Documents', 'Citable documents', 'Citations', 'Self-citations', 'Citations per document', 'H index', 
           'Energy Supply', 'Energy Supply per Capita', '% Renewable', 
           '2006', '2007', '2008', '2009', '2010', '2011', '2012', '2013', '2014', '2015']
    df = df[columns]
    
    # Sort by the values of Rank and select the top 15
    df = df.sort_values(by="Rank").iloc[:15,:]
    return df

Output (partial):

                    Rank  Documents  Citable documents  Citations  \ ...
China                  1     127050             126767     597237    ...
United States          2      96661              94747     792274    ...
Japan                  3      30504              30287     223024    ...
United Kingdom         4      20944              20357     206091    ...
Russian Federation     5      18534              18301      34266    ...
Canada                 6      17899              17620     215003    ...
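As an aside, the "..." placeholders can also be converted at read time with read_excel's na_values parameter. The footer rows would then need skipfooter instead of the dropna() used above, so that countries whose data is NaN are kept; this is a sketch, and the footer length of 38 rows is an assumption to check against the actual file:

import pandas as pd

# Sketch: treat "..." as NaN while reading; skipfooter (38 is assumed,
# not verified) replaces the dropna() trick above
Energy = pd.read_excel('assets/Energy Indicators.xls', 'Energy', header=17,
                       usecols=[2, 3, 4, 5],
                       names=['Country', 'Energy Supply',
                              'Energy Supply per Capita', '% Renewable'],
                       na_values='...', skipfooter=38, index_col=0)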

Question 2

The previous question joined three datasets then reduced this to just the top 15 entries. When you joined the datasets, but before you reduced this to the top 15 items, how many entries did you lose?

This function should return a single number.

def answer_two():
    # YOUR CODE HERE
    # Same code as answer one
    directory = 'assets/'
    filename1 = 'Energy Indicators.xls'
    filename2 = 'world_bank.csv'
    filename3 ='scimagojr-3.xlsx'
    Energy = pd.read_excel(directory + filename1, "Energy", header=17, 
                       usecols=[2, 3, 4, 5], names=['Country', 'Energy Supply', 'Energy Supply per Capita', '% Renewable'], 
                       index_col=0).dropna()
    
    Energy[list(Energy.columns)] = Energy[list(Energy.columns)].replace({"...": np.nan})
    Energy['Energy Supply'] = Energy['Energy Supply'] * 1000000
    
    Energy.index = Energy.index.str.replace(r"\(.+\)|[0-9]+", "", regex=True)
    Energy.index = [country.strip() for country in list(Energy.index)]
    Energy = Energy.rename({"Republic of Korea": "South Korea",
               "United States of America": "United States",
               "United Kingdom of Great Britain and Northern Ireland": "United Kingdom",
               "China, Hong Kong Special Administrative Region": "Hong Kong"}, 
              axis="index")
    
    GDP = pd.read_csv(directory + filename2, header=4, index_col=0)
    
    GDP = GDP.rename({"Korea, Rep.": "South Korea", 
            "Iran, Islamic Rep.": "Iran",
            "Hong Kong SAR, China": "Hong Kong"},
           axis="index")
    
    ScimEn = pd.read_excel(directory + filename3, index_col=1)
    
    # merge dataframe with inner method
    df1 = pd.merge(Energy, GDP, how='inner', left_index=True, right_index=True)
    df1 = pd.merge(df1, ScimEn, how='inner', left_index=True, right_index=True)
    
    # merge dataframe with outer method
    df2 = pd.merge(Energy, GDP, how='outer', left_index=True, right_index=True)
    df2 = pd.merge(df2, ScimEn, how='outer', left_index=True, right_index=True)
    
    return len(df2) - len(df1)

Output:

156

Question 3

What are the top 15 countries for average GDP over the last 10 years?

This function should return a Series named avgGDP with 15 countries and their average GDP sorted in descending order.

def answer_three():
    # YOUR CODE HERE
    df = answer_one()
    df["avgGDP"] = df.iloc[:, -10:].mean(1)
    avgGDP = df.sort_values(by="avgGDP", ascending=False)["avgGDP"]
    return avgGDP

Output:

United States         1.536434e+13
China                 6.348609e+12
Japan                 5.542208e+12
Germany               3.493025e+12
France                2.681725e+12
United Kingdom        2.487907e+12
Brazil                2.189794e+12
Italy                 2.120175e+12
India                 1.769297e+12
Canada                1.660647e+12
Russian Federation    1.565459e+12
Spain                 1.418078e+12
Australia             1.164043e+12
South Korea           1.106715e+12
Iran                  4.441558e+11
Name: avgGDP, dtype: float64

Question 4

By how much had the GDP changed over the 10 year span for the country with the 6th largest average GDP?

This function should return a single number.

def answer_four():
    # YOUR CODE HERE
    directory = 'assets/'
    filename2 = 'world_bank.csv'
    GDP = pd.read_csv(directory + filename2, header=4, index_col=0)
    sixth = list(answer_three().index)[5]
    previous = GDP.iloc[:,-10].loc[sixth]
    now = GDP.iloc[:,-1].loc[sixth]
    return now - previous

Output:

246702696075.3999

Question 5

What is the mean energy supply per capita?

This function should return a single number.

def answer_five():
    # YOUR CODE HERE
    df = answer_one()
    return df['Energy Supply per Capita'].mean()

Output:

157.6

Question 6

What country has the maximum % Renewable and what is the percentage?

This function should return a tuple with the name of the country and the percentage.

def answer_six():
    # YOUR CODE HERE
    df = answer_one()
    df = df.sort_values(by='% Renewable', ascending=False)
    country = list(df.index)[0]
    percentage = df["% Renewable"].iloc[0]
    return (country, percentage)

Output:

('Brazil', 69.64803)

Question 7

Create a new column that is the ratio of Self-Citations to Total Citations. What is the maximum value for this new column, and what country has the highest ratio?

This function should return a tuple with the name of the country and the ratio.

def answer_seven():
    # YOUR CODE HERE
    df = answer_one()
    df["Ratio"] = df["Self-citations"] / df["Citations"]
    df = df.sort_values(by="Ratio", ascending=False)
    country = list(df.index)[0]
    ratio = df.iloc[0,-1]
    return (country, ratio)

Output:

('China', 0.6893126179389422)

Question 8

Create a column that estimates the population using Energy Supply and Energy Supply per capita. What is the third most populous country according to this estimate?

This function should return the name of the country

def answer_eight():
    # YOUR CODE HERE
    directory = 'assets/'
    filename1 = 'Energy Indicators.xls'
    Energy = pd.read_excel(directory + filename1, "Energy", header=17, 
                       usecols=[2, 3, 4, 5], names=['Country', 'Energy Supply', 'Energy Supply per Capita', '% Renewable'], 
                       index_col=0).dropna()
    
    Energy[list(Energy.columns)] = Energy[list(Energy.columns)].replace({"...": np.nan})
    Energy['Energy Supply'] = Energy['Energy Supply'] * 1000000
    
    Energy.index = Energy.index.str.replace(r"\(.+\)|[0-9]+", "", regex=True)
    Energy = Energy.rename({"Republic of Korea": "South Korea",
               "United States of America": "United States",
               "United Kingdom of Great Britain and Northern Ireland": "United Kingdom",
               "China, Hong Kong Special Administrative Region": "Hong Kong"}, 
              axis="index")
    # Add the column of Population Estimate
    Energy["Population Estimate"] = Energy["Energy Supply"] / Energy["Energy Supply per Capita"]
    # Sort by Population Estimate
    Energy = Energy.sort_values(by="Population Estimate", ascending=False)
    # Return Values
    country = list(Energy.index)[2]
    return country

Output:

United States

Question 9

Create a column that estimates the number of citable documents per person. What is the correlation between the number of citable documents per capita and the energy supply per capita? Use the .corr() method (Pearson's correlation).

This function should return a single number.

(Optional: Use the built-in function plot9() to visualize the relationship between Energy Supply per Capita vs. Citable docs per Capita)

def answer_nine():
    # YOUR CODE HERE
    import scipy.stats as stats
    df = answer_one()
    df["Population"] = df["Energy Supply"] / df["Energy Supply per Capita"]
    df["Citable Documents Per Capita"] = df["Citable documents"] / df["Population"]
    df = df.loc[:,["Energy Supply per Capita","Citable Documents Per Capita"]].dropna()
    corr, pval=stats.pearsonr(df["Citable Documents Per Capita"], df["Energy Supply per Capita"])
    return corr

Output:

0.7940010435442942

Question 10

Create a new column with a 1 if the country's % Renewable value is at or above the median for all countries in the top 15, and a 0 if the country's % Renewable value is below the median.

This function should return a series named HighRenew whose index is the country name sorted in ascending order of rank.

def answer_ten():
    # YOUR CODE HERE
    df = answer_one()
    median = df["% Renewable"].median()
    df["HighRenew"] = np.where(df["% Renewable"] >= median, 1, 0)
    return df["HighRenew"]

Output:

China                 1
United States         0
Japan                 0
United Kingdom        0
Russian Federation    1
Canada                1
Germany               1
India                 0
France                1
South Korea           0
Italy                 1
Spain                 1
Iran                  0
Australia             0
Brazil                1
Name: HighRenew, dtype: int64

Question 11

Use the following dictionary to group the Countries by Continent, then create a DataFrame that displays the sample size (the number of countries in each continent bin), and the sum, mean, and std deviation for the estimated population of each country.

ContinentDict  = {'China':'Asia', 
                  'United States':'North America', 
                  'Japan':'Asia', 
                  'United Kingdom':'Europe', 
                  'Russian Federation':'Europe', 
                  'Canada':'North America', 
                  'Germany':'Europe', 
                  'India':'Asia',
                  'France':'Europe', 
                  'South Korea':'Asia', 
                  'Italy':'Europe', 
                  'Spain':'Europe', 
                  'Iran':'Asia',
                  'Australia':'Australia', 
                  'Brazil':'South America'}

This function should return a DataFrame with index named Continent ['Asia', 'Australia', 'Europe', 'North America', 'South America'] and columns ['size', 'sum', 'mean', 'std']

def answer_eleven():
    # YOUR CODE HERE
    # Initialize dataframe and continent dictionary
    df = answer_one()
    ContinentDict  = {'China':'Asia', 
                  'United States':'North America', 
                  'Japan':'Asia', 
                  'United Kingdom':'Europe', 
                  'Russian Federation':'Europe', 
                  'Canada':'North America', 
                  'Germany':'Europe', 
                  'India':'Asia',
                  'France':'Europe', 
                  'South Korea':'Asia', 
                  'Italy':'Europe', 
                  'Spain':'Europe', 
                  'Iran':'Asia',
                  'Australia':'Australia', 
                  'Brazil':'South America'}
    df = df.reset_index()
    
    # Add new columns
    df["Continent"] = df["index"].replace(ContinentDict)
    df["Population"] = df["Energy Supply"] / df["Energy Supply per Capita"]
    
    # Group by Population
    df = df.groupby("Continent").agg({"Population": [np.size, np.sum, np.mean, np.std]})
    return df.Population

Output:

              size           sum          mean           std
Continent                                                    
Asia            5.0  2.898666e+09  5.797333e+08  6.790979e+08
Australia       1.0  2.331602e+07  2.331602e+07           NaN
Europe          6.0  4.579297e+08  7.632161e+07  3.464767e+07
North America   2.0  3.528552e+08  1.764276e+08  1.996696e+08
South America   1.0  2.059153e+08  2.059153e+08           NaN
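As an aside, newer pandas versions deprecate passing NumPy functions to .agg; named aggregation produces the same table. A sketch assuming the Continent and Population columns built in answer_eleven:

# Sketch: equivalent aggregation with string function names (pandas >= 0.25)
result = df.groupby("Continent")["Population"].agg(
    size="size", sum="sum", mean="mean", std="std")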

Question 12

Cut % Renewable into 5 bins. Group Top15 by the Continent, as well as these new % Renewable bins. How many countries are in each of these groups?

This function should return a Series with a MultiIndex of Continent, then the bins for % Renewable. Do not include groups with no countries.

def answer_twelve():
    # YOUR CODE HERE
    df = answer_one()
    ContinentDict  = {'China':'Asia', 
                  'United States':'North America', 
                  'Japan':'Asia', 
                  'United Kingdom':'Europe', 
                  'Russian Federation':'Europe', 
                  'Canada':'North America', 
                  'Germany':'Europe', 
                  'India':'Asia',
                  'France':'Europe', 
                  'South Korea':'Asia', 
                  'Italy':'Europe', 
                  'Spain':'Europe', 
                  'Iran':'Asia',
                  'Australia':'Australia', 
                  'Brazil':'South America'}
    df["% Renewable"] = pd.cut(df["% Renewable"], 5)
    df = df.reset_index()
    df["Continent"] = df["index"].replace(ContinentDict)
    df = df.groupby(["Continent", "% Renewable"]).agg({"% Renewable": np.size})
    df = df.dropna()
    return df["% Renewable"]

Output:

Continent      % Renewable     
Asia           (2.212, 15.753]     4.0
               (15.753, 29.227]    1.0
Australia      (2.212, 15.753]     1.0
Europe         (2.212, 15.753]     1.0
               (15.753, 29.227]    3.0
               (29.227, 42.701]    2.0
North America  (2.212, 15.753]     1.0
               (56.174, 69.648]    1.0
South America  (56.174, 69.648]    1.0
Name: % Renewable, dtype: float64

Question 13

Convert the Population Estimate series to a string with thousands separator (using commas). Use all significant digits (do not round the results).

e.g. 12345678.90 -> 12,345,678.90

This function should return a series PopEst whose index is the country name and whose values are the population estimate string

def answer_thirteen():
    # YOUR CODE HERE
    df = answer_one()
    df["Population"] = df["Energy Supply"] / df["Energy Supply per Capita"]
    df["Population"] = df["Population"].apply(lambda x: format(x, ","))
    PopEst = df["Population"]
    return PopEst

Output:

China                 1,367,645,161.2903225
United States          317,615,384.61538464
Japan                  127,409,395.97315437
United Kingdom         63,870,967.741935484
Russian Federation            143,500,000.0
Canada                  35,239,864.86486486
Germany                 80,369,696.96969697
India                 1,276,730,769.2307692
France                  63,837,349.39759036
South Korea            49,805,429.864253394
Italy                  59,908,256.880733944
Spain                    46,443,396.2264151
Iran                    77,075,630.25210084
Australia              23,316,017.316017315
Brazil                 205,915,254.23728815
Name: Population, dtype: object
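As an aside, the same comma formatting can be written with str.format instead of the format built-in; a sketch assuming the DataFrame from answer_one:

# Sketch: equivalent to .apply(lambda x: format(x, ","))
PopEst = (df["Energy Supply"] / df["Energy Supply per Capita"]).map("{:,}".format)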

Week 4 Beyond Data Manipulation

Assignment 4

Description

In this assignment you must read in a file of metropolitan regions and associated sports teams from assets/wikipedia_data.html and answer some questions about each metropolitan region. Each of these regions may have one or more teams from the "Big 4": NFL (football, in assets/nfl.csv), MLB (baseball, in assets/mlb.csv), NBA (basketball, in assets/nba.csv) or NHL (hockey, in assets/nhl.csv). Please keep in mind that all questions are from the perspective of the metropolitan region, and that this file is the "source of authority" for the location of a given sports team. Thus teams which are commonly known by a different area (e.g. "Oakland Raiders") need to be mapped into the metropolitan region given (e.g. San Francisco Bay Area). This will require some human data understanding outside of the data you've been given (e.g. you will have to hand-code some names, and might need to google to find out where teams are)!

For each sport I would like you to answer the question: what is the win/loss ratio's correlation with the population of the city it is in? Win/Loss ratio refers to the number of wins over the number of wins plus the number of losses. Remember that you calculate the correlation with pearsonr, so you are going to send in two ordered lists of values: the populations from the wikipedia_data.html file and the win/loss ratio for a given sport in the same order. Average the win/loss ratios for those cities which have multiple teams of a single sport. Each sport is worth an equal amount (20% * 4 = 80%) of the grade for this assignment. You should only use data from year 2018 for your analysis -- this is important!

Notes

  1. Do not include data about the MLS or CFL in any of the work you are doing, we're only interested in the Big 4 in this assignment.
  2. I highly suggest that you first tackle the four correlation questions in order, as they are all similar and worth the majority of grades for this assignment. This is by design!
  3. It's fair game to talk with peers about high level strategy as well as the relationship between metropolitan areas and sports teams. However, do not post code solving aspects of the assignment (including things such as dictionaries mapping areas to teams, or regexes which will clean up names).
  4. There may be more teams than the assert statements test, remember to collapse multiple teams in one city into a single value!

Question 1

For this question, calculate the win/loss ratio's correlation with the population of the city it is in for the NHL using 2018 data.

import pandas as pd
import numpy as np
import scipy.stats as stats
import re

directory = 'assets/'
nhl_df=pd.read_csv(directory + "nhl.csv")
cities=pd.read_html(directory + "wikipedia_data.html")[1]
cities=cities.iloc[:-1,[0,3,5,6,7,8]]
league = "NHL"

def nhl_correlation(): 
    # YOUR CODE HERE
    cityinfo = cities
    df = nhl_df
    
    cityinfo = cityinfo.rename(columns={"Metropolitan area": "City", "Population (2016 est.)[8]": "Population"})
    cityinfo = cityinfo.loc[:,["City", "Population", league]]
    cityinfo = cityinfo.replace(r'\[.*\]', '', regex=True)
    cityinfo = cityinfo.replace('—', np.nan).replace('', np.nan)
    cityinfo = cityinfo.dropna()
    cityinfo = cityinfo.set_index(league)

    df = df[df["year"] == 2018]
    df = df.replace(r'\*', '', regex=True)
    df = df[["team", "W", "L"]]
    df["team"] = df["team"].apply(lambda x: x.split(" ")[-1])
    df = df.set_index("team")
    df = df.drop(["Division"])
    df = df.astype(int)
    df.insert(2, "City", np.nan)
    df["Ratio"] = df["W"] / (df["W"] + df["L"])

    for teams in cityinfo.index:
        for team in df.index:
            if team in teams:
                df.loc[team, "City"] = cityinfo.loc[teams, "City"]

    cityinfo = cityinfo.set_index("City")
    cityinfo = cityinfo.astype(int)
    df = df.groupby("City").agg({"Ratio": np.mean})
    da = pd.merge(df, cityinfo, how="inner", left_index=True, right_index=True)
    population_by_region, win_loss_by_region = da["Population"].tolist(), da["Ratio"].tolist()
    
    assert len(population_by_region) == len(win_loss_by_region), "Q1: Your lists must be the same length"
    assert len(population_by_region) == 28, "Q1: There should be 28 teams being analysed for NHL"
    
    return stats.pearsonr(population_by_region, win_loss_by_region)[0]

Output:

0.012486162921209907

Question 2

For this question, calculate the win/loss ratio's correlation with the population of the city it is in for the NBA using 2018 data.

import pandas as pd
import numpy as np
import scipy.stats as stats
import re

directory = 'assets/'

nba_df=pd.read_csv(directory + "nba.csv")
cities=pd.read_html(directory + "wikipedia_data.html")[1]
cities=cities.iloc[:-1,[0,3,5,6,7,8]]
league = "NBA"

def nba_correlation():
    # YOUR CODE HERE
    cityinfo = cities
    df = nba_df

    cityinfo = cityinfo.rename(columns={"Metropolitan area": "City", "Population (2016 est.)[8]": "Population"})
    cityinfo = cityinfo.loc[:,["City", "Population", league]]
    cityinfo = cityinfo.replace(r'\[.+\]', '', regex=True)
    cityinfo = cityinfo.replace('—', np.nan).replace('', np.nan)
    cityinfo = cityinfo.dropna()
    cityinfo = cityinfo.set_index(league)

    df = df[df["year"] == 2018]
    df = df.replace(r'\*', '', regex=True)
    df = df.replace(r'\([0-9]+\)', '', regex=True)
    df = df[["team", "W", "L"]]
    df["team"] = df["team"].apply(lambda x: x.split(" ")[-1].strip())
    df = df.set_index("team")
    df = df.astype(int)
    df.insert(2, "City", np.nan)
    df["Ratio"] = df["W"] / (df["W"] + df["L"])

    for teams in cityinfo.index:
        for team in df.index:
            if team in teams:
                df.loc[team, "City"] = cityinfo.loc[teams, "City"]

    cityinfo = cityinfo.set_index("City")
    cityinfo = cityinfo.astype(int)

    df = df.groupby("City").agg({"Ratio": np.mean})
    da = pd.merge(df, cityinfo, how="inner", left_index=True, right_index=True)
    population_by_region, win_loss_by_region = da["Population"].tolist(), da["Ratio"].tolist()
    
    assert len(population_by_region) == len(win_loss_by_region), "Q2: Your lists must be the same length"
    assert len(population_by_region) == 28, "Q2: There should be 28 teams being analysed for NBA"

    return stats.pearsonr(population_by_region, win_loss_by_region)[0]

Output:

-0.17657160252844617

Question 3

For this question, calculate the win/loss ratio's correlation with the population of the city it is in for the MLB using 2018 data.

import pandas as pd
import numpy as np
import scipy.stats as stats
import re
directory = 'assets/'

mlb_df=pd.read_csv(directory + "mlb.csv")
cities=pd.read_html(directory + "wikipedia_data.html")[1]
cities=cities.iloc[:-1,[0,3,5,6,7,8]]
league = "MLB"

def mlb_correlation(): 
    # YOUR CODE HERE
    cityinfo = cities
    df = mlb_df

    cityinfo = cityinfo.rename(columns={"Metropolitan area": "City", "Population (2016 est.)[8]": "Population"})
    cityinfo = cityinfo.loc[:,["City", "Population", league]]
    cityinfo = cityinfo.replace(r'\[.+\]', '', regex=True)
    cityinfo = cityinfo.replace('—', np.nan).replace('', np.nan)
    cityinfo = cityinfo.dropna()
    cityinfo = cityinfo.set_index(league)

    df = df[df["year"] == 2018]
    df = df[["team", "W", "L"]]
    df["team"] = df["team"].apply(lambda x: x.split(" ")[-1].strip())
    df.iloc[0,0], df.iloc[8, 0] = "Red Sox", "White Sox"
    df = df.set_index("team")
    df = df.astype(float)
    df.insert(1, "City", np.nan)
    df["Ratio"] = df["W"] / (df["W"] + df["L"])

    for teams in cityinfo.index:
        for team in df.index:
            if team in teams:
                df.loc[team, "City"] = cityinfo.loc[teams, "City"]

    cityinfo = cityinfo.set_index("City")
    cityinfo = cityinfo.astype(int)

    df = df.groupby("City").agg({"Ratio": np.mean})
    da = pd.merge(df, cityinfo, how="inner", left_index=True, right_index=True)
    population_by_region, win_loss_by_region = da["Population"].tolist(), da["Ratio"].tolist()
    
    assert len(population_by_region) == len(win_loss_by_region), "Q3: Your lists must be the same length"
    assert len(population_by_region) == 26, "Q3: There should be 26 teams being analysed for MLB"

    return stats.pearsonr(population_by_region, win_loss_by_region)[0]

Output:

0.15027698302669307

Question 4

For this question, calculate the win/loss ratio's correlation with the population of the city it is in for the NFL using 2018 data.

import pandas as pd
import numpy as np
import scipy.stats as stats
import re

directory = 'assets/'

nfl_df=pd.read_csv(directory + "nfl.csv")
cities=pd.read_html(directory + "wikipedia_data.html")[1]
cities=cities.iloc[:-1,[0,3,5,6,7,8]]
league = "NFL"

def nfl_correlation(): 
    # YOUR CODE HERE
    cityinfo = cities
    df = nfl_df

    cityinfo = cityinfo.rename(columns={"Metropolitan area": "City", "Population (2016 est.)[8]": "Population"})
    cityinfo = cityinfo.loc[:,["City", "Population", league]]
    cityinfo = cityinfo.replace(r'\[.+\]', '', regex=True)
    cityinfo = cityinfo.replace('—', np.nan).replace('', np.nan)
    cityinfo = cityinfo.dropna()
    cityinfo = cityinfo.set_index(league)
    
    df = df[df["year"] == 2018]
    df = df[["team", "W", "L"]]
    df["team"] = df["team"].apply(lambda x: x.split(" ")[-1].strip())
    df["team"] = df["team"].replace(r'[\*|\+]','', regex=True)
    df = df.set_index("team")
    df = df.drop(["East", "West", "North", "South"])
    df = df.astype(int)
    df.insert(2, "City", np.nan)
    df["Ratio"] = df["W"] / (df["W"] + df["L"])

    for teams in cityinfo.index:
        for team in df.index:
            if team in teams:
                df.loc[team, "City"] = cityinfo.loc[teams, "City"]

    cityinfo = cityinfo.set_index("City")
    cityinfo = cityinfo.astype(int)

    df = df.groupby("City").agg({"Ratio": np.mean})
    da = pd.merge(df, cityinfo, how="inner", left_index=True, right_index=True)
    population_by_region, win_loss_by_region = da["Population"].tolist(), da["Ratio"].tolist()

    assert len(population_by_region) == len(win_loss_by_region), "Q4: Your lists must be the same length"
    assert len(population_by_region) == 29, "Q4: There should be 29 teams being analysed for NFL"

    return stats.pearsonr(population_by_region, win_loss_by_region)[0]

Output:

0.004922112149349393

Question 5

In this question I would like you to explore the hypothesis that given that an area has two sports teams in different sports, those teams will perform the same within their respective sports. How I would like to see this explored is with a series of paired t-tests (so use ttest_rel) between all pairs of sports. Are there any sports where we can reject the null hypothesis? Again, average values where a sport has multiple teams in one region. Remember, you will only be including, for each sport, cities which have teams engaged in that sport, dropping others as appropriate. This question is worth 20% of the grade for this assignment.

import pandas as pd
import numpy as np
import scipy.stats as stats
import re

directory = "assets/"
mlb_df = pd.read_csv(directory + "mlb.csv")
nhl_df = pd.read_csv(directory + "nhl.csv")
nba_df = pd.read_csv(directory + "nba.csv")
nfl_df = pd.read_csv(directory + "nfl.csv")
cities=pd.read_html(directory + "wikipedia_data.html")[1]
cities=cities.iloc[:-1,[0,3,5,6,7,8]]

def clean_cityinfo():
    cityinfo = cities
    cityinfo = cityinfo.rename(columns={"Metropolitan area": "City", "Population (2016 est.)[8]": "Population"})
    cityinfo = cityinfo[["City", "Population", "NFL", "MLB", "NBA", "NHL"]]
    cityinfo = cityinfo.replace(r'\[.+\]', '', regex=True)
    cityinfo = cityinfo.replace(r'[^a-zA-Z0-9 ]', '', regex=True)
    cityinfo.loc[13, "NFL"] = ''
    return cityinfo

def paired_cityinfo(league1, league2):
    cityinfo = clean_cityinfo()[["City", "Population", league1, league2]].replace('', np.nan).dropna()
    cityinfo = cityinfo.set_index("City")
    cityinfo = cityinfo["Population"]
    return cityinfo

def nhl_data(): 
    df = nhl_df
    df = df[df["year"] == 2018]
    df = df.replace(r'\*', '', regex=True)
    df = df[["team", "W", "L"]]
    df["team"] = df["team"].apply(lambda x: x.split(" ")[-1])
    df = df.set_index("team")
    df = df.drop(["Division"])
    df = df.astype(int)
    df.insert(2, "City", np.nan)
    df["Ratio"] = df["W"] / (df["W"] + df["L"])
    
    cityinfo = clean_cityinfo()
    cityinfo = cityinfo.set_index("NHL")
    for teams in cityinfo.index:
        for team in df.index:
            if team in teams:
                df.loc[team, "City"] = cityinfo.loc[teams, "City"]
    df = df[["City", "Ratio"]]
    df = df.groupby("City").agg({"Ratio": np.mean})
    return df

def nba_data():
    df = nba_df
    df = df[df["year"] == 2018]
    df = df.replace(r'\*', '', regex=True)
    df = df.replace(r'\([0-9]+\)', '', regex=True)
    df = df[["team", "W", "L"]]
    df["team"] = df["team"].apply(lambda x: x.split(" ")[-1].strip())
    df = df.set_index("team")
    df = df.astype(int)
    df.insert(2, "City", np.nan)
    df["Ratio"] = df["W"] / (df["W"] + df["L"])
    
    cityinfo = clean_cityinfo()
    cityinfo = cityinfo.set_index("NBA")
    for teams in cityinfo.index:
        for team in df.index:
            if team in teams:
                df.loc[team, "City"] = cityinfo.loc[teams, "City"]
    df = df[["City", "Ratio"]]
    df = df.groupby("City").agg({"Ratio": np.mean})
    return df

def mlb_data():
    df = mlb_df
    df = df[df["year"] == 2018]
    df = df[["team", "W", "L"]]
    df["team"] = df["team"].apply(lambda x: x.split(" ")[-1].strip())
    df.iloc[0,0], df.iloc[8, 0] = "Red Sox", "White Sox"
    df = df.set_index("team")
    df = df.astype(float)
    df.insert(1, "City", np.nan)
    df["Ratio"] = df["W"] / (df["W"] + df["L"])
    
    cityinfo = clean_cityinfo()
    cityinfo = cityinfo.set_index("MLB")
    for teams in cityinfo.index:
        for team in df.index:
            if team in teams:
                df.loc[team, "City"] = cityinfo.loc[teams, "City"]
    df = df[["City", "Ratio"]]
    df = df.groupby("City").agg({"Ratio": np.mean})
    return df

def nfl_data():
    df = nfl_df
    df = df[df["year"] == 2018]
    df = df[["team", "W", "L"]]
    df["team"] = df["team"].apply(lambda x: x.split(" ")[-1].strip())
    df["team"] = df["team"].replace(r'[\*|\+]','', regex=True)
    df = df.set_index("team")
    df = df.drop(["East", "West", "North", "South"])
    df = df.astype(int)
    df.insert(2, "City", np.nan)
    df["Ratio"] = df["W"] / (df["W"] + df["L"])
    
    cityinfo = clean_cityinfo()
    cityinfo = cityinfo.set_index("NFL")
    for teams in cityinfo.index:
        for team in df.index:
            if team in teams:
                df.loc[team, "City"] = cityinfo.loc[teams, "City"]
    df = df[["City", "Ratio"]]
    df = df.groupby("City").agg({"Ratio": np.mean})
    return df

def league_to_data(league):
    if league == "NFL":
        return nfl_data()
    elif league == "NBA":
        return nba_data()
    elif league == "NHL":
        return nhl_data()
    elif league == "MLB":
        return mlb_data()

def p_value(league1, league2):
    df1, df2 = league_to_data(league1), league_to_data(league2)
    temp_df = pd.merge(paired_cityinfo(league1, league2), df1, how='left', left_index=True, right_index=True)
    temp_df = pd.merge(temp_df, df2, how='left', left_index=True, right_index=True)
    return stats.ttest_rel(temp_df['Ratio_x'], temp_df['Ratio_y'])[1]

def sports_team_performance():
    # YOUR CODE HERE
    
    # Note: p_values is a full dataframe, so df.loc["NFL","NBA"] should be the same as df.loc["NBA","NFL"] and
    # df.loc["NFL","NFL"] should return np.nan
    sports = ['NFL', 'NBA', 'NHL', 'MLB']
    p_values = pd.DataFrame({k:np.nan for k in sports}, index=sports)
    for league1 in sports:
        for league2 in sports:
            if league1 == league2:
                continue
            p_values.loc[league1, league2] = p_value(league1, league2)
            
    assert abs(p_values.loc["NBA", "NHL"] - 0.02) <= 1e-2, "The NBA-NHL p-value should be around 0.02"
    assert abs(p_values.loc["MLB", "NFL"] - 0.80) <= 1e-2, "The MLB-NFL p-value should be around 0.80"
    
    return p_values

Output:

          NFL       NBA       NHL       MLB
NFL       NaN  0.941792  0.030883  0.802069
NBA  0.941792       NaN  0.022297  0.950540
NHL  0.030883  0.022297       NaN  0.000708
MLB  0.802069  0.950540  0.000708       NaN