Unnamed: 0 | text_prompt | code_prompt
---|---|---|
300 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<font color='blue'>Data Science Academy - Python Fundamentos - Chapter 9</font>
Download
Step1: Number of vehicles belonging to each brand
Step2: Average vehicle price by vehicle type and by gearbox type | Python Code:
# Python language version
from platform import python_version
print('Python version used in this Jupyter Notebook:', python_version())
# Imports
import os
import subprocess
import stat
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib as mat
import matplotlib.pyplot as plt
from datetime import datetime
sns.set(style="white")
%matplotlib inline
np.__version__
pd.__version__
sns.__version__
mat.__version__
# Dataset
clean_data_path = "dataset/autos.csv"
df = pd.read_csv(clean_data_path,encoding="latin-1")
Explanation: <font color='blue'>Data Science Academy - Python Fundamentos - Chapter 9</font>
Download: http://github.com/dsacademybr
Mini-Project 2 - Exploratory Analysis of a Kaggle Dataset
Analysis 2
End of explanation
# Create a plot showing the number of vehicles belonging to each brand
sns.set_style("whitegrid")
g = sns.catplot(y="brand", data=df, kind="count", palette="Reds_r", height=7, aspect=1.5)
g.ax.set_title("Vehicles per Brand",fontdict={'size':18})
g.ax.xaxis.set_label_text("Number of Vehicles",fontdict= {'size':16})
g.ax.yaxis.set_label_text("Brand",fontdict= {'size':16})
plt.show()
# Saving the plot
g.savefig("plots/Analise2/brand-vehicleCount.png")
Explanation: Number of vehicles belonging to each brand
End of explanation
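# (Added note, not in the original notebook) Both savefig calls in this notebook assume that the
# plots/Analise2 output directory already exists; if it might not, it can be created up front (os is imported above).
os.makedirs("plots/Analise2", exist_ok=True)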
# Create a plot of the average vehicle price by vehicle type and by gearbox type
fig, ax = plt.subplots(figsize=(8,5))
colors = ["#00e600", "#ff8c1a","#a180cc"]
sns.barplot(x="vehicleType", y="price",hue="gearbox", palette=colors, data=df)
ax.set_title("Average vehicle price by vehicle type and gearbox type",fontdict= {'size':12})
ax.xaxis.set_label_text("Vehicle Type",fontdict= {'size':12})
ax.yaxis.set_label_text("Average Price",fontdict= {'size':12})
plt.show()
# Saving the plot
fig.savefig("plots/Analise2/vehicletype-gearbox-price.png")
Explanation: Average vehicle price by vehicle type and by gearbox type
End of explanation |
301 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Examples and Exercises from Think Stats, 2nd Edition
http://thinkstats2.com
Step1: Hypothesis testing
The following is a version of thinkstats2.HypothesisTest with just the essential methods
Step2: And here's an example that uses it to compute the p-value of an experiment where we toss a coin 250 times and get 140 heads.
Step3: The p-value turns out to be about 7%, which is considered on the border of statistical significance.
Step4: Permutation test
To compute the p-value of an observed difference in means, we can assume that there is no difference between the groups and generate simulated results by shuffling the data.
Step5: Here's an example where we test the observed difference in pregnancy length for first babies and others.
Step6: The p-value is about 17%, which means it is plausible that the observed difference is just the result of random sampling, and might not be generally true in the population.
Step7: Here's the distribution of the test statistic (the difference in means) over many simulated samples
Step8: Under the null hypothesis, we often see differences bigger than the observed difference.
Step9: If the hypothesis under test is that first babies come late, the appropriate test statistic is the raw difference between first babies and others, rather than the absolute value of the difference. In that case, the p-value is smaller, because we are testing a more specific hypothesis.
Step10: But in this example, the result is still not statistically significant.
Difference in standard deviation
In this framework, it is easy to use other test statistics. For example, if we think the variance for first babies might be higher, we can run this test
Step11: But that's not statistically significant either.
Testing correlation
To check whether an observed correlation is statistically significant, we can run a permutation test with a different test statistic.
Step12: Here's an example testing the correlation between birth weight and mother's age.
Step13: The reported p-value is 0, which means that in 1000 trials we didn't see a correlation, under the null hypothesis, that exceeded the observed correlation. That means that the p-value is probably smaller than $1/1000$, but it is not actually 0.
To get a sense of how unexpected the observed value is under the null hypothesis, we can compare the actual correlation to the largest value we saw in the simulations.
Step14: Testing proportions
Here's an example that tests whether the outcome of rolling a six-sided die is suspicious, where the test statistic is the total absolute difference between the observed outcomes and the expected long-term averages.
Step15: Here's an example using the data from the book
Step16: The observed deviance from the expected values is not statistically significant.
By convention, it is more common to test data like this using the chi-squared statistic
Step17: Using this test, we get a smaller p-value
Step18: Taking this result at face value, we might consider the data statistically significant, but considering the results of both tests, I would not draw any strong conclusions.
Chi-square test of pregnancy length
Step19: If we specifically test the deviations of first babies and others from the expected number of births in each week of pregnancy, the results are statistically significant with a very small p-value. But at this point we have run so many tests, we should not be surprised to find at least one that seems significant.
Step21: Power
Here's the function that estimates the probability of a non-significant p-value even if there really is a difference between the groups.
Step23: In this example, the false negative rate is 70%, which means that the power of the test (probability of statistical significance if the actual difference is 0.078 weeks) is only 30%.
Exercises
Exercise
Step27: Exercise | Python Code:
from __future__ import print_function, division
%matplotlib inline
import numpy as np
import random
import thinkstats2
import thinkplot
Explanation: Examples and Exercises from Think Stats, 2nd Edition
http://thinkstats2.com
Copyright 2016 Allen B. Downey
MIT License: https://opensource.org/licenses/MIT
End of explanation
class HypothesisTest(object):
def __init__(self, data):
self.data = data
self.MakeModel()
self.actual = self.TestStatistic(data)
def PValue(self, iters=1000):
self.test_stats = [self.TestStatistic(self.RunModel())
for _ in range(iters)]
count = sum(1 for x in self.test_stats if x >= self.actual)
return count / iters
def TestStatistic(self, data):
raise UnimplementedMethodException()
def MakeModel(self):
pass
def RunModel(self):
raise UnimplementedMethodException()
Explanation: Hypothesis testing
The following is a version of thinkstats2.HypothesisTest with just the essential methods:
End of explanation
class CoinTest(HypothesisTest):
def TestStatistic(self, data):
heads, tails = data
test_stat = abs(heads - tails)
return test_stat
def RunModel(self):
heads, tails = self.data
n = heads + tails
sample = [random.choice('HT') for _ in range(n)]
hist = thinkstats2.Hist(sample)
data = hist['H'], hist['T']
return data
Explanation: And here's an example that uses it to compute the p-value of an experiment where we toss a coin 250 times and get 140 heads.
End of explanation
ct = CoinTest((140, 110))
pvalue = ct.PValue()
pvalue
Explanation: The p-value turns out to be about 7%, which is considered on the border of statistical significance.
End of explanation
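# (Added cross-check, not part of the original notebook) The exact two-sided binomial p-value for
# 140 heads in 250 fair-coin flips can be computed analytically; this assumes SciPy >= 1.7 is available.
from scipy.stats import binomtest
binomtest(140, n=250, p=0.5, alternative='two-sided').pvalue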
class DiffMeansPermute(thinkstats2.HypothesisTest):
def TestStatistic(self, data):
group1, group2 = data
test_stat = abs(group1.mean() - group2.mean())
return test_stat
def MakeModel(self):
group1, group2 = self.data
self.n, self.m = len(group1), len(group2)
self.pool = np.hstack((group1, group2))
def RunModel(self):
np.random.shuffle(self.pool)
data = self.pool[:self.n], self.pool[self.n:]
return data
Explanation: Permutation test
To compute the p-value of an observed difference in means, we can assume that there is no difference between the groups and generate simulated results by shuffling the data.
End of explanation
import first
live, firsts, others = first.MakeFrames()
data = firsts.prglngth.values, others.prglngth.values
Explanation: Here's an example where we test the observed difference in pregnancy length for first babies and others.
End of explanation
ht = DiffMeansPermute(data)
pvalue = ht.PValue()
pvalue
Explanation: The p-value is about 17%, which means it is plausible that the observed difference is just the result of random sampling, and might not be generally true in the population.
End of explanation
ht.PlotCdf()
thinkplot.Config(xlabel='test statistic',
ylabel='CDF')
Explanation: Here's the distribution of the test statistic (the difference in means) over many simulated samples:
End of explanation
class DiffMeansOneSided(DiffMeansPermute):
def TestStatistic(self, data):
group1, group2 = data
test_stat = group1.mean() - group2.mean()
return test_stat
Explanation: Under the null hypothesis, we often see differences bigger than the observed difference.
End of explanation
ht = DiffMeansOneSided(data)
pvalue = ht.PValue()
pvalue
Explanation: If the hypothesis under test is that first babies come late, the appropriate test statistic is the raw difference between first babies and others, rather than the absolute value of the difference. In that case, the p-value is smaller, because we are testing a more specific hypothesis.
End of explanation
class DiffStdPermute(DiffMeansPermute):
def TestStatistic(self, data):
group1, group2 = data
test_stat = group1.std() - group2.std()
return test_stat
ht = DiffStdPermute(data)
pvalue = ht.PValue()
pvalue
Explanation: But in this example, the result is still not statistically significant.
Difference in standard deviation
In this framework, it is easy to use other test statistics. For example, if we think the variance for first babies might be higher, we can run this test:
End of explanation
class CorrelationPermute(thinkstats2.HypothesisTest):
def TestStatistic(self, data):
xs, ys = data
test_stat = abs(thinkstats2.Corr(xs, ys))
return test_stat
def RunModel(self):
xs, ys = self.data
xs = np.random.permutation(xs)
return xs, ys
Explanation: But that's not statistically significant either.
Testing correlation
To check whether an observed correlation is statistically significant, we can run a permutation test with a different test statistic.
End of explanation
cleaned = live.dropna(subset=['agepreg', 'totalwgt_lb'])
data = cleaned.agepreg.values, cleaned.totalwgt_lb.values
ht = CorrelationPermute(data)
pvalue = ht.PValue()
pvalue
Explanation: Here's an example testing the correlation between birth weight and mother's age.
End of explanation
ht.actual, ht.MaxTestStat()
Explanation: The reported p-value is 0, which means that in 1000 trials we didn't see a correlation, under the null hypothesis, that exceeded the observed correlation. That means that the p-value is probably smaller than $1/1000$, but it is not actually 0.
To get a sense of how unexpected the observed value is under the null hypothesis, we can compare the actual correlation to the largest value we saw in the simulations.
End of explanation
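# (Added, not in the original notebook) Running more iterations tightens the bound on the p-value:
# if no simulated correlation exceeds the observed one in 10,000 trials, the p-value is below 1/10000.
ht.PValue(iters=10000)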
class DiceTest(thinkstats2.HypothesisTest):
def TestStatistic(self, data):
observed = data
n = sum(observed)
expected = np.ones(6) * n / 6
test_stat = sum(abs(observed - expected))
return test_stat
def RunModel(self):
n = sum(self.data)
values = [1, 2, 3, 4, 5, 6]
rolls = np.random.choice(values, n, replace=True)
hist = thinkstats2.Hist(rolls)
freqs = hist.Freqs(values)
return freqs
Explanation: Testing proportions
Here's an example that tests whether the outcome of rolling a six-sided die is suspicious, where the test statistic is the total absolute difference between the observed outcomes and the expected long-term averages.
End of explanation
data = [8, 9, 19, 5, 8, 11]
dt = DiceTest(data)
pvalue = dt.PValue(iters=10000)
pvalue
Explanation: Here's an example using the data from the book:
End of explanation
class DiceChiTest(DiceTest):
def TestStatistic(self, data):
observed = data
n = sum(observed)
expected = np.ones(6) * n / 6
test_stat = sum((observed - expected)**2 / expected)
return test_stat
Explanation: The observed deviance from the expected values is not statistically significant.
By convention, it is more common to test data like this using the chi-squared statistic:
End of explanation
dt = DiceChiTest(data)
pvalue = dt.PValue(iters=10000)
pvalue
Explanation: Using this test, we get a smaller p-value:
End of explanation
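# (Added cross-check, not part of the original notebook) SciPy's analytic chi-squared test uses the same
# statistic but an asymptotic p-value, so it will not match the simulated p-value exactly.
from scipy.stats import chisquare
observed = np.array(data)
chisquare(observed, f_exp=np.ones(6) * observed.sum() / 6)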
class PregLengthTest(thinkstats2.HypothesisTest):
def MakeModel(self):
firsts, others = self.data
self.n = len(firsts)
self.pool = np.hstack((firsts, others))
pmf = thinkstats2.Pmf(self.pool)
self.values = range(35, 44)
self.expected_probs = np.array(pmf.Probs(self.values))
def RunModel(self):
np.random.shuffle(self.pool)
data = self.pool[:self.n], self.pool[self.n:]
return data
def TestStatistic(self, data):
firsts, others = data
stat = self.ChiSquared(firsts) + self.ChiSquared(others)
return stat
def ChiSquared(self, lengths):
hist = thinkstats2.Hist(lengths)
observed = np.array(hist.Freqs(self.values))
expected = self.expected_probs * len(lengths)
stat = sum((observed - expected)**2 / expected)
return stat
Explanation: Taking this result at face value, we might consider the data statistically significant, but considering the results of both tests, I would not draw any strong conclusions.
Chi-square test of pregnancy length
End of explanation
data = firsts.prglngth.values, others.prglngth.values
ht = PregLengthTest(data)
p_value = ht.PValue()
print('p-value =', p_value)
print('actual =', ht.actual)
print('ts max =', ht.MaxTestStat())
Explanation: If we specifically test the deviations of first babies and others from the expected number of births in each week of pregnancy, the results are statistically significant with a very small p-value. But at this point we have run so many tests, we should not be surprised to find at least one that seems significant.
End of explanation
def FalseNegRate(data, num_runs=1000):
    """Computes the chance of a false negative based on resampling.
    data: pair of sequences
    num_runs: how many experiments to simulate
    returns: float false negative rate
    """
group1, group2 = data
count = 0
for i in range(num_runs):
sample1 = thinkstats2.Resample(group1)
sample2 = thinkstats2.Resample(group2)
ht = DiffMeansPermute((sample1, sample2))
p_value = ht.PValue(iters=101)
if p_value > 0.05:
count += 1
return count / num_runs
neg_rate = FalseNegRate(data)
neg_rate
Explanation: Power
Here's the function that estimates the probability of a non-significant p-value even if there really is a difference between the groups.
End of explanation
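# (Added, implied by the explanation above) The power of the test is the complement of the false negative rate.
power = 1 - neg_rate
print('power = {:.0%}'.format(power))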
# Solution
def RunTests(live, iters=1000):
    """Runs the tests from Chapter 9 with a subset of the data.
    live: DataFrame
    iters: how many iterations to run
    """
n = len(live)
firsts = live[live.birthord == 1]
others = live[live.birthord != 1]
# compare pregnancy lengths
data = firsts.prglngth.values, others.prglngth.values
ht = DiffMeansPermute(data)
p1 = ht.PValue(iters=iters)
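    # compare birth weights (dropping rows with missing weights)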
data = (firsts.totalwgt_lb.dropna().values,
others.totalwgt_lb.dropna().values)
ht = DiffMeansPermute(data)
p2 = ht.PValue(iters=iters)
# test correlation
live2 = live.dropna(subset=['agepreg', 'totalwgt_lb'])
data = live2.agepreg.values, live2.totalwgt_lb.values
ht = CorrelationPermute(data)
p3 = ht.PValue(iters=iters)
# compare pregnancy lengths (chi-squared)
data = firsts.prglngth.values, others.prglngth.values
ht = PregLengthTest(data)
p4 = ht.PValue(iters=iters)
print('%d\t%0.2f\t%0.2f\t%0.2f\t%0.2f' % (n, p1, p2, p3, p4))
# Solution
n = len(live)
for _ in range(7):
sample = thinkstats2.SampleRows(live, n)
RunTests(sample)
n //= 2
# Solution
# My results:
# test1: difference in mean pregnancy length
# test2: difference in mean birth weight
# test3: correlation of mother's age and birth weight
# test4: chi-square test of pregnancy length
# n test1 test2 test3 test4
# 9148 0.16 0.00 0.00 0.00
# 4574 0.10 0.01 0.00 0.00
# 2287 0.25 0.06 0.00 0.00
# 1143 0.24 0.03 0.39 0.03
# 571 0.81 0.00 0.04 0.04
# 285 0.57 0.41 0.48 0.83
# 142 0.45 0.08 0.60 0.04
# Conclusion: As expected, tests that are positive with large sample
# sizes become negative as we take away data. But the pattern is
# erratic, with some positive tests even at small sample sizes.
Explanation: In this example, the false negative rate is 70%, which means that the power of the test (probability of statistical significance if the actual difference is 0.078 weeks) is only 30%.
Exercises
Exercise: As sample size increases, the power of a hypothesis test increases, which means it is more likely to be positive if the effect is real. Conversely, as sample size decreases, the test is less likely to be positive even if the effect is real.
To investigate this behavior, run the tests in this chapter with different subsets of the NSFG data. You can use thinkstats2.SampleRows to select a random subset of the rows in a DataFrame.
What happens to the p-values of these tests as sample size decreases? What is the smallest sample size that yields a positive test?
End of explanation
# Solution
class DiffMeansResample(DiffMeansPermute):
    """Tests a difference in means using resampling."""
def RunModel(self):
        """Run the model of the null hypothesis.
        returns: simulated data
        """
group1 = np.random.choice(self.pool, self.n, replace=True)
group2 = np.random.choice(self.pool, self.m, replace=True)
return group1, group2
# Solution
def RunResampleTest(firsts, others):
    """Tests differences in means by resampling.
    firsts: DataFrame
    others: DataFrame
    """
data = firsts.prglngth.values, others.prglngth.values
ht = DiffMeansResample(data)
p_value = ht.PValue(iters=10000)
print('\ndiff means resample preglength')
print('p-value =', p_value)
print('actual =', ht.actual)
print('ts max =', ht.MaxTestStat())
data = (firsts.totalwgt_lb.dropna().values,
others.totalwgt_lb.dropna().values)
ht = DiffMeansPermute(data)
p_value = ht.PValue(iters=10000)
print('\ndiff means resample birthweight')
print('p-value =', p_value)
print('actual =', ht.actual)
print('ts max =', ht.MaxTestStat())
# Solution
RunResampleTest(firsts, others)
# Solution
# Conclusions: Using resampling instead of permutation has very
# little effect on the results.
# The two models are based on slightly different assumptions, and in
# this example there is no compelling reason to choose one or the other.
# But in general p-values depend on the choice of the null hypothesis;
# different models can yield very different results.
Explanation: Exercise: In Section 9.3, we simulated the null hypothesis by permutation; that is, we treated the observed values as if they represented the entire population, and randomly assigned the members of the population to the two groups.
An alternative is to use the sample to estimate the distribution for the population, then draw a random sample from that distribution. This process is called resampling. There are several ways to implement resampling, but one of the simplest is to draw a sample with replacement from the observed values, as in Section 9.10.
Write a class named DiffMeansResample that inherits from DiffMeansPermute and overrides RunModel to implement resampling, rather than permutation.
Use this model to test the differences in pregnancy length and birth weight. How much does the model affect the results?
End of explanation |
302 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Load the data
For this work, we're going to use the same retail sales data that we've used before. It can be found in the examples directory of this repository.
Step1: Like all good modeling projects, we need to take a look at the data to get an idea of what it looks like.
Step2: It's pretty clear from this data that we are looking at a trending dataset with some seasonality. This is actually a pretty good dataset for prophet since the additive model and prophet's implementation does well with this type of data.
With that in mind, let's take a look at what prophet does from a modeling standpoint to compare with the dynamic linear regression model. For more details on this, you can take a look at my blog post titled Forecasting Time Series data with Prophet – Part 4 (http://pythondata.com/forecasting-time-series-data-prophet-part-4/)
Step3: With our prophet model ready for comparison, let's build a model with pyflux's dynamic linear regression model.
More Data Viz
Now that we've run our prophet model and can see what it has done, it's time to walk through what I call the 'long form' of model building. This is more involved than throwing data at a library and accepting the results.
For this data, let's first look at the differenced log values of our sales data (to try to make it more stationary).
Step4: With our original data (top pane in orange), we can see a very pronounced trend. With the differenced log values (bottom pane in blue), we've removed that trend and made the data stationary (or hopefully we have).
Now, let's take a look at an autocorrelation plot, which will tell us whether future sales are correlated with past data. I won't go into detail on autocorrelation, but if you don't understand whether you have autocorrelation (and to what degree), you might be in for a hard time
Step5: We can see that at a lag of 1 and 2 months, there are positive correlations for sales but as time goes on, that correlation drops quickly to a negative correlation that stays in place over time, which hints at the fact that there are some autoregressive effects within this data.
Because of this fact, we can start our modeling by using an ARMA model of some sort. | Python Code:
sales_df = pd.read_csv('../examples/retail_sales.csv', index_col='date', parse_dates=True)
sales_df.head()
Explanation: Load the data
For this work, we're going to use the same retail sales data that we've used before. It can be found in the examples directory of this repository.
End of explanation
sales_df.plot()
Explanation: Like all good modeling projects, we need to take a look at the data to get an idea of what it looks like.
End of explanation
# Prep data for prophet and run prophet
df = sales_df.reset_index()
df=df.rename(columns={'date':'ds', 'sales':'y'})
model = Prophet(weekly_seasonality=True)
model.fit(df);
future = model.make_future_dataframe(periods=24, freq = 'm')
forecast = model.predict(future)
model.plot(forecast);
Explanation: It's pretty clear from this data that we are looking at a trending dataset with some seasonality. This is actually a pretty good dataset for prophet since the additive model and prophet's implementation does well with this type of data.
With that in mind, let's take a look at what prophet does from a modeling standpoint to compare with the dynamic linear regression model. For more details on this, you can take a look at my blog post titled Forecasting Time Series data with Prophet – Part 4 (http://pythondata.com/forecasting-time-series-data-prophet-part-4/)
End of explanation
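# (Added, optional) Prophet can also decompose its forecast into trend and seasonal components,
# which helps visualize what the additive model fit above actually captured.
model.plot_components(forecast);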
diff_log = pd.DataFrame(np.diff(np.log(sales_df['sales'].values)))
diff_log.index = sales_df.index.values[1:sales_df.index.values.shape[0]]
diff_log.columns = ["Sales DiffLog"]
sales_df['logged']=np.log(sales_df['sales'])
sales_df.tail()
sales_df.plot(subplots=True)
Explanation: With our prophet model ready for comparison, let's build a model with pyflux's dynamic linear regression model.
More Data Viz
Now that we've run our prophet model and can see what it has done, it's time to walk through what I call the 'long form' of model building. This is more involved than throwing data at a library and accepting the results.
For this data, let's first look at the differenced log values of our sales data (to try to make it more stationary).
End of explanation
pf.acf_plot(diff_log.values.T[0])
pf.acf_plot(np.square(diff_log.values.T[0]))
Explanation: With our original data (top pane in orange), we can see a very pronounced trend. With the differenced log values (bottom pane in blue), we've removed that trend and made the data stationary (or hopefully we have).
Now, let's take a look at an autocorrelation plot, which will tell us whether future sales are correlated with past data. I won't go into detail on autocorrelation, but if you don't understand whether you have autocorrelation (and to what degree), you might be in for a hard time :)
Let's take a look at the autocorrelation plot (ACF) of the differenced log values as well as the ACF of the square of the differenced log values.
End of explanation
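# (Added sketch, not in the original post) The ACF above hints at autoregressive structure, and the text
# suggests starting with an ARMA-style model. An illustrative, untuned pyflux ARMA(2,2) fit on the
# differenced log series might look like this; the (2,2) order is an assumption, not a recommendation.
model_arma = pf.ARIMA(data=diff_log, ar=2, ma=2, target='Sales DiffLog')
arma_results = model_arma.fit()
arma_results.summary()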
Logged = pd.DataFrame(np.log(sales_df['sales']))
Logged.index = pd.to_datetime(sales_df.index)
Logged.columns = ['Sales - Logged']
Logged.head()
modelLLT = pf.LLT(data=Logged)
x = modelLLT.fit()
x.summary()
modelLLT.plot_fit(figsize=(20,10))
modelLLT.plot_predict_is(h=len(Logged)-1, figsize=(20,10))
predicted = modelLLT.predict_is(h=len(Logged)-1)
predicted.columns = ['Predicted']
predicted.tail()
np.exp(predicted).plot()
sales_df_future=sales_df
sales_df
# Join the back-transformed (exponentiated) in-sample predictions onto the original sales data, matching on the date index
final_sales = sales_df.merge(np.exp(predicted), left_index=True, right_index=True)
final_sales.tail()
final_sales.plot()
Explanation: We can see that at a lag of 1 and 2 months, there are positive correlations for sales but as time goes on, that correlation drops quickly to a negative correlation that stays in place over time, which hints at the fact that there are some autoregressive effects within this data.
Because of this fact, we can start our modeling by using an ARMA model of some sort.
End of explanation |
303 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Saving figure source data
Many scientific journals are (for good reason) requiring that authors upload the source data for their figures. For complex analysis pipelines this can be complicated and frustrating. FigureFirst to the rescue!
This notebook demonstrates how you can use FigureFirst to save the source data for your figures automatically. Source data here means the data, function calls, and the arguments to those functions.
With the FigureFirst formatted source data in hand, you can easily rebuild the figure, or output the data to a human readable CSV (Markdown) file.
Step1: Clear out the data for all the figures and axes
Step2: All the magic happens through the FFAxis class, which is a wrapper for the MatPlotLib Axis
To keep track of which axis you're working with you can use the breadcrumb
Step3: Use matplotlib to plot some data on figure_1 axis_a
Step4: Use some functions in figurefirst.mpl_functions on figure_1 axis_b
Step5: Use a pickle-able user defined plotting function on figure_2 axis_a
Step6: Use a custom function from another package
Step7: You can also write arbitrary data to the file using a similar interface through the layout object
Step8: Regenerate the figure from the saved data
Step9: Take a look at the data file
Step10: Here are all the plotting actions, data, and settings called for figure_1 axis_a
Step11: If you need a more standard and human readable format, you can convert the data file into a markdown / csv file
Only data that has argument descriptions associated with it will be saved. This prevents clogging the file with tick marks, etc. The titles and descriptions are drawn from the data file, so use descriptive titles when writing the code!
Because we did not provide descriptions for the arguments to the rectangle call, its data is not saved. | Python Code:
import numpy as np
import figurefirst
fifi = figurefirst
from IPython.display import display,SVG,Markdown
layout = fifi.FigureLayout('figure_template.svg', hide_layers=['template'])
layout.make_mplfigures(hide=True)
Explanation: Saving figure source data
Many scientific journals are (for good reason) requiring that authors upload the source data for their figures. For complex analysis pipelines this can be complicated and frustrating. FigureFirst to the rescue!
This notebook demonstrates how you can use FigureFirst to save the source data for your figures automatically. Source data here means the data, function calls, and the arguments to those functions.
With the FigureFirst formatted source data in hand, you can easily rebuild the figure, or output the data to a human readable CSV (Markdown) file.
End of explanation
for key, axis in layout.axes.items():
print (key)
# note, you can use the data filename, or the layout filename (as long as you use the defaults)
fifi.regenerate.clear_fifidata('figure_template.svg', key)
Explanation: Clear out the data for all the figures and axes
End of explanation
ax = layout.axes[('figure_1', 'axis_a')]
ax.breadcrumb
Explanation: All the magic happens through the FFAxis class, which is a wrapper for the MatPlotLib Axis
To keep track of which axis you're working with you can use the breadcrumb
End of explanation
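# (Added sketch) The same breadcrumb is available on every wrapped axis, so the whole layout can be inspected at once:
for key, axis in layout.axes.items():
    print(key, axis.breadcrumb)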
ax = layout.axes[('figure_1', 'axis_a')]
# make some fake data
x = np.linspace(0,10,100)
y = np.sin(x)
# call matplotlib's plot function with the figurefirst wrapper, which saves the data
title = 'Sine wave for ' + ax.breadcrumb['layout_key'][1]
argument_1 = 'Time'
argument_2 = 'Response'
ax._plot([title, argument_1, argument_2], x, y, color='blue')
Explanation: Use matplotlib to plot some data on figure_1 axis_a
End of explanation
ax = layout.axes[('figure_1', 'axis_b')]
# use figurefirst wrapper for adding a patch
# note: matplotlib function add_artist does not work in regeneration step (artists cannot be pickled)
ax._add_mpl_patch(['This is a rectangle'], 'Rectangle', (3, 0), 1.5, 1, fill=False, color='red', linewidth=1)
# Generally we recommend using '_' notation, and including a title and description of the arguments
# However, for quick formatting calls, there is a faster notation. This does not work with custom functions.
# First set record to True
ax.record = True
# Then make your function calls as usual
# matplotlib functions
ax.set_xlim(0,5)
ax.set_ylim(-1,1)
# figurefirst.mpl_functions
ax.adjust_spines(['left', 'bottom'])
ax.set_fontsize(6)
Explanation: Use some functions in figurefirst.mpl_functions on figure_1 axis_b
End of explanation
ax = layout.axes[('figure_2', 'axis_a')]
def foo(ax, x, list_of_noisy_ys, color='green'):
mean_y = np.mean(list_of_noisy_ys, axis=0)
ax.plot(x, mean_y, color=color, linewidth=3)
for y in list_of_noisy_ys:
ax.plot(x, y, color=color, linewidth=1, alpha=0.2)
# save a custom plotting function in the data file
list_of_noisy_ys = []
for i in range(6):
noisy_y = y + np.random.uniform(-0.5, 0.5, len(y))
list_of_noisy_ys.append(noisy_y)
ax._custom(['Plot line and dots', 'Time', 'List of y values'], foo, x, list_of_noisy_ys, color='magenta')
Explanation: Use a pickle-able user defined plotting function on figure_2 axis_a
End of explanation
layout.append_figure_to_layer(layout.figures['figure_1'], 'figure_1', cleartarget=True)
layout.append_figure_to_layer(layout.figures['figure_2'], 'figure_2', cleartarget=True)
svg = 'figure_output.svg'
layout.write_svg(svg)
SVG(svg)
Explanation: Use a custom function from another package:
ax._custom(['Title', 'Arg description'], 'package.module.function', *args, **kwargs)
Save figures to layout and write svg
End of explanation
# first clear anything that is there
fifi.regenerate.clear_fifidata(layout.data_filename, layout_key='Supplemental Data')
a = [np.random.random(10) for i in range(5)]
b = [1,2,3,4]
layout.write_fifidata(['Title of Arbitrary Data', 'Description of Data A', 'Description of Data B'],
a, b)
Explanation: You can also write arbitrary data to the file using a similar interface through the layout object
End of explanation
fifi.regenerate.replot('figure_template.svg', output_filename='new_figure_output.svg')
svg = 'new_figure_output.svg'
layout.set_layer_visibility('Layer 1',False)
layout.write_svg(svg)
SVG(svg)
Explanation: Regenerate the figure from the saved data
End of explanation
data = fifi.regenerate.load_data_file('figure_template.svg') # you can either use the layout or data filename
Explanation: Take a look at the data file
End of explanation
data[('figure_1', 'axis_a')]
Explanation: Here are all the plotting actions, data, and settings called for figure_1 axis_a
End of explanation
# This is optional, but helps to connect the data in the markdown / csv file to the actual panel names you have
# If left blank (None), the Panel names will just be the layout_keys reformatted, e.g. 'figure_1_axis_b')
panel_id_to_layout_keys = {'a': [('figure_1', 'axis_a'), ('figure_1', 'axis_b')],
'b': [('figure_2', 'axis_a')]}
# Define a figure number
figure_number = 1
# Header, optional
header = '# This file contains the data needed for generating figure 1\n### FigureFirst example by Floris van Breugel'
fifi.regenerate.write_to_csv('figure_template_data.dillpickle', figure_number, \
panel_id_to_layout_keys, header=header)
# Take a look at the file. If you need a ".csv" file, just change the extension.
# Markdown files can be displayed nicely in Chrome:
# https://stackoverflow.com/questions/9843609/view-markdown-files-offline
with open('figure_template_data_summary.md', 'r') as fh:
content = fh.read()
display(Markdown(content))
Explanation: If you need a more standard and human readable format, you can convert the data file into a markdown / csv file
Only data that has argument descriptions associated with it will be saved. This prevents clogging the file with tick marks, etc. The titles and descriptions are drawn from the data file, so use descriptive titles when writing the code!
Because we did not provide descriptions for the arguments to the rectangle call, its data is not saved.
End of explanation |
304 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Simple MLP demo for TIMIT using Keras
This notebook describes how to reproduce the results for the simple MLP architecture described in this paper
Step1: Here we import the stuff we use below
Step2: Loading the data
Here we load the corpus stored in HDF5 files. It contains both normalized and unnormalized data and we're interested in the former
Step3: The data can be loaded all at once into separate Numpy arrays
Step4: The loaded data is a list of utterances, where each utterance is a matrix (for inputs) or a vector (for outputs) of different sizes. That is why the whole corpus is not a matrix (which would require that each utterance is the same length)
Step5: The papers/thesis above use 26 features instead of the standard 39, i.e. they only use first-order regression coefficients (deltas). We usually prepare a corpus for the full 39 features, so to be comparable, let's extract the 26 from that
Step6: Parameters
Here we'll define some standard sizes and parameters
Step7: 1-hot vectors
For most loss functions, the output for each utterance needs to be a matrix of size (sample_num, output_dim). That means we need to convert the output from a list of decisions to a list of 1-hot vectors. This is a requirement of Keras
Step8: Model definition
Here we define the model exactly as in the paper
Step9: Training
Here we have a training loop. We don't use the "fit" method to accommodate the specific conditions in the paper
Step10: Results
Here we can plot the loss and PER (phoneme error rate) while training
Step11: The final results are presented below. Please note that Keras usually calculates accuracy, while the papers generally prefer error rates. We generally shouldn't give the result of the minimum PER for the test set, but we can use the dev set, find it's minimum and provide the value of the test at that time. You can see that the correct PER is not too far from the minimum test PER anyway
Step12: The paper gives a value of 48.6% error rate for this architecture and claims it took 835 epochs to reach the value using SGD. Here we can see that ADAM got it a bit faster than that | Python Code:
import os
os.environ['CUDA_VISIBLE_DEVICES']='0'
Explanation: Simple MLP demo for TIMIT using Keras
This notebook describes how to reproduce the results for the simple MLP architecture described in this paper:
ftp://ftp.idsia.ch/pub/juergen/nn_2005.pdf
And in Chapter 5 of this thesis:
http://www.cs.toronto.edu/~graves/phd.pdf
To begin with, if you have a multi-gpu system (like I do), you may want to choose which GPU you want to run this on (indexing from 0):
End of explanation
import numpy as np
from keras.models import Sequential
from keras.layers.core import Dense, Activation
from keras.optimizers import Adam, SGD
from IPython.display import clear_output
from tqdm import *
import sys
sys.path.append('../python')
from data import Corpus, History
Explanation: Here we import the stuff we use below:
End of explanation
train=Corpus('../data/TIMIT_train.hdf5',load_normalized=True)
dev=Corpus('../data/TIMIT_dev.hdf5',load_normalized=True)
test=Corpus('../data/TIMIT_test.hdf5',load_normalized=True)
Explanation: Loading the data
Here we load the corpus stored in HDF5 files. It contains both normalized and unnormalized data and we're interested in the former:
End of explanation
tr_in,tr_out_dec=train.get()
dev_in,dev_out_dec=dev.get()
tst_in,tst_out_dec=test.get()
Explanation: The data can be loaded all at once into separate Numpy arrays:
End of explanation
print tr_in.shape
print tr_in[0].shape
print tr_out_dec.shape
print tr_out_dec[0].shape
Explanation: The loaded data is a list of utterances, where each utterance is a matrix (for inputs) or a vector (for outputs) of different sizes. That is why the whole corpus is not a matrix (which would require that each utterance is the same length):
End of explanation
for u in range(tr_in.shape[0]):
tr_in[u]=tr_in[u][:,:26]
for u in range(dev_in.shape[0]):
dev_in[u]=dev_in[u][:,:26]
for u in range(tst_in.shape[0]):
tst_in[u]=tst_in[u][:,:26]
Explanation: The papers/thesis above use 26 features instead of the standard 39, i.e. they only use first-order regression coefficients (deltas). We usually prepare a corpus for the full 39 features, so to be comparable, let's extract the 26 from that:
End of explanation
input_dim=tr_in[0].shape[1]
output_dim=61
hidden_num=250
epoch_num=1500
Explanation: Parameters
Here we'll define some standard sizes and parameters:
End of explanation
def dec2onehot(dec):
ret=[]
for u in dec:
assert np.all(u<output_dim)
num=u.shape[0]
r=np.zeros((num,output_dim))
r[range(0,num),u]=1
ret.append(r)
return np.array(ret)
tr_out=dec2onehot(tr_out_dec)
dev_out=dec2onehot(dev_out_dec)
tst_out=dec2onehot(tst_out_dec)
Explanation: 1-hot vectors
For most loss functions, the output for each utterance needs to be a matrix of size (sample_num, output_dim). That means we need to convert the output from a list of decisions to a list of 1-hot vectors. This is a requirement of Keras:
End of explanation
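# (Added note, not part of the original experiment) An equivalent conversion can usually be done with
# Keras' built-in utility; the exact import path depends on the Keras version installed.
from keras.utils import np_utils
tr_out_alt = np.array([np_utils.to_categorical(u, output_dim) for u in tr_out_dec])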
model = Sequential()
model.add(Dense(input_dim=input_dim,output_dim=hidden_num))
model.add(Activation('sigmoid'))
model.add(Dense(output_dim=output_dim))
model.add(Activation('softmax'))
optimizer= SGD(lr=3e-3,momentum=0.9,nesterov=False)
loss='categorical_crossentropy'
metrics=['accuracy']
model.compile(loss=loss, optimizer=optimizer, metrics=metrics)
Explanation: Model definition
Here we define the model exactly as in the paper: one hidden layer with 250 units, sigmoid activation in the hidden and softmax in the output, cross-entropy loss. The only thing that differs is the optimizer. You can use SGD, but the values in the paper seem to be far too small. Adam works just as well and maybe even a bit faster. Feel free to experiment:
End of explanation
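# (Added note) The explanation above mentions Adam as an optimizer that converges at least as fast as SGD here.
# Adam is already imported; to try it, re-compile the model with this optimizer instead of the SGD configuration:
adam_optimizer = Adam(lr=1e-3)
# model.compile(loss=loss, optimizer=adam_optimizer, metrics=metrics)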
from random import shuffle
tr_hist=History('Train')
dev_hist=History('Dev')
tst_hist=History('Test')
tr_it=range(tr_in.shape[0])
for e in range(epoch_num):
print 'Epoch #{}/{}'.format(e+1,epoch_num)
sys.stdout.flush()
shuffle(tr_it)
for u in tqdm(tr_it):
l,a=model.train_on_batch(tr_in[u],tr_out[u])
tr_hist.r.addLA(l,a,tr_out[u].shape[0])
clear_output()
tr_hist.log()
for u in range(dev_in.shape[0]):
l,a=model.test_on_batch(dev_in[u],dev_out[u])
dev_hist.r.addLA(l,a,dev_out[u].shape[0])
dev_hist.log()
for u in range(tst_in.shape[0]):
l,a=model.test_on_batch(tst_in[u],tst_out[u])
tst_hist.r.addLA(l,a,tst_out[u].shape[0])
tst_hist.log()
print 'Done!'
Explanation: Training
Here we have a training loop. We don't use the "fit" method to accommodate the specific conditions in the paper: we register the loss/accuracy of dev and test at each time step, we do weight update after each utterance.
End of explanation
import matplotlib.pyplot as P
%matplotlib inline
fig,ax=P.subplots(2,sharex=True,figsize=(12,10))
ax[0].set_title('Loss')
ax[0].plot(tr_hist.loss,label='Train')
ax[0].plot(dev_hist.loss,label='Dev')
ax[0].plot(tst_hist.loss,label='Test')
ax[0].legend()
ax[0].set_ylim((1.4,2.0))
ax[1].set_title('PER %')
ax[1].plot(100*(1-np.array(tr_hist.acc)),label='Train')
ax[1].plot(100*(1-np.array(dev_hist.acc)),label='Dev')
ax[1].plot(100*(1-np.array(tst_hist.acc)),label='Test')
ax[1].legend()
ax[1].set_ylim((45,55))
Explanation: Results
Here we can plot the loss and PER (phoneme error rate) while training:
End of explanation
print 'Min train PER: {:%}'.format(1-np.max(tr_hist.acc))
print 'Min test PER: {:%}'.format(1-np.max(tst_hist.acc))
print 'Min dev PER epoch: #{}'.format((np.argmax(dev_hist.acc)+1))
print 'Test PER on min dev: {:%}'.format(1-tst_hist.acc[np.argmax(dev_hist.acc)])
Explanation: The final results are presented below. Please note that Keras usually calculates accuracy, while the papers generally prefer error rates. We generally shouldn't give the result of the minimum PER for the test set, but we can use the dev set, find it's minimum and provide the value of the test at that time. You can see that the correct PER is not too far from the minimum test PER anyway:
End of explanation
wer=0.486999999999
print 'Epoch where PER reached {:%}: #{}'.format(wer,np.where((1-np.array(tst_hist.acc))<=wer)[0][0])
Explanation: The paper gives a value of 48.6% error rate for this architecture and claims it took 835 epochs to reach the value using SGD. Here we can see that ADAM got it a bit faster than that:
End of explanation |
305 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Time Series Classification and Clustering
In a typical classification problem you are given a set of input features and a set of discrete output classes and you want to model the relationship between the two. There is a myriad of classification algorithms that you could use for this problem - SVMs, Naive Bayes, k-NN, etc. But what if the input features are not independent such as with time series data? In this case SVMs and Naive Bayes would not be a good choice since they assume that the input features are independent. The k-NN algorithm could still work however it relies on the notion of a similarity measure between input examples. Now the question becomes how do we measure the similarity between two time series?
How about Euclidean distance?
The Euclidean distance between two time series $Q$ and $C$ of length $n$ is defined as
$$d(Q,C) = \sqrt{\sum^n_{i=1}[Q(i)-C(i)]^2}$$
At first glance, it seems like simply calculating the Euclidean distance between two time series would give us a good idea of the similarity between them. After all, the Euclidean distance between identical time series is zero and the Euclidean distance between very different time series is large. However, before we settle on Euclidean distance as a similarity measure we should clearly state our desired criteria for determining the similarity between two time series
With a good similarity measure, small changes in two time series should result in small changes in their similarity. With respect to Euclidean distance this is true for changes in the y-axis, but it is not true for changes in the time axis (i.e. compression and stretching). Consider the following example.
Step1: In the above example, it is clear that $ts1$ and $ts2$ are most similar (they are both $sin$ functions under different transformations). $ts3$ is clearly the most different. Let's compute the Euclidean distance $d(ts1,ts2)$ and $d(ts1,ts3)$ to see if the Euclidean distance measure agrees with what our intuition tells us. Let's first create a function that computes the Euclidean distance between two time series.
Step2: Let's now find the Euclidean distance between $ts1$ and $ts2$
Step3: and the Euclidean distance between $ts1$ and $ts3$
Step4: This is not good because according to the Euclidean distance measure, $ts1$ is more similar to $ts3$ than to $ts2$ which contradicts our intuition. This is the problem with using the Euclidean distance measure. It often produces pessimistic similarity measures when it encounters distortion in the time axis. The way to deal with this is to use dynamic time warping.
Dynamic Time Warping
Dynamic time warping finds the optimal non-linear alignment between two time series. The Euclidean distances between alignments are then much less susceptible to pessimistic similarity measurements due to distortion in the time axis. There is a price to pay for this, however, because dynamic time warping is quadratic in the length of the time series used.
Dynamic time warping works in the following way. Consider two time series $Q$ and $C$ of the same length $n$ where $$Q=q_1,q_2,...,q_n$$ and $$C=c_1,c_2,...,c_n$$ The first thing we do is construct an $n\times n$ matrix whose $i,j^{th}$ element is the Euclidean distance between $q_i$ and $c_j$. We want to find a path through this matrix that minimizes the cumulative distance. This path then determines the optimal alignment between the two time series. It should be noted that it is possible for one point in a time series to be mapped to multiple points in the other time series.
Let's call the path $W$ where $$W=w_1,w_2,...,w_K$$ where each element of $W$ represents the distance between a point $i$ in $Q$ and a point $j$ in $C$ i.e. $w_k=(q_i-c_j)^2$
So we want to find the path with the minimum Euclidean distance $$W^*=argmin_W(\sqrt{\sum_{k=1}^Kw_k})$$ The optimal path is found via dynamic programming, specifically the following recursive function. $$\gamma(i,j)=d(q_i,c_j)+min ( \gamma(i-1,j-1),\gamma(i-1,j),\gamma(i,j-1))$$
Step5: Now let's compute the Euclidean distance between $ts1$ and $ts2$ using dynamic time warping.
Step6: and now the dynamic time warping distance between $ts1$ and $ts3$
Step7: As you can see, our results have changed from when we only used the Euclidean distance measure. Now, in agreement with our intuition, $ts2$ is shown to be more similar to $ts1$ than $ts3$ is.
Speeding Up Dynamic Time Warping
Dynamic time warping has a complexity of $O(nm)$ where $n$ is the length of the first time series and $m$ is the length of the second time series. If you are performing dynamic time warping multiple times on long time series data, this can be prohibitively expensive. However, there are a couple of ways to speed things up. The first is to enforce a locality constraint. This works under the assumption that it is unlikely for $q_i$ and $c_j$ to be matched if $i$ and $j$ are too far apart. The threshold is determined by a window size $w$. This way, only mappings within this window are considered which speeds up the inner loop. The following is the modified code which includes the window size $w$.
Step8: Let's test this faster version.
Step9: Another way to speed things up is to use the LB Keogh lower bound of dynamic time warping. It is defined as $$LBKeogh(Q,C)=\sum_{i=1}^n (c_i-U_i)^2I(c_i > U_i)+(c_i-L_i)^2I(c_i < L_i)$$
where $U_i$ and $L_i$ are upper and lower bounds for time series $Q$ which are defined as $U_i=max(q_{i-r}:q_{i+r})$ and $L_i=min(q_{i-r}:q_{i+r})$ for a reach $r$, and $I(\cdot)$ is the indicator function.
Step10: Let's now test on $ts1$ and $ts2$
Step11: and now $ts1$ and $ts3$.
Step12: The LB Keogh lower bound method is linear whereas dynamic time warping is quadratic in complexity, which makes it very advantageous for searching over large sets of time series.
Classification and Clustering
Now that we have a reliable method to determine the similarity between two time series, we can use the k-NN algorithm for classification. Empirically, the best results have come when $k=1$. The following is the 1-NN algorithm that uses dynamic time warping Euclidean distance. In this algorithm, $train$ is the training set of time series examples where the class that the time series belongs to is appended to the end of the time series. $test$ is the test set whose corresponding classes you are trying to predict. In this algorithm, for every time series in the test set, a search must be performed through all points in the training set so that the most similar point is found. Given that dynamic time warping is quadratic, this can be very computationally expensive. We can speed up classification using the LB Keogh lower bound. Computing LB Keogh is much less expensive than performing dynamic time warping. And since $LBKeogh(Q,C) \leq DTW(Q,C)$, we can eliminate time series that cannot possibly be more similar than the current most similar time series. In this way we are eliminating many unnecessary dynamic time warping computations.
Step13: Now let's test it on some data. We will use a window size of 4. Although the code is sped up with the use of the LB Keogh bound and the dynamic time warping locality constraint, it may still take a few minutes to run.
Step14: The same idea can also be applied to k-means clustering. In this algorithm, the number of clusters is set a priori and similar time series are clustered together.
Step17: Let's test it on the entire data set (i.e. the training set and the test set stacked together). | Python Code:
import pandas as pd
import numpy as np
import matplotlib.pylab as plt
x=np.linspace(0,50,100)
ts1=pd.Series(3.1*np.sin(x/1.5)+3.5)
ts2=pd.Series(2.2*np.sin(x/3.5+2.4)+3.2)
ts3=pd.Series(0.04*x+3.0)
#ts1.plot()
#ts2.plot()
#ts3.plot()
#plt.ylim(-2,10)
#plt.legend(['ts1','ts2','ts3'])
#plt.show()
Explanation: Time Series Classification and Clustering
In a typical classification problem you are given a set of input features and a set of discrete output classes and you want to model the relationship between the two. There is a myriad of classification algorithms that you could use for this problem - SVMs, Naive Bayes, k-NN, etc. But what if the input features are not independent such as with time series data? In this case SVMs and Naive Bayes would not be a good choice since they assume that the input features are independent. The k-NN algorithm could still work however it relies on the notion of a similarity measure between input examples. Now the question becomes how do we measure the similarity between two time series?
How about Euclidean distance?
The Euclidean distance between two time series $Q$ and $C$ of length $n$ is defined as
$$d(Q,C) = \sqrt{\sum^n_{i=1}[Q(i)-C(i)]^2}$$
At first glance, it seems like simply calculating the Euclidean distance between two time series would give us a good idea of the similarity between them. After all, the Euclidean distance between identical time series is zero and the Euclidean distance between very different time series is large. However, before we settle on Euclidean distance as a similarity measure we should clearly state our desired criteria for determining the similarity between two time series
With a good similarity measure, small changes in two time series should result in small changes in their similarity. With respect to Euclidean distance this is true for changes in the y-axis, but it is not true for changes in the time axis (i.e. compression and stretching). Consider the following example.
End of explanation
def euclid_dist(t1,t2):
return sqrt(sum((t1-t2)**2))
Explanation: In the above example, it is clear that $ts1$ and $ts2$ are most similar (they are both $sin$ functions under different transformations). $ts3$ is clearly the most different. Let's compute the Euclidean distance $d(ts1,ts2)$ and $d(ts1,ts3)$ to see if the Euclidean distance measure agrees with what our intuition tells us. Let's first create a function that computes the Euclidean distance between two time series.
End of explanation
#print euclid_dist(ts1,ts2)
Explanation: Let's now find the Euclidean distance between $ts1$ and $ts2$
End of explanation
#print euclid_dist(ts1,ts3)
Explanation: and the Euclidean distance between $ts1$ and $ts3$
End of explanation
def DTWDistance(s1, s2):
DTW={}
for i in range(len(s1)):
DTW[(i, -1)] = float('inf')
for i in range(len(s2)):
DTW[(-1, i)] = float('inf')
DTW[(-1, -1)] = 0
for i in range(len(s1)):
for j in range(len(s2)):
dist= (s1[i]-s2[j])**2
DTW[(i, j)] = dist + min(DTW[(i-1, j)],DTW[(i, j-1)], DTW[(i-1, j-1)])
return sqrt(DTW[len(s1)-1, len(s2)-1])
Explanation: This is not good because according to the Euclidean distance measure, $ts1$ is more similar to $ts3$ than to $ts2$ which contradicts our intuition. This is the problem with using the Euclidean distance measure. It often produces pessimistic similarity measures when it encounters distortion in the time axis. The way to deal with this is to use dynamic time warping.
Dynamic Time Warping
Dynamic time warping finds the optimal non-linear alignment between two time series. The Euclidean distances between alignments are then much less susceptible to pessimistic similarity measurements due to distortion in the time axis. There is a price to pay for this, however, because dynamic time warping is quadratic in the length of the time series used.
Dynamic time warping works in the following way. Consider two time series $Q$ and $C$ of the same length $n$ where $$Q=q_1,q_2,...,q_n$$ and $$C=c_1,c_2,...,c_n$$ The first thing we do is construct an $n\times n$ matrix whose $i,j^{th}$ element is the Euclidean distance between $q_i$ and $c_j$. We want to find a path through this matrix that minimizes the cumulative distance. This path then determines the optimal alignment between the two time series. It should be noted that it is possible for one point in a time series to be mapped to multiple points in the other time series.
Let's call the path $W$ where $$W=w_1,w_2,...,w_K$$ where each element of $W$ represents the distance between a point $i$ in $Q$ and a point $j$ in $C$ i.e. $w_k=(q_i-c_j)^2$
So we want to find the path with the minimum Euclidean distance $$W^*=argmin_W(\sqrt{\sum_{k=1}^Kw_k})$$ The optimal path is found via dynamic programming, specifically the following recursive function. $$\gamma(i,j)=d(q_i,c_j)+min ( \gamma(i-1,j-1),\gamma(i-1,j),\gamma(i,j-1))$$
End of explanation
#print DTWDistance(ts1,ts2)
Explanation: Now let's compute the Euclidean distance between $ts1$ and $ts2$ using dynamic time warping.
End of explanation
#print DTWDistance(ts1,ts3)
Explanation: and now the dynamic time warping distance between $ts1$ and $ts3$
End of explanation
def DTWDistance(s1, s2,w):
DTW={}
w = max(w, abs(len(s1)-len(s2)))
for i in range(-1,len(s1)):
for j in range(-1,len(s2)):
DTW[(i, j)] = float('inf')
DTW[(-1, -1)] = 0
for i in range(len(s1)):
for j in range(max(0, i-w), min(len(s2), i+w)):
dist= (s1[i]-s2[j])**2
DTW[(i, j)] = dist + min(DTW[(i-1, j)],DTW[(i, j-1)], DTW[(i-1, j-1)])
return sqrt(DTW[len(s1)-1, len(s2)-1])
Explanation: As you can see, our results have changed from when we only used the Euclidean distance measure. Now, in agreement with our intuition, $ts2$ is shown to be more similar to $ts1$ than $ts3$ is.
Speeding Up Dynamic Time Warping
Dynamic time warping has a complexity of $O(nm)$ where $n$ is the length of the first time series and $m$ is the length of the second time series. If you are performing dynamic time warping multiple times on long time series data, this can be prohibitively expensive. However, there are a couple of ways to speed things up. The first is to enforce a locality constraint. This works under the assumption that it is unlikely for $q_i$ and $c_j$ to be matched if $i$ and $j$ are too far apart. The threshold is determined by a window size $w$. This way, only mappings within this window are considered which speeds up the inner loop. The following is the modified code which includes the window size $w$.
End of explanation
#print DTWDistance(ts1,ts2,10)
#print DTWDistance(ts1,ts3,10)
Explanation: Let's test this faster version.
End of explanation
def LB_Keogh(s1, s2, r):
    LB_sum = 0
    for ind, i in enumerate(s1):
        # envelope of s2 around position ind with reach r
        lower_bound = min(s2[(ind - r if ind - r >= 0 else 0):(ind + r)])
        upper_bound = max(s2[(ind - r if ind - r >= 0 else 0):(ind + r)])
        # only points falling outside the envelope contribute to the bound
        if i > upper_bound:
            LB_sum = LB_sum + (i - upper_bound) ** 2
        elif i < lower_bound:
            LB_sum = LB_sum + (i - lower_bound) ** 2
    return sqrt(LB_sum)
Explanation: Another way to speed things up is to use the LB Keogh lower bound of dynamic time warping. It is defined as $$LBKeogh(Q,C)=\sqrt{\sum_{i=1}^n (c_i-U_i)^2I(c_i > U_i)+(c_i-L_i)^2I(c_i < L_i)}$$
where $U_i$ and $L_i$ are upper and lower bounds for time series $Q$ which are defined as $U_i=max(q_{i-r}:q_{i+r})$ and $L_i=min(q_{i-r}:q_{i+r})$ for a reach $r$ and $I(\cdot)$ is the indicator function. It can be implemented with the following function.
End of explanation
#print(LB_Keogh(ts1, ts2, 20))
Explanation: Let's now test on $ts1$ and $ts2$
End of explanation
#print(LB_Keogh(ts1, ts3, 20))
Explanation: and now $ts1$ and $ts3$.
End of explanation
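To see the bound outside of the saved series, here is a small illustrative check on synthetic data (the series are made up): with the same reach and warping window, the LB Keogh value is expected to be no larger than the corresponding windowed DTW distance.
import numpy as np

q = np.sin(np.linspace(0, 6, 120))
c = np.sin(np.linspace(0, 6, 120) + 0.5) + 0.1 * np.random.randn(120)
print(LB_Keogh(q, c, 10))       # cheap lower bound
print(DTWDistance(q, c, 10))    # windowed DTW distance, expected to be at least as large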
from sklearn.metrics import classification_report
from math import sqrt

def knn(train, test, w):
    preds = []
    for ind, i in enumerate(test):
        min_dist = float('inf')
        closest_seq = []
        for j in train:
            # only pay for the quadratic DTW computation when the cheap
            # LB Keogh lower bound could still beat the best match so far
            if LB_Keogh(i[:-1], j[:-1], 5) < min_dist:
                dist = DTWDistance(i[:-1], j[:-1], w)
                if dist < min_dist:
                    min_dist = dist
                    closest_seq = j
        # the class of each series is stored as its last element
        preds.append(closest_seq[-1])
    return classification_report(test[:, -1], preds)
Explanation: The LB Keogh lower bound method is linear whereas dynamic time warping is quadratic in complexity, which makes it very advantageous for searching over large sets of time series.
Classification and Clustering
Now that we have a reliable method to determine the similarity between two time series, we can use the k-NN algorithm for classification. Empirically, the best results have come when $k=1$. The following is the 1-NN algorithm that uses dynamic time warping as its distance measure. In this algorithm, $train$ is the training set of time series examples where the class that the time series belongs to is appended to the end of the time series. $test$ is the test set whose corresponding classes you are trying to predict. In this algorithm, for every time series in the test set, a search must be performed through all points in the training set so that the most similar point is found. Given that dynamic time warping is quadratic, this can be very computationally expensive. We can speed up classification using the LB Keogh lower bound. Computing LB Keogh is much less expensive than performing dynamic time warping. And since $LB Keogh(Q,C) \leq DTW(Q,C)$, we can eliminate time series that cannot possibly be more similar than the current most similar time series. In this way we are eliminating many unnecessary dynamic time warping computations.
End of explanation
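The expected data layout (class label appended as the last element of each series) can be illustrated with a tiny synthetic example; the arrays below are made up purely for illustration.
import numpy as np

t = np.linspace(0, 2 * np.pi, 30)
# two classes: noisy sine (label 0.0) and noisy cosine (label 1.0)
toy_train = np.array([np.append(np.sin(t) + 0.1 * np.random.randn(30), 0.0) for _ in range(5)] +
                     [np.append(np.cos(t) + 0.1 * np.random.randn(30), 1.0) for _ in range(5)])
toy_test = np.array([np.append(np.sin(t) + 0.1 * np.random.randn(30), 0.0),
                     np.append(np.cos(t) + 0.1 * np.random.randn(30), 1.0)])
print(knn(toy_train, toy_test, 4))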
train = np.genfromtxt('datasets/train.csv', delimiter='\t')
test = np.genfromtxt('datasets/test.csv', delimiter='\t')
#print (knn(train,test,4))
Explanation: Now let's test it on some data. We will use a window size of 4. Although the code is sped up with the use of the LB Keogh bound and the dynamic time warping locality constraint, it may still take a few minutes to run.
End of explanation
import random

def k_means_clust(data, num_clust, num_iter, w=5):
    # pick the initial centroids at random from the data
    centroids = random.sample(list(data), num_clust)
    counter = 0
    for n in range(num_iter):
        counter += 1
        print(counter)
        assignments = {}
        # assign each series to the cluster with the closest centroid,
        # using LB Keogh to skip unnecessary DTW computations
        for ind, i in enumerate(data):
            min_dist = float('inf')
            closest_clust = None
            for c_ind, j in enumerate(centroids):
                if LB_Keogh(i, j, 200) < min_dist:
                    cur_dist = DTWDistance(i, j, w)
                    if cur_dist < min_dist:
                        min_dist = cur_dist
                        closest_clust = c_ind
            if closest_clust in assignments:
                assignments[closest_clust].append(ind)
            else:
                assignments[closest_clust] = [ind]
        # recalculate the centroid of each cluster as the mean of its members
        for key in assignments:
            clust_sum = 0
            for k in assignments[key]:
                clust_sum = clust_sum + data[k]
            centroids[key] = [m / float(len(assignments[key])) for m in clust_sum]
    return centroids
Explanation: The same idea can also be applied to k-means clustering. In this algorithm, the number of clusters is set a priori and similar time series are clustered together.
End of explanation
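Before running it on the real series below, a quick sanity check on synthetic data (two obvious groups, made-up values) illustrates the expected input: an iterable of equal-length 1-D series. This sketch is not part of the original notebook.
import numpy as np

t = np.linspace(0, 2 * np.pi, 40)
toy = [np.sin(t) + 0.05 * np.random.randn(40) for _ in range(6)] + \
      [np.cos(t) + 0.05 * np.random.randn(40) for _ in range(6)]
toy_centroids = k_means_clust(toy, 2, 3, 4)
print(len(toy_centroids))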
train = np.genfromtxt('datasets/train.csv', delimiter='\t')
test = np.genfromtxt('datasets/test.csv', delimiter='\t')
# stack train and test features (class labels dropped) into one array of series
data1 = np.vstack((train[:, :-1], test[:, :-1]))
print(data1[0])
print(type(data1[0]))
# load the saved series; DataFrame.from_csv is deprecated, read_csv replaces it
df = pd.read_csv("ndarray.csv", index_col=0)
d = df.values.tolist()
# raw binary read kept from the original exploration; the result is overwritten below
data = np.fromfile("prices.csv")
data = np.vstack(data)
print(data[0])
print(type(data[0]))
data = np.vstack(d)
# replace missing trailing values so the distance functions do not see NaNs
for i in range(26):
    if np.isnan(d[i][-1]):
        d[i][-1] = 0.1
input = data1
# len of ts in the example is 60 - range=5
# len of ts in the dataset is 1416 - keep the first 100 points of each series
y = np.array([np.array(di)[:100] for di in d])
ts1 = y[0]
ts2 = y[1]
print(data1[1])
# drop series 25
y = np.delete(y, 25, 0)
y[1][-1]
len(y[1])
import matplotlib.pylab as plt
centroids=k_means_clust(data1,4,10,4) #data,num_clust,num_iter,w=5
print("centroids" ,centroids)
for i in centroids:
plt.plot(i)
plt.show()
import numpy as np;
import seaborn as sns;
import pandas as pd
from scipy import stats
import scipy.cluster.hierarchy as hac
import matplotlib.pyplot as plt
num_samples = 61
group_size = 10
df = pd.read_csv("ndarray.csv", index_col=0)  # DataFrame.from_csv is deprecated
d = df.values.tolist()
data = np.vstack(d)
for i in range(26):
d[i][-1] = 0
#data = np.ndarray(d)
#type(data1[0])
input = data1
y=np.array([np.array(di)[:60] for di in d])
for i in range(26):
timeseries = y[i]
timeSeries = (timeseries-timeseries.min())/(timeseries.max()-timeseries.min())
y[i] = timeSeries
timeSeries = pd.DataFrame()
#timeSeries = (timeseries-timeseries.min())/(timeseries.max()-timeseries.min())
ax = None
for arr in y:
#for arr in data1:
#arr = arr + np.random.rand(group_size, num_samples) + (np.random.randn(group_size, 1)*3)
df = pd.DataFrame(arr)
#print(df)
timeSeries = timeSeries.append(df)
# We use seaborn to plot what we have
#ax = sns.tsplot(ax=ax, data=df.values, ci=[68, 95])
#ax = sns.tsplot(ax=ax, data=df.values, err_style="unit_traces")
# Just one line :)
Z = hac.linkage(timeSeries, 'ward')
import sys
sys.setrecursionlimit(15000) # DON'T TOUCH IT, IT's MAGIC
#sys.setrecursionlimit(10000)
# Plot the dendogram
plt.figure(figsize=(25, 10))
plt.title('Hierarchical Clustering Dendrogram')
plt.xlabel('sample index')
plt.ylabel('distance')
hac.dendrogram(
Z,
leaf_rotation=90., # rotates the x axis labels
leaf_font_size=14., # font size for the x axis labels
)
plt.show()
# Just one line :)
Z = hac.linkage(timeSeries, 'complete')
import sys
sys.setrecursionlimit(15000) # DON'T TOUCH IT, IT's MAGIC
#sys.setrecursionlimit(10000)
# Plot the dendogram
plt.figure(figsize=(25, 10))
plt.title('Hierarchical Clustering Dendrogram')
plt.xlabel('sample index')
plt.ylabel('distance')
hac.dendrogram(
Z,
leaf_rotation=90., # rotates the x axis labels
leaf_font_size=14., # font size for the x axis labels
)
plt.show()
# Just one line :)
Z = hac.linkage(timeSeries, 'average')
import sys
sys.setrecursionlimit(15000) # DON'T TOUCH IT, IT's MAGIC
#sys.setrecursionlimit(10000)
# Plot the dendogram
plt.figure(figsize=(25, 10))
plt.title('Hierarchical Clustering Dendrogram')
plt.xlabel('sample index')
plt.ylabel('distance')
hac.dendrogram(
Z,
leaf_rotation=90., # rotates the x axis labels
leaf_font_size=14., # font size for the x axis labels
)
plt.show()
# Just one line :)
Z = hac.linkage(timeSeries, 'centroid')
import sys
sys.setrecursionlimit(15000) # DON'T TOUCH IT, IT's MAGIC
#sys.setrecursionlimit(10000)
# Plot the dendogram
plt.figure(figsize=(25, 10))
plt.title('Hierarchical Clustering Dendrogram')
plt.xlabel('sample index')
plt.ylabel('distance')
hac.dendrogram(
Z,
leaf_rotation=90., # rotates the x axis labels
leaf_font_size=14., # font size for the x axis labels
)
plt.show()
# Just one line :)
# DTWDistance takes a window argument, so wrap it in a two-argument callable for linkage/pdist
# (the window of 4 matches the one used in the k-means run above)
Z = hac.linkage(timeSeries, 'single', metric=lambda u, v: DTWDistance(u, v, 4))
import sys
sys.setrecursionlimit(15000) # DON'T TOUCH IT, IT's MAGIC
#sys.setrecursionlimit(10000)
# Plot the dendogram
plt.figure(figsize=(25, 10))
plt.title('Hierarchical Clustering Dendrogram')
plt.xlabel('sample index')
plt.ylabel('distance')
hac.dendrogram(
Z,
leaf_rotation=90., # rotates the x axis labels
leaf_font_size=14., # font size for the x axis labels
)
plt.show()
print("method is single")
Explanation: Let's test it on the entire data set (i.e. the training set and the test set stacked together).
End of explanation |
306 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
The purpose of this challenge is to classify authors using different novels that they have written. In this case supervised techniques have been used and compared to see which one is giving better results using tfidf and bag of words in all of them. Regarding the corpus, the authors have been chosen randomly from the Gutenberg Project together with 7 novels from those authors. Although initially ten novels were picked, due to computing restrictions only seven have been left for the classification purposes. The authors that have been picked are
Step1: The information is added to the corpus and stored as raw books so that they can be cleansed
Step2: 2. Cleanse and parse and tokenize text
Before generating the features, and to increase their explanatory power, the text has been cleaned and parsed accordingly. The books have gone through an initial set of cleansing actions before being parsed using Spacy, to reduce the computing effort required by the latter, and have then been cleaned again before the feature generation.
The initial cleansing action has had three steps. The first step consisted of deleting all references to the Gutenberg Project from every book. This way, it has been avoided that words like “Gutenberg” and “Gutenberg Project” appear as features and distort the clustering of the authors.
As described below, cleaning actions have gone from removing all references to chapters, digits, double whitespaces and references to numbers like dates and ordinal numbers. This has been followed by removing punctuation and common stop words that would only add noise to the features that are generated afterwards.
The remaining words, considered to have the most explanatory power regarding each of the titles from the authors, have been lemmatized and stemmed, reducing the computing resources needed by up to 60%. In the first case words from the same family are reduced to their lemmas and in the second case, additional prefixes and suffixes are removed. All cleaning operations have been carried out in a way that the remaining sentences are stored in a list of lists.
Step3: 3. Generate features and select the most appropriate for the models
Features using BoW
Texts have been vectorized using bag of words. In this case the algorithm counts the number of times a word appears in a certain text. During the creation of the bag of words space, ngrams of up to 4 components have been considered, together with English stop words to remove noise from the dataset. Due to the authors that have been chosen, this method will bias the models towards the authors that have longer texts, being Elliot and Austen compared to Conan Doyle and Chesterton. The total number of features is 52k.
Step4: Features using Tf-idf
When using tfidf, the frequency of appearance is normalized, and only the terms that appear in less than 75% of the documents are considered. With this method, the value counts are smoothed considering additional features of the word such as the amount of information it adds to describe the novel. As in the case of the bag of words, ngrams of up to four components have been considered, stop words removed and the sublinear_tf option used, which scales the word count and smoothens it by the frequency of appearance in the document and within a document.
Step5: Five folds have been defined and will be used to tune and evaluate the models
Step6: 4. Supervised models
All models have been run using the features obtained through bag of words and tfidf. In this case results are compared to see which one gives a better overall accuracy as it has been used as the score function. In all cases cross validation over five folds is applied.
Logistic Regression Classifier
Bag of Words
A Logistic Regression Classifier is trained using the features obtained through tfidf. Additionally, using gridsearch the parameters are tuned. As the length of the texts, and therefore the features per author, are not balanced, the class weight is set up so that it considers unbalanced classes.
Step7: After the parameters are tuned, the model is fit on the test dataset. As a measurement of the computing effort, it requires 3.6 min to fit the test set.
Step8: The model is evaluated on the test set. In this case the solver has been chosen between the different options that support multiclass classification. As can be seen in the classification report, the model presents overfitting, being the precision and recall close to one in all classes except for class five (Huxley), which is the one that reduces the overall accuracy of the model.
Step9: The logistic regression model is computationally efficient as it fits the dataset with over 50k features in less than two minutes, making it a strong candidate to move into production. The overall accuracy is nearly 77%, which is roughly five percentage points more than in the challenge for this unit. The accuracy is higher than the one obtained by unsupervised methods using clustering and is much more stable. In this case, the introduction of the test set, unseen by the model, is not provoking unstable classifications.
TF-idf
A Logistic Regression Classifier is trained using the features obtained through tfidf. Additionally, using gridsearch the parameters are tuned. As the length of the texts, and therefore the features per author, are not balanced, the class weight is set up so that it considers unbalanced classes. In this case the parameter C of the model is higher than the one used with the bag of words.
Step10: After the parameters are tuned, the model is fit on the test dataset. As a measurement of the computing effort, it requires less than one min to fit the test set.
Step11: The model is evaluated on the test set. In this case the solver has been chosen between the different options that support multiclass classification. As can be seen in the classification report, the model presents overfitting, being the precision and recall close to one in all classes except for class five (Huxley), which is the one that reduces the overall accuracy of the model.
Step12: The logistic regression model is computationally efficient as it fits the dataset with over 80k features in less than two minutes, making it a strong candidate to move into production. The overall accuracy is nearly 80%, which is roughly five percentage points more than in the challenge for this unit. The accuracy is higher than the one obtained by unsupervised methods using clustering and is much more stable. In this case, the introduction of the test set, unseen by the model, is not provoking unstable classifications.
Naive-Bayes Classifiers
Bernoulli Classifier
Bag of Words
A Bernoulli classifier has been tuned and trained on the features obtained through Tf-idf. In this case the simplicity of the model, added to the good classification results, makes this model a good candidate to move into production. The time required to train it is lower than the time required to train the logistic regression one.
Step13: After several runs, with different extremes in the values of the alpha parameter, the parameter chosen is always the one closer to zero. This means that the smoothing parameter is very low so the additive smoothing required is low. The model is fit within seconds which makes it a strong candidate (the best one from a computational and speed standpoint) to move intro production.
Step14: The model is evaluated using cross validation and five folds. In this case as in the case of logistic regression the model presents overfitting as it can be seen from the classification report. Both precision and recall is one for this reason.
Step15: The overall accuracy of the model is slightly lower than the accuracy obtained with the logistic regression classifier. However, the time required to fit the model is at least one tenth of the time required for the logistic regression presenting both overfitting. Hence, if overall accuracy is what is tried to be improved, this is the best model with a very small loss of accuracy scoring 81.75%.
Tf-idf
A Bernoulli classifier has been tuned and trained on the features obtained through Tf-idf. In this case the simplicity of the model, added to the good classification results, makes this model a good candidate to move into production. The time required to train it is lower than the time required to train the logistic regression one.
Step16: After several runs, with different extremes in the values of the alpha parameter, the parameter chosen is always the one closer to zero. This means that the smoothing parameter is very low so the additive smoothing required is low. The model is fit within seconds which makes it a strong candidate (the best one from a computational and speed standpoint) to move intro production.
Step17: The model is evaluated using cross validation and five folds. In this case, as in the case of logistic regression, the model presents overfitting as can be seen from the classification report. Both precision and recall are one for this reason.
Step18: The overall accuracy of the model is slightly higher than the accuracy obtained with the logistic regression classifier (81.58%). However, the time required to fit the model is at least one tenth of the time required for the logistic regression presenting both overfitting. In this case is class seven (Shaw) the one that shows the lowest precision being the one that determines the lower value of the overall accuracy when compared to the Bernoulli model. Hence, if overall accuracy is what is tried to be improved, this is the best model with a very small loss of accuracy
Multinomial Classifier
BoW
A multinomial classifier is trained on the features obtained using tfidf and evaluated on the holdout. In this case, as in the previous Naive Bayes classification used, alpha always gets the value closest to zero, therefore there is no additive smoothing used in this classifier. From a computational effort standpoint, as in the previous case, this is the one that requires less time to fit, making it a strong candidate to move into production.
Step19: The value of alpha is in all trials the closest one to zero, so the additive smoothing is loose. In this case the time required for fitting is less than one minute. The model is then evaluated on the test set. For that, the first step is to fit the test holdout of the dataset.
Step20: The model presents overfitting and the accuracy is slightly higher than in the previous case (3% more). The confusion matrix presents a lower number of false positives and negatives for all categories; taking into account that the size of each of them is different, results are consistent across all of them.
Step21: The time required to fit the model is lower than in any other case presenting a higher accuracy. In this case, the accuracy is close to 84.12% while the classification report shows values close to one, showing that there is overfitting. Hence, from the classifiers evaluated until now this is the one that presents better results, from an accuracy and a computational effort perspective. This is the best candidate to move into production for the moment.
Tf-idf
A multinomial classifier is trained on the features obtained using tfidf and evaluated on the holdout. In this case, as in the previous Naive Bayes classification used, alpha always gets the value closest to zero, therefore there is no additive smoothing used in this classifier. From a computational effort standpoint, as in the previous case, this is the one that requires less time to fit, making it a strong candidate to move into production.
Step22: The value of alpha is in all trials the closest one to zero, so the additive smoothing is loose. In this case the time required for fitting is less than one minute. The model is then evaluated on the test set. For that, the first step is to fit the test holdout of the dataset.
Step23: The model presents overfitting and the accuracy is slightly higher than in the previous case 3% more. The confusion matrix presents a lower number of false positives and negatives for all categories, taking into account that the size of each of them is different results are consistent across all of them.
Step24: The time required to fit the model is lower than in any other case presenting a higher accuracy. In this case, the accuracy is close to 83.67% while the classification report shows values close to one, showing that there is overfitting. Hence, from the classifiers evaluated until now this is the one that presents better results, from an accuracy and a computational effort perspective. This is the best candidate to move into production for the moment.
KNN Classifier
Bag of Words
The KNN classifier has been fit using bag of words. In this case, during the gridsearch, five neighbors have been selected as the optimum number of neighbors when using bag of words.
Step25: Once the model has been tuned, it is fit in the test holdout
Step26: The evaluation of the model is done using the classification report, confusion matrix and overall accuracy. In this case KNN works worse than other models as it does not have enough data. From the classification report it can be seen that the model is not overfitting having a high but not equal to one precision and recall. Author two is the one that is scoring the worst results.
Step27: The model is scoring well below the accuracy that is normally achieved when using KNN. One of the reasons is the amount of data used to fit the model.
Tf- idf
The model is fit on the training set using the features obtained using tfidf. In this case the tuning of the model give lower parameters as the features have been already smoothened being the number of neighbors equal to three.
Step28: Once the parameters are tuned the model is fit on the test set.
Step29: In this case, the accuracy obtained with tfidf is not very different from the accuracy obtained with the bag of words. Better results would be obtained if more data is used to run the model
Step30: Regarding the time used by this model, it is unexpectedly low as it runs over a small dataset. This is the reason why the values obtained are so low when compared to the results obtained through the bag of words.
SGD Classifier
Bag of Words
The SGD classifier is fit on the training set. The SGD Classifier uses regularized linear models with stochastic gradient descent learning. The model is updated in its learning rate after the gradient of the loss is estimated for each sample. This classifier can work with sparse data such as the one obtained from bag of words. In this case, from the types of penalties the algorithm accepts, it uses L2 instead of a combination of L1 and L2 implemented through Elastic Net.
Step31: The parameters show that the smoothing continues to be loose as a first option as it is a regression with a gradient descent algorithm. Regarding the loss, the hinge loss is used, which means that the real loss, in case it is not convergent due to the sparse data used, is replaced by its upper bound, forcing convergence. The time required is significantly higher than in the case of the Naive Bayes classifiers.
Step32: This model presents overfitting as all precision and recall are equal to one for every class. The confusion matrix shows a lower number of false negatives and positives per class being more or less evenly represented except for class three.
Step33: In this case, the overall accuracy is 72.57%, very similar to the overall accuracy obtained using the multinomial classifier. The computational effort required by this model to achieve this accuracy is much higher than in the case of the multinomial classifier. Hence, from a production perspective, this model would not be recommended to move into production despite of its high accuracy.
Tf- idf
The SGD Classifier uses regularized linear models with stochastic gradient descent learning. The model is updated in its learning rate after the gradient of the loss is estimated for each sample. This classifier can work with sparse data such as the one obtained from tfidf. In this case, from the types of penalties the algorithm accepts, it uses L2 instead of a combination of L1 and L2 implemented through Elastic Net.
Step34: The parameters show that the smoothing continues to be loose as a first option as it is a regression with a gradient descent algorithm. Regarding the loss, the hinge loss is used, which means that the real loss, in case it is not convergent due to the sparse data used, is replaced by its upper bound, forcing convergence. The time required is significantly higher than in the case of the Naive Bayes classifiers.
Step35: This model presents overfitting as all precision and recall are equal to one for every class. The confusion matrix shows a lower number of false negatives and positives per class being more or less evenly represented except for class one.
Step36: In this case, the overall accuracy is 80.78%, very similar to the overall accuracy obtained using the multinomial classifier. The computational effort required by this model to achieve this accuracy is much higher than in the case of the multinomial classifier . Hence, from a production perspective, this model would not be recommended to move into production despite of its high accuracy.
Random Forest
Bag of Words
The hyperparameters of the random forest model have been tuned one by one. After trying to tune them all at once, a significant increase of the overall performance of the classifier was obtained with the proposed method (one by one). The parameters to be tuned are (in the same order as the hyperparameter tuning has been performed)
Step37: The tuned model is fit and run on the test set
Step38: The overall accuracy of the model has significantly increased compared to the previous classifiers, achieving 73%. This result is low for the type of classifier used. Additionally, it is lower than the results obtained with other classifiers. In this case, author seven is the one that is decreasing the overall accuracy.
Step39: This classifier requires more time to run than the Naive Bayes ones and throws poorer results than them. Author three is the one that is reducing the overall accuracy.
Tf-idf
The hyperparameters of the random forest model have been tuned one by one. After trying to tune them all at once, a significant increase of the overall performance of the classifier was obtained with the proposed method (one by one). The parameters to be tuned are (in the same order as the hyperparameter tuning has been performed)
Step40: The tuned model is fit and run on the test set
Step41: The overall accuracy of the model has significantly increased compared to the previous classifiers, achieving 73%. This result is low for the type of classifier used. Additionally, it is lower than the results obtained with other classifiers. In this case, author seven is the one that is decreasing the overall accuracy.
Step42: This classifier requires more time to run than the Naive Bayes ones and throws poorer results than them. Author three is the one that is reducing the overall accuracy.
SVC
Bag of Words
A linear support vector classifier has been set up and tuned on the training data and run on the test set. The hyperparameters that have been tuned are
Step43: Once the parameters have been tuned, the model is fit on the testing dataset
Step44: Although from a computational perspective it requires more effort, it presents better results than the previous algorithms. In this case, nearly 73% has been achieved, competing against the multiclass algorithm in terms of accuracy but not in terms of computational effort.
Step45: The algorithm presents overfitting as can be seen from the classification report. Although recall and precision are one, in reality they are lower than one, having an overall accuracy of 79.37%. Furthermore, the time required to fit the dataset is higher than the one required with the Naive Bayes algorithms.
Tf-idf
A linear support vector classifier has been set up and tuned on the training data and run on the test set. The hyperparameters that have been tuned are
Step46: Once the parameters have been tuned, the model is fit on the testing dataset
Step47: Although from a computational perspective it requires more effort, it presents better results than the previous algorithms. In this case, nearly 79% has been achieved competing agasint the multiclass algorithm in terms of accuracy but not in terms of computational effort. | Python Code:
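The notebook's import cell is not included in this excerpt; the code below appears to assume roughly the following imports, reconstructed only from the calls that are visible, so treat it as an assumption rather than the author's original cell.
import glob
import codecs
import re
import string
import numpy as np
import pandas as pd
import spacy
from nltk.corpus import stopwords
from nltk.stem.wordnet import WordNetLemmatizer
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.model_selection import train_test_split, KFold, GridSearchCV, cross_val_score
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.naive_bayes import BernoulliNB, MultinomialNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import classification_report, confusion_matrix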
# Create a list of all of our book files.
book_filenames_austen = sorted(glob.glob("/home/borjaregueral/challengesuper2/austen/*.txt"))
book_filenames_chesterton = sorted(glob.glob("/home/borjaregueral/challengesuper2/chesterton/*.txt"))
book_filenames_conandoyle = sorted(glob.glob("/home/borjaregueral/challengesuper2/conandoyle/*.txt"))
book_filenames_elliot = sorted(glob.glob("/home/borjaregueral/challengesuper2/elliot/*.txt"))
Explanation: Introduction
The purpose of this challenge is to classify authors using different novels that they have written. In this case supervised techniques have been used and compared to see which one is giving better results using tfidf and bag of words in all of them. Regarding the corpus, the authors have been chosen randomly from the Gutenberg Project together with 7 novels from those authors. Although initially ten novels were picked, due to computing restrictions only seven have been left for the classification purposes. The authors that have been picked are:
Jane Austen
Chesterton
Conan Doyle
Charles Dickens
Elliot
In this notebook we will see the following steps:
Retrieve and store the data creating the dataset
Cleanse and parse and tokenize texts
Generate features and select the most appropriate for the models
Supervised models
Increase the performance of one of the models by 5 percentage points
To run the supervised parts of this challenge a new virtual machine has been set up to improve the computational performance. After initial trials on the machine with increased RAM (12GB), the conditions of the challenge were too resource intensive, which is the reason why a virtual machine with 8 vCPUs and 30 GB of memory was set up using Google Compute Engine.
1. Retrieve and store the data creating the dataset
Ten novels from four different authors have been retrieved from the Gutenberg Project and a list of all the book files is created.
End of explanation
#Read and add the text of each book to corpus_raw.
corpus_raw_austen = u""
for book_filename in book_filenames_austen:
print("Reading '{0}'...".format(book_filename))
with codecs.open(book_filename, "r", "utf-8") as book_file:
corpus_raw_austen += book_file.read()
print("Corpus is now {0} characters long".format(len(corpus_raw_austen)))
print()
#Read and add the text of each book to corpus_raw.
corpus_raw_chesterton = u""
for book_filename in book_filenames_chesterton:
print("Reading '{0}'...".format(book_filename))
with codecs.open(book_filename, "r", "utf-8") as book_file:
corpus_raw_chesterton += book_file.read()
print("Corpus is now {0} characters long".format(len(corpus_raw_chesterton)))
print()
#Read and add the text of each book to corpus_raw.
corpus_raw_conandoyle = u""
for book_filename in book_filenames_conandoyle:
print("Reading '{0}'...".format(book_filename))
with codecs.open(book_filename, "r", "utf-8") as book_file:
corpus_raw_conandoyle += book_file.read()
print("Corpus is now {0} characters long".format(len(corpus_raw_conandoyle)))
print()
#Read and add the text of each book to corpus_raw.
corpus_raw_elliot = u""
for book_filename in book_filenames_elliot:
print("Reading '{0}'...".format(book_filename))
with codecs.open(book_filename, "r", "utf-8") as book_file:
corpus_raw_elliot += book_file.read()
print("Corpus is now {0} characters long".format(len(corpus_raw_elliot)))
print()
doc_complete = [corpus_raw_austen, corpus_raw_chesterton, corpus_raw_conandoyle,
corpus_raw_elliot]
book_file.close()
Explanation: The information is added to the copus and stored as raw books so that they can be cleansed
End of explanation
#Create a set of stopwords in english from nltk
stop = set(stopwords.words('english'))
# Create a set of punctuation marks to exclude them from the text
exclude = set(string.punctuation)
# Call the lemmatizer
lemma = WordNetLemmatizer()
#Define a cleaning function that incorporates the different steps in the pipeline to clean the texts
def clean(doc):
doc = re.sub(r'--',' ',doc)
doc = re.sub("[\[].*?[\]]", "", doc)
doc = re.sub(r'Chapter \d+', '', doc)
doc = re.sub(r'CHAPTER .*', '', doc)
doc = re.sub('[0-9]+', '', doc)
doc = re.sub("^\d+\s|\s\d+\s|\s\d+$", " ", doc)
stop_free = " ".join([i for i in doc.lower().split() if i not in stop])
punc_free = ''.join(ch for ch in stop_free if ch not in exclude)
normalized = " ".join(lemma.lemmatize(word) for word in punc_free.split())
return normalized
#Create a list of lists with all the documents
doc_clean = [clean(doc) for doc in doc_complete]
# Parse the cleaned novels
#load spacy for english language as all novels are in english
nlp = spacy.load('en')
#Parse novels one by one to maintain the author tagging
austen_doc = nlp(doc_clean[0])
chesterton_doc = nlp(doc_clean[1])
conandoyle_doc = nlp(doc_clean[2])
elliot_doc = nlp(doc_clean[3])
# Group into sentences.
austen_sents = [[str(sent), "Austen"] for sent in austen_doc.sents]
chesterton_sents = [[str(sent), "Chesterton"] for sent in chesterton_doc.sents]
conandoyle_sents = [[str(sent), "Conandoyle"] for sent in conandoyle_doc.sents]
elliot_sents = [[str(sent), "elliot"] for sent in elliot_doc.sents]
# Combine the sentences from the two novels into one data frame.
names = ['Sentences','Author']
sent = pd.DataFrame(austen_sents + chesterton_sents +
conandoyle_sents +
elliot_sents, columns = names)
#Plot the contribution of each author to the corpus (sentences)
sent.Author.value_counts().plot(kind='bar', grid=False, figsize=(16, 9))
#Aadd numerical column to tag the authors for supervised classification
sent.loc[sent['Author'] == 'Austen', 'Target'] = 0
sent.loc[sent['Author'] == 'Chesterton', 'Target'] = 1
sent.loc[sent['Author'] == 'Conandoyle', 'Target'] = 2
sent.loc[sent['Author'] == 'elliot', 'Target'] = 3
Explanation: 2. Cleanse and parse and tokenize text
Before generating the features, and to increase their explanatory power, the text has been cleaned and parsed accordingly. The books have gone through an initial set of cleansing actions before being parsed using Spacy, to reduce the computing effort required by the latter, and have then been cleaned again before the feature generation.
The initial cleansing action has had three steps. The first step consisted of deleting all references to the Gutenberg Project from every book. This way, it has been avoided that words like “Gutenberg” and “Gutenberg Project” appear as features and distort the clustering of the authors.
As described below, cleaning actions have gone from removing all references to chapters, digits, double whitespaces and references to numbers like dates and ordinal numbers. This has been followed by removing punctuation and common stop words that would only add noise to the features that are generated afterwards.
The remaining words, considered to have the most explanatory power regarding each of the titles from the authors, have been lemmatized and stemmed, reducing the computing resources needed by up to 60%. In the first case words from the same family are reduced to their lemmas and in the second case, additional prefixes and suffixes are removed. All cleaning operations have been carried out in a way that the remaining sentences are stored in a list of lists.
End of explanation
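The pipeline above only lemmatizes; the stemming mentioned in this step is not shown in the excerpt. A possible way to add it with NLTK's SnowballStemmer is sketched below — the extra step and its names are an assumption, not the author's code, and the final line is left commented so it does not change the downstream features.
from nltk.stem.snowball import SnowballStemmer

stemmer = SnowballStemmer('english')

def clean_and_stem(doc):
    # reuse the existing cleaning/lemmatizing function, then stem each remaining token
    normalized = clean(doc)
    return " ".join(stemmer.stem(word) for word in normalized.split())

#doc_clean_stemmed = [clean_and_stem(doc) for doc in doc_complete]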
#Transform into Bag of Words
vec = CountVectorizer(max_df = 0.75 , min_df = 2 , ngram_range = (1,4), stop_words = 'english')
#Build the predictors and the predicted variable applying BoW.
X = vec.fit_transform(sent['Sentences'])
y = sent['Target']
#Split the data set into train and test 70/30
X_train_bow, X_test_bow, y_train_bow, y_test_bow = train_test_split(X,y, test_size=0.30, random_state=1234)
X_train_bow.shape
Explanation: 3. Generate features and select the most appropriate for the models
Features using BoW
Texts have been vectorized using bag of words. In this case the algorithm counts the number of times a word appears in a certain text. During the creation of the bag of words space, ngrams of up to 4 components have been considered, together with English stop words to remove noise from the dataset. Due to the authors that have been chosen, this method will bias the models towards the authors that have longer texts, being Elliot and Austen compared to Conan Doyle and Chesterton. The total number of features is 52k.
End of explanation
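A quick, optional way to confirm the size of the bag-of-words space and peek at a few of the n-grams it produced (purely illustrative, not part of the original notebook):
print('Number of BoW features: {}'.format(len(vec.vocabulary_)))
print(sorted(vec.vocabulary_.keys())[:10])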
#Transform into Tf-idf considering the relative frequency
vect = TfidfVectorizer(norm = 'l2', max_df = 0.75 , min_df = 2 , ngram_range = (1,4), stop_words = 'english',
use_idf = True, sublinear_tf = True)
#Build the predictors and the predicted variable applying BoW.
X_tfidf = vect.fit_transform(sent['Sentences'])
y_tfidf = sent['Target']
#Split the data set into train and test 70/30
X_train_tfidf, X_test_tfidf, y_train_tfidf, y_test_tfidf = train_test_split(X_tfidf,y_tfidf, test_size=0.30, random_state=1234)
Explanation: Features using Tf-idf
When using tfidf, the frequency of appearance is normalized, and only the terms that appear in less than 75% of the documents are considered. With this method, the value counts are smoothed considering additional features of the word such as the amount of information it adds to describe the novel. As in the case of the bag of words, ngrams of up to four components have been considered, stop words removed and the sublinear_tf option used, which scales the word count and smoothens it by the frequency of appearance in the document and within a document.
End of explanation
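Similarly, an optional check of the tf-idf space: the shape of the matrix and the n-grams with the highest idf, i.e. the rarest and most document-specific ones. This is illustrative only and assumes the fitted vectorizer above.
import numpy as np

print('Tf-idf matrix shape: {}'.format(X_tfidf.shape))
terms = np.array(sorted(vect.vocabulary_, key=vect.vocabulary_.get))
top = np.argsort(vect.idf_)[-10:]
print(terms[top])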
#KFold for cross validation analysis
kf = KFold(n_splits=5, shuffle=True, random_state=123)
Explanation: Five folds have been defined and will be used to tune and evaluate the models
End of explanation
# Initialize and fit the model.
log_reg_bow = LogisticRegression(class_weight='balanced', penalty = 'l2', multi_class= 'multinomial', max_iter = 1000)
#Tune parameters: C parameter
c_param = [ 0.1, 0.5, 1 ]
#Tune the type of penalty used between l1 and l2
solver_param = ['newton-cg', 'lbfgs']
parameters = {'C': c_param, 'solver': solver_param}
#Fit parameters
log_reg_tuned_bow = GridSearchCV(log_reg_bow, param_grid=parameters, n_jobs = -1, cv=kf, verbose = 1)
#Fit the tunned classifier in the training space
log_reg_tuned_bow.fit(X_train_bow, y_train_bow)
#Print the best parameters
print(('Best paramenters logistic regression BoW:\n {}\n').format(log_reg_tuned_bow.best_params_))
Explanation: 4. Supervised models
All models have been run using the features obtained through bag of words and tfidf. In this case results are compared to see which one gives a better overall accuracy as it has been used as the score function. In all cases cross validation over five folds is applied.
Logistic Regression Classifier
Bag of Words
A Logistic Regression Classifier is trained using the features obtained through tfidf. Additionally, using gridsearch the parameters are tuned. As the length of the texts, and therefore the features per author, are not balanced, the class weight is set up so that it considers unbalanced classes.
End of explanation
#Once the model has been trained test it on the test dataset
log_reg_tuned_bow.fit(X_test_bow, y_test_bow)
# Predict on test set
predtest_y_bow = log_reg_tuned_bow.predict(X_test_bow)
Explanation: After the parameters are tunned, the model is fit in the test dataset. As a measurement of the computing effort it requires 3.6 min to fit the test set.
End of explanation
#Evaluation of the model (testing)
target_names = ['0.0', '1.0', '2.0', '3.0']
print(('Classification Report BoW: \n {}')
.format(classification_report(y_test_bow, predtest_y_bow,
target_names=target_names)))
confusion_bow = confusion_matrix(y_test_bow, predtest_y_bow)
print(('Confusion Matrix BoW: \n\n {}\n'
).format(confusion_bow))
print(('Logistic Regression set accuracy BoW: {0:.2f} % \n'
).format(cross_val_score(log_reg_tuned_bow, X_test_bow, y_test_bow,cv=kf).mean()*100
))
Explanation: The model is evaluated on the test set. In this case the solver has been chosen between the different options that support multiclass classification. As it can be seen in the classification report the model presents overfitting being the precision and recall close to one in all classes expect for class five (Huxley) which is the one that reduces the overall accuracy of the model.
End of explanation
# Initialize and fit the model.
log_reg_tfidf = LogisticRegression(class_weight='balanced', penalty = 'l2', multi_class= 'multinomial', max_iter = 600)
#Tune parameters
#C parameter
c_param = [ 0.1, 0.5, 1 ]
#Tune the type of penalty used between l1 and l2
solver_param = ['newton-cg','lbfgs']
parameters = {'C': c_param, 'solver': solver_param}
#Fit parameters
log_reg_tuned_tfidf = GridSearchCV(log_reg_tfidf, param_grid=parameters, n_jobs = -1, cv=kf, verbose = 1)
#Fit the tunned classifier in the training space
log_reg_tuned_tfidf.fit(X_train_tfidf, y_train_tfidf)
#Print the best parameters
print(('Best paramenters logistic regression Tfidf: \n{}\n'
).format(log_reg_tuned_tfidf.best_params_))
Explanation: The logistic regression model is computationally efficient as it fits the dataset with over 50k in less than two minutes making it a string candidate to move intro production. The overall accuracy is nearly 77% which is roughly five percentage points more than in the challenge for this unit. The accuracy is higher than the one obainted by undsupervised methdos using clustering as is much more stable. In this case, the introduction of the test set, unseen by the model is not provoking unstable classifications.
TF-idf
A Logistic Regression Classifier is trained using the features obtained through tfidf. Additionally, using fridsearch the parameters are tunned. As length of texts and therefore the features per author are not balanced, the class weight is set up so that is consideres unbalanced classes. In this case the parameter of the model C is higher than the one used with the bag of words.
End of explanation
#Once the model has been trained test it on the test dataset
log_reg_tuned_tfidf.fit(X_test_tfidf, y_test_tfidf)
# Predict on test set
predtest_y_tfidf = log_reg_tuned_tfidf.predict(X_test_tfidf)
Explanation: After the parameters are tunned, the model is fit in the test dataset. As a measurement of the computing effort it requires less than one min to fit the test set.
End of explanation
#Evaluation of the model (testing)
target_names = ['0.0', '1.0', '2.0', '3.0']
print(('Classification Report Tf-idf: \n {}')
.format(classification_report(y_test_tfidf, predtest_y_tfidf,
target_names=target_names)))
confusion_tfidf = confusion_matrix(y_test_tfidf, predtest_y_tfidf)
print(('Confusion Matrix Tf-idf: \n\n {}\n'
).format(confusion_tfidf))
print(('Logistic Regression set accuracy Tf-idf: {0:.2f} % \n'
).format(cross_val_score(log_reg_tuned_tfidf, X_test_tfidf, y_test_tfidf,cv=kf).mean()*100
))
Explanation: The model is evaluated on the test set. In this case the solver has been chosen between the different options that support multiclass classification. As it can be seen in the classification report the model presents overfitting being the precision and recall close to one in all classes expect for class five (Huxley) which is the one that reduces the overall accuracy of the model.
End of explanation
# Initialize and fit the model.
naive_bayes_bernoulli_bow = BernoulliNB()
#Tune hyperparameters
#Create range of values to fit parameters
alpha = [0.0001, 0.001, 0.01]
parameters = {'alpha': alpha}
#Fit parameters using gridsearch
naive_bayes_bernoulli_tuned_bow = GridSearchCV(naive_bayes_bernoulli_bow, n_jobs = -1, param_grid=parameters, cv=kf, verbose = 1)
#Fit the tunned classifier in the training space
naive_bayes_bernoulli_tuned_bow.fit(X_train_bow, y_train_bow)
#Print the best parameters
print(('Best paramenters logistic Naive-Bayes Bernoulli BoW: \n{}\n').format(naive_bayes_bernoulli_tuned_bow.best_params_))
Explanation: The logistic regression model is computationally efficient as it fits the dataset with over 80k in less than two minutes making it a string candidate to move intro production. The overall accuracy is nearly 80% which is roughly five percentage points more than in the challenge for this unit. The accuracy is higher than the one obainted by undsupervised methdos using clustering as is much more stable. In this case, the introduction of the test set, unseen by the model is not provoking unstable classifications.
Naive-Bayes Classifiers
Bernoulli Classifier
Bag of Words
A Bernoulli classifier has been tunned and trained in the feautures obtained through Tf-idf. In this case the simplicity of the model added to the good classification results make of this model a good candidate to move into production. The time required to train it is lower than the time required to train the logistic regression one.
End of explanation
#Once the model has been trained test it on the test dataset
naive_bayes_bernoulli_tuned_bow.fit(X_test_bow, y_test_bow)
# Predict on test set
predtest_y_bow = naive_bayes_bernoulli_tuned_bow.predict(X_test_bow)
Explanation: After several runs, with different extremes in the values of the alpha parameter, the parameter chosen is always the one closer to zero. This means that the smoothing parameter is very low so the additive smoothing required is low. The model is fit within seconds which makes it a strong candidate (the best one from a computational and speed standpoint) to move intro production.
End of explanation
#Evaluation of the model (testing)
target_names = ['0.0', '1.0', '2.0', '3.0']
print(('Classification Report BoW: \n {}\n').format(
classification_report(y_test_bow, predtest_y_bow,
target_names=target_names)))
confusion_bow = confusion_matrix(y_test_bow, predtest_y_bow)
print(('Confusion Matrix BoW: \n\n {}\n\n').format(confusion_bow))
print(('Bernoulli Classifier set accuracy BoW: {0:.2f} %\n').format(cross_val_score(naive_bayes_bernoulli_tuned_bow,
X_test_bow,
y_test_bow,cv=kf).mean()*100))
Explanation: The model is evaluated using cross validation and five folds. In this case as in the case of logistic regression the model presents overfitting as it can be seen from the classification report. Both precision and recall is one for this reason.
End of explanation
# Initialize and fit the model.
naive_bayes_bernoulli_tfidf = BernoulliNB()
#Tune hyperparameters
#Create range of values to fit parameters
alpha = [0.001, 0.01,0.1]
parameters = {'alpha': alpha}
#Fit parameters using gridsearch
naive_bayes_bernoulli_tuned_tfidf = GridSearchCV(naive_bayes_bernoulli_tfidf,
n_jobs = -1,
param_grid=parameters,
cv=kf, verbose = 1)
#Fit the tunned classifier in the training space
naive_bayes_bernoulli_tuned_tfidf.fit(X_train_tfidf, y_train_tfidf)
#Print the best parameters
print(('Best paramenters logistic Naive-Bayes Bernoulli Tfidf: \n{}\n').format(naive_bayes_bernoulli_tuned_tfidf.best_params_))
Explanation: The overall accuracy of the model is slightly lower than the accuracy obtained with the logistic regression classifier. However, the time required to fit the model is at least one tenth of the time required for the logistic regression presenting both overfitting. Hence, if overall accuracy is what is tried to be improved, this is the best model with a very small loss of accuracy scoring 81.75%.
Tf-idf
A Bernoulli classifier has been tunned and trained in the feautures obtained through Tf-idf. In this case the simplicity of the model added to the good classification results make of this model a good candidate to move into production. The time required to train it is lower than the time required to train the logistic regression one.
End of explanation
#Once the model has been trained test it on the test dataset
naive_bayes_bernoulli_tuned_tfidf.fit(X_test_tfidf, y_test_tfidf)
# Predict on test set
predtest_y_tfidf = naive_bayes_bernoulli_tuned_tfidf.predict(X_test_tfidf)
Explanation: After several runs, with different extremes in the values of the alpha parameter, the parameter chosen is always the one closer to zero. This means that the smoothing parameter is very low so the additive smoothing required is low. The model is fit within seconds which makes it a strong candidate (the best one from a computational and speed standpoint) to move intro production.
End of explanation
#Evaluation of the model (testing)
target_names = ['0.0', '1.0', '2.0', '3.0']
print(('Classification Report Tfidf: \n {}').format(classification_report(y_test_tfidf, predtest_y_tfidf,
target_names=target_names)))
confusion_tfidf = confusion_matrix(y_test_tfidf, predtest_y_tfidf)
print(('Confusion Matrix Tf-idf: \n\n {}\n').format(confusion_tfidf))
print(('Bernoulli Classifier Tf-Idf set accuracy Tf-idf: {0:.2f} % \n').format(cross_val_score(naive_bayes_bernoulli_tuned_tfidf,
X_test_tfidf,
y_test_tfidf,
cv=kf).mean()*100))
Explanation: he model is evaluated using cross validation and five folds. In this case as in the case of logistic regression the model presents overfitting as it can be seen from the classification report. Both precision and recall is one for this reason.
End of explanation
# Initialize and fit the model.
naive_bayes_multinomial_bow = MultinomialNB()
#Tune hyperparameters
#Create range of values to fit parameters
alpha = [0.01,0.1,0.5]
parameters = {'alpha': alpha}
#Fit parameters using gridsearch
naive_bayes_multinomial_tuned_bow = GridSearchCV(naive_bayes_multinomial_bow,
n_jobs = -1,
param_grid=parameters,
cv=kf, verbose = 1)
#Fit the tunned classifier in the training space
naive_bayes_multinomial_tuned_bow.fit(X_train_bow, y_train_bow)
#Print the best parameters
print(('Best paramenters Naive-Bayes Multinomial BoW:\n {}\n').format(
naive_bayes_multinomial_tuned_bow.best_params_))
Explanation: The overall accuracy of the model is slightly higher than the accuracy obtained with the logistic regression classifier (81.58%). However, the time required to fit the model is at least one tenth of the time required for the logistic regression presenting both overfitting. In this case is class seven (Shaw) the one that shows the lowest precision being the one that determines the lower value of the overall accuracy when compared to the Bernoulli model. Hence, if overall accuracy is what is tried to be improved, this is the best model with a very small loss of accuracy
Multinomial Classifier
BoW
A multinomial classifier is trained on the features obtained using tfidf and evaluated on the holdout. In this case, as in the previous Navy Bayes classification used, alpha always gets the value cloaer to zero, therefore there is no additive smoothing used in this classifier. From a compuational effort standpoint, as in the previous case, this is the one that requires less time to fit making it a strong candidate to move into production.
End of explanation
#Once the model has been trained test it on the test dataset
naive_bayes_multinomial_tuned_bow.fit(X_test_bow, y_test_bow)
# Predict on test set
predtest_y_bow = naive_bayes_multinomial_tuned_bow.predict(X_test_bow)
Explanation: The value of alpha is in all trials the closest one to zero being the additive smoothing lose. In this case the time required for fitting is less than one minute. The model is then evaluated on the test set. For that, the first step is to fit the test hodout of the dataset.
End of explanation
#Evaluation of the model (testing)
target_names = ['0.0', '1.0', '2.0', '3.0']
print(('Classification Report BoW: \n {}\n').format(
classification_report(y_test_bow, predtest_y_bow,
target_names=target_names)))
confusion_bow = confusion_matrix(y_test_bow, predtest_y_bow)
print((
'Confusion Matrix BoW: \n\n {}\n\n').format(confusion_bow))
print((
'Multinomial Classifier set accuracy BoW: {0:.2f} %\n'
).format(cross_val_score(naive_bayes_multinomial_tuned_bow, X_test_bow, y_test_bow,cv=kf).mean()*100))
Explanation: The model presents overfitting and the accuracy is slightly higher than in the previous case 3% more. The confusion matrix presents a lower number of false positives and negatives for all categories, taking into account that the size of each of them is different results are consistent across all of them.
End of explanation
# Initialize and fit the model.
naive_bayes_multinomial_tfidf = MultinomialNB()
#Tune hyperparameters
#Create range of values to fit parameters
alpha = [0.01,0.1,0.5,1]
parameters = {'alpha': alpha}
#Fit parameters using gridsearch
naive_bayes_multinomial_tuned_tfidf = GridSearchCV(naive_bayes_multinomial_tfidf,
n_jobs = -1,
param_grid=parameters,
cv=kf, verbose = 1)
#Fit the tunned classifier in the training space
naive_bayes_multinomial_tuned_tfidf.fit(X_train_tfidf, y_train_tfidf)
#Print the best parameters
print(('Best paramenters Naive-Bayes Multinomial BoW:\n {}\n').format(
naive_bayes_multinomial_tuned_tfidf.best_params_))
Explanation: The time required to fit the model is lower than in any other case presenting a higher accuracy. In this case, the accuracy is close to 84.12% while the classification report shows values close to one, showing that there is overfitting. Hence, from the classifiers evaluated until now this is the one that presents better results, from an accuracy and a computational effort perspective. This is the best candidate to move into production for the moment.
Tf-idf
A multinomial classifier is trained on the features obtained using tfidf and evaluated on the holdout. In this case, as in the previous Navy Bayes classification used, alpha always gets the value cloaer to zero, therefore there is no additive smoothing used in this classifier. From a compuational effort standpoint, as in the previous case, this is the one that requires less time to fit making it a strong candidate to move into production.
End of explanation
#Once the model has been trained test it on the test dataset
naive_bayes_multinomial_tuned_tfidf.fit(X_test_tfidf, y_test_tfidf)
# Predict on test set
predtest_y_tfidf = naive_bayes_multinomial_tuned_tfidf.predict(X_test_tfidf)
Explanation: he value of alpha is in all trials the closest one to zero being the additive smoothing lose. In this case the time required for fitting is less than one minute. The model is then evaluated on the test set. For that, the first step is to fit the test hodout of the dataset.
End of explanation
#Evaluation of the model (testing)
target_names = ['0.0', '1.0', '2.0', '3.0']
print(('Classification Report tfidf: \n {}').format(classification_report(y_test_tfidf,
predtest_y_tfidf,
target_names=target_names)))
confusion_tfidf = confusion_matrix(y_test_tfidf, predtest_y_tfidf)
print(('Confusion Matrix Tf-idf: \n\n {}\n').format(confusion_tfidf))
print(('Multinomial Classifier set accuracy Tf-idf: {0:.2f} % \n').format(cross_val_score(naive_bayes_multinomial_tuned_tfidf,
X_test_tfidf,
y_test_tfidf,
cv=kf).mean()*100))
Explanation: The model presents overfitting and the accuracy is slightly higher than in the previous case 3% more. The confusion matrix presents a lower number of false positives and negatives for all categories, taking into account that the size of each of them is different results are consistent across all of them.
End of explanation
# Initialize and fit the model.
KNN_bow = KNeighborsClassifier(weights = 'distance')
#Tune hyperparameters
#Create range of values to fit parameters
neighbors = [3, 5, 7,9]
#Fit parameters
parameters = {'n_neighbors': neighbors}
#Fit parameters using gridsearch
KNN_tuned_bow = GridSearchCV(KNN_bow, param_grid=parameters, n_jobs = -1, cv=kf, verbose = 1)
#Fit the tunned classifier in the training space
KNN_tuned_bow.fit(X_train_bow, y_train_bow)
#Print the best parameters
print(('Best paramenters KNN BoW:\n {}\n').format(
KNN_tuned_bow.best_params_))
Explanation: The time required to fit the model is lower than in any other case presenting a higher accuracy. In this case, the accuracy is close to 83.67% while the classification report shows values close to one, showing that there is overfitting. Hence, from the classifiers evaluated until now this is the one that presents better results, from an accuracy and a computational effort perspective. This is the best candidate to move into production for the moment.
KNN Classifier
Bag of Words
The KNN classifier has been fit using bag of words. In this case, during the gridsearch, five neighbors were selected as the optimum number of neighbors when using bag of words.
End of explanation
#Once the model has been trained test it on the test dataset
KNN_tuned_bow.fit(X_test_bow, y_test_bow)
# Predict on test set
predtest_y_bow = KNN_tuned_bow.predict(X_test_bow)
Explanation: Once the model has been tuned, it is fit in the test holdout
End of explanation
#Evaluation of the model (testing)
target_names = ['0.0', '1.0', '2.0', '3.0']
print(('Classification Report BoW: \n {}\n').format(
classification_report(y_test_bow, predtest_y_bow,
target_names=target_names)))
confusion_bow = confusion_matrix(y_test_bow, predtest_y_bow)
print((
'Confusion Matrix BoW: \n\n {}\n\n').format(confusion_bow))
print((
'KNN accuracy BoW: {0:.2f} %\n'
).format(cross_val_score(KNN_tuned_bow, X_test_bow, y_test_bow,cv=kf).mean()*100))
Explanation: The evaluation of the model is done using the classification report, confusion matrix and overall accuracy. In this case KNN performs worse than the other models, as it does not have enough data. From the classification report it can be seen that the model is not overfitting, with precision and recall that are high but not equal to one. Author two is the one scoring the worst results.
End of explanation
# Initialize and fit the model.
KNN_tfidf = KNeighborsClassifier(weights = 'distance')
#Tune hyperparameters
#Create range of values to fit parameters
neighbors = [3, 5, 7,9]
#Fit parameters
parameters = {'n_neighbors': neighbors}
#Fit parameters using gridsearch
KNN_tuned_tfidf = GridSearchCV(KNN_tfidf,
param_grid=parameters,
n_jobs = -1,
cv=kf,
verbose = 1)
#Fit the tunned classifier in the training space
KNN_tuned_tfidf.fit(X_train_tfidf, y_train_tfidf)
#Print the best parameters
print(('Best paramenters KNN Tfidf:\n {}\n').format(KNN_tuned_tfidf.best_params_))
Explanation: The model is scoring well below the accuracy that is normally achieved with KNN. One of the reasons is the amount of data used to fit the model.
Tf-idf
The model is fit on the training set using the features obtained with tfidf. In this case the tuning of the model gives smaller parameter values, as the features have already been smoothed, with the number of neighbors equal to three.
End of explanation
#Once the model has been trained test it on the test dataset
KNN_tuned_tfidf.fit(X_test_tfidf, y_test_tfidf)
# Predict on test set
predtest_y_tfidf = KNN_tuned_tfidf.predict(X_test_tfidf)
Explanation: Once the parameters are tuned the model is fit on the test set.
End of explanation
#Evaluation of the model (testing)
target_names = ['0.0', '1.0', '2.0', '3.0']
print(('Classification Report Tfidf: \n {}\n').format(
classification_report(y_test_tfidf, predtest_y_tfidf,
target_names=target_names)))
confusion_tfidf = confusion_matrix(y_test_tfidf, predtest_y_tfidf)
print((
'Confusion Matrix Tfidf: \n\n {}\n\n').format(confusion_tfidf))
print((
'KNN accuracy Tfidf: {0:.2f} %\n'
).format(cross_val_score(KNN_tuned_tfidf, X_test_tfidf, y_test_tfidf,cv=kf).mean()*100))
Explanation: In this case, the accuracy obtained with tfidf is not very different from the accuracy obtained with the bag of words. Better results would be obtained if more data were used to fit the model.
End of explanation
# Initialize and fit the model.
SGD_bow = SGDClassifier(class_weight = 'balanced', max_iter=1000)
#Tune hyperparameters
#Create range of values to fit parameters
loss_param = ['hinge', 'squared_hinge']
penalty_param = ['l2', 'elasticnet']
alpha_param = [0.1, 1, 10, 100]
#Fit parameters
parameters = {'loss': loss_param,
'penalty': penalty_param,
'alpha': alpha_param}
#Fit parameters using gridsearch
SGD_tuned_bow = GridSearchCV(SGD_bow, param_grid=parameters, n_jobs = -1, cv=kf, verbose = 1)
#Fit the tunned classifier in the training space
SGD_tuned_bow.fit(X_train_bow, y_train_bow)
#Print the best parameters
print(('Best paramenters SGD BoW:\n {}\n').format(
SGD_tuned_bow.best_params_))
Explanation: Regarding the time used by this model, it is unexpectedly low because it runs over a small dataset; this is also the reason why the values obtained are so low when compared to the results obtained through the bag of words.
SGD Classifier
Bag of Words
The SGD classifier is fit on the training set. The SGD Classifier uses regularized linear models with stochastic gradient descent learning: the model is updated, with a decreasing learning rate, after the gradient of the loss is estimated for each sample. This classifier can work with sparse data such as the one obtained from bag of words. In this case, of the penalty types the algorithm accepts, it uses L2 instead of a combination of L1 and L2 implemented through Elastic Net.
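For intuition (an aside with a made-up weight vector, not taken from the fitted model), the two penalty options being compared differ only in how they mix the L1 and L2 terms:
import numpy as np
w, alpha, l1_ratio = np.array([0.5, -1.2, 0.0]), 0.1, 0.15   # hypothetical weights and mixing value
l2_penalty = alpha * 0.5 * np.dot(w, w)                      # the pure 'l2' option
elasticnet_penalty = alpha * ((1 - l1_ratio) * 0.5 * np.dot(w, w) + l1_ratio * np.abs(w).sum())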
End of explanation
#Once the model has been trained test it on the test dataset
SGD_tuned_bow.fit(X_test_bow, y_test_bow)
# Predict on test set
predtest_y_bow = SGD_tuned_bow.predict(X_test_bow)
Explanation: The parameters show that the smoothing continues to be loose as a first option, as this is a regression with a gradient descent algorithm. Regarding the loss, the hinge loss is used, which means that the real loss, in case it is not convergent due to the sparse data used, is replaced by its upper bound, forcing convergence. The time required is significantly higher than in the case of the Naive Bayes classifiers.
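As a quick numerical aside (made-up labels and scores, not taken from the fitted model), the hinge loss only penalises points on the wrong side of the margin, which is what makes it a convex upper bound on the 0-1 loss:
import numpy as np
y = np.array([1, -1, 1])                 # hypothetical labels in {-1, +1}
scores = np.array([0.3, -2.0, -0.5])     # hypothetical decision-function values
hinge = np.maximum(0., 1. - y * scores)  # zero for confident correct points, linear otherwise
print(hinge)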
End of explanation
#Evaluation of the model (testing)
target_names = ['0.0', '1.0', '2.0', '3.0']
print(('Classification Report BoW: \n {}\n').format(
classification_report(y_test_bow, predtest_y_bow,
target_names=target_names)))
confusion_bow = confusion_matrix(y_test_bow, predtest_y_bow)
print((
'Confusion Matrix BoW: \n\n {}\n\n').format(confusion_bow))
print((
'SGD accuracy BoW: {0:.2f} %\n'
).format(cross_val_score(SGD_tuned_bow, X_test_bow, y_test_bow,cv=kf).mean()*100))
Explanation: This model presents overfitting, as precision and recall are equal to one for every class. The confusion matrix shows a low number of false negatives and positives per class, with classes more or less evenly represented except for class three.
End of explanation
# Initialize and fit the model.
SGD_tfidf = SGDClassifier(class_weight = 'balanced', max_iter=1000)
#Tune hyperparameters
#Create range of values to fit parameters
loss_param = ['hinge', 'squared_hinge']
penalty_param = ['elasticnet', 'l2' ]
alpha_param = [1, 0.0001, 0.001, 0.01, 0.1]
#Fit parameters
parameters = {'loss': loss_param,
'penalty': penalty_param,
'alpha': alpha_param}
#Fit parameters using gridsearch
SGD_tuned_tfidf = GridSearchCV(SGD_tfidf, param_grid=parameters, n_jobs = -1, cv=kf, verbose = 1)
#Fit the tunned classifier in the training space
SGD_tuned_tfidf.fit(X_train_tfidf, y_train_tfidf)
#Print the best parameters
print(('Best paramenters SDG Tfidf:\n {}\n').format(
SGD_tuned_tfidf.best_params_))
Explanation: In this case, the overall accuracy is 72.57%, very similar to the overall accuracy obtained using the multinomial classifier. The computational effort required by this model to achieve this accuracy is much higher than in the case of the multinomial classifier. Hence, from a production perspective, this model would not be recommended to move into production despite its high accuracy.
Tf-idf
The SGD Classifier uses regularized linear models with stochastic gradient descent learning: the model is updated, with a decreasing learning rate, after the gradient of the loss is estimated for each sample. This classifier can work with sparse data such as the one obtained from tfidf. In this case, of the penalty types the algorithm accepts, it uses L2 instead of a combination of L1 and L2 implemented through Elastic Net.
End of explanation
#Once the model has been trained test it on the test dataset
SGD_tuned_tfidf.fit(X_test_tfidf, y_test_tfidf)
# Predict on test set
predtest_y_tfidf = SGD_tuned_tfidf.predict(X_test_tfidf)
Explanation: The parameters show that the smoothing continues to be loose as a first option, as this is a regression with a gradient descent algorithm. Regarding the loss, the hinge loss is used, which means that the real loss, in case it is not convergent due to the sparse data used, is replaced by its upper bound, forcing convergence. The time required is significantly higher than in the case of the Naive Bayes classifiers.
End of explanation
#Evaluation of the model (testing)
target_names = ['0.0', '1.0', '2.0', '3.0']
print(('Classification Report Tfidf: \n {}\n').format(
classification_report(y_test_tfidf, predtest_y_tfidf,
target_names=target_names)))
confusion_tfidf = confusion_matrix(y_test_tfidf, predtest_y_tfidf)
print((
'Confusion Matrix Tfidf: \n\n {}\n\n').format(confusion_tfidf))
print((
'SGD accuracy Tfidf: {0:.2f} %\n'
).format(cross_val_score(SGD_tuned_tfidf, X_test_tfidf, y_test_tfidf,cv=kf).mean()*100))
Explanation: This model presents overfitting, as precision and recall are equal to one for every class. The confusion matrix shows a low number of false negatives and positives per class, with classes more or less evenly represented except for class one.
End of explanation
# Initialize and fit the model.
rf_bow = RandomForestClassifier(class_weight = 'balanced')
#Tune hyperparameters
#Create range of values to fit parameters
n_estimators_param = np.arange(250,401,20)
max_depth_param = np.arange(46,63,2)
#Fit parameters
parameters = {'n_estimators': n_estimators_param,
'max_depth': max_depth_param}
#Fit parameters using gridsearch
rf_tuned_bow = GridSearchCV(rf_bow, param_grid=parameters, n_jobs = -1, cv=kf, verbose = 1)
#Fit the tunned classifier in the training space
rf_tuned_bow.fit(X_train_bow, y_train_bow)
#Print the best parameters
print(('Best paramenters Random Forest BoW:\n {}\n').format(rf_tuned_bow.best_params_))
Explanation: In this case, the overall accuracy is 80.78%, very similar to the overall accuracy obtained using the multinomial classifier. The computational effort required by this model to achieve this accuracy is much higher than in the case of the multinomial classifier. Hence, from a production perspective, this model would not be recommended to move into production despite its high accuracy.
Random Forest
Bag of Words
The hyperparameters of the random forest model have been tuned one by one. After trying to tune them all at once, a significant increase of the overall performance of the classifier was obtained with the proposed method (one by one). The parameters to be tuned are (in the same order as the hyperparameter tuning has been performed):
N_estimators determining the number of trees that will be part of the algorithm.
Max depth determining the size of the tree.
End of explanation
#Once the model has been trained test it on the test dataset
rf_tuned_bow.fit(X_test_bow, y_test_bow)
# Predict on test set
predtest_y_bow = rf_tuned_bow.predict(X_test_bow)
Explanation: The tuned model is fit and run on the test set
End of explanation
#Evaluation of the model (testing)
target_names = ['0.0', '1.0', '2.0', '3.0']
print(('Classification Report BoW: \n {}\n').format(
classification_report(y_test_bow, predtest_y_bow,
target_names=target_names)))
confusion_bow = confusion_matrix(y_test_bow, predtest_y_bow)
print((
'Confusion Matrix BoW: \n\n {}\n\n').format(confusion_bow))
print((
'Random Forest accuracy BoW: {0:.2f} %\n'
).format(cross_val_score(rf_tuned_bow, X_test_bow, y_test_bow,cv=kf).mean()*100))
Explanation: The overall accuracy of the model has significantly increased compared to the previous classifiers, reaching 73%. This result is low for the type of classifier used, and it is also lower than the results obtained with other classifiers. In this case, author seven is the one that is decreasing the overall accuracy.
End of explanation
# Initialize and fit the model.
rf_tfidf = RandomForestClassifier(class_weight = 'balanced')
#Tune hyperparameters
#Create range of values to fit parameters
n_estimators_param = np.arange(100,201,10)
max_depth_param = np.arange(50,71,5)
#Fit parameters
parameters = {'n_estimators': n_estimators_param,
'max_depth': max_depth_param}
#Fit parameters using gridsearch
rf_tuned_tfidf = GridSearchCV(rf_tfidf, param_grid=parameters, n_jobs = -1, cv=kf, verbose = 1)
#Fit the tunned classifier in the training space
rf_tuned_tfidf.fit(X_train_tfidf, y_train_tfidf)
#Print the best parameters
print(('Best paramenters Random Forest Tfidf:\n {}\n').format(
rf_tuned_tfidf.best_params_))
Explanation: This classifier requires more time to run than the Naive Bayes ones and yields poorer results than them. Author three is the one that is reducing the overall accuracy.
Tf-idf
The hyperparameters of the random forest model have been tuned one by one. After trying to tune them all at once, a significant increase of the overall performance of the classifier was obtained with the proposed method (one by one). The parameters to be tuned are (in the same order as the hyperparameter tuning has been performed):
N_estimators determining the number of trees that will be part of the algorithm.
Max depth determining the size of the tree.
End of explanation
#Once the model has been trained test it on the test dataset
rf_tuned_tfidf.fit(X_test_tfidf, y_test_tfidf)
# Predict on test set
predtest_y_tfidf = rf_tuned_tfidf.predict(X_test_tfidf)
Explanation: The tuned model is fit and run on the test set
End of explanation
#Evaluation of the model (testing)
target_names = ['0.0', '1.0', '2.0', '3.0']
print(('Classification Report Tfidf: \n {}\n').format(
classification_report(y_test_tfidf, predtest_y_tfidf,
target_names=target_names)))
confusion_tfidf = confusion_matrix(y_test_tfidf, predtest_y_tfidf)
print((
'Confusion Matrix Tfidf: \n\n {}\n\n').format(confusion_tfidf))
print((
'Random Forest accuracy Tfidf: {0:.2f} %\n'
).format(cross_val_score(rf_tuned_tfidf, X_test_tfidf, y_test_tfidf,cv=kf).mean()*100))
Explanation: The overall accuracy of the model has significantly increased compared to the previous classifiers, reaching 73%. This result is low for the type of classifier used, and it is also lower than the results obtained with other classifiers. In this case, author seven is the one that is decreasing the overall accuracy.
End of explanation
# Initialize and fit the model.
LSVC_bow = LinearSVC(class_weight='balanced', multi_class = 'crammer_singer')
#Tune hyperparameters
#Create range of values to fit parameters
loss_param = ['hinge','squared_hinge']
C_param = [1, 10, 100, 100000]
#Fit parameters
parameters = { 'loss': loss_param,
'C': C_param}
#Fit parameters using gridsearch
LSVC_tuned_bow = GridSearchCV(LSVC_bow, param_grid=parameters, n_jobs = -1, cv=kf, verbose = 1)
#Fit the tunned classifier in the training space
LSVC_tuned_bow.fit(X_train_bow, y_train_bow)
#Print the best parameters
print(('Best paramenters LinearSVC BoW:\n {}\n').format(
LSVC_tuned_bow.best_params_))
Explanation: This classifier requires more time to run than the Naive Bayes ones and yields poorer results than them. Author three is the one that is reducing the overall accuracy.
SVC
Bag of Words
A linear support vector classifier has been set up and tuned on the training data and run on the test set. The hyperparameters that have been tuned are:
C parameter, acting on the margin hyperplane having a bigger margin when C is smaller. (The value of C will tell the SVM how much misclassification is to be avoided).
The loss parameter.
In this case the Crammer-Singer algorithm is used to solve the multiclass classification problem. This algorithm optimizes a joint objective over all classes, but it is not interesting from a production standpoint as it rarely leads to better accuracy and is more expensive to compute. Due to the size of the feature space, the linear SVC has been used instead of the SVC for computational reasons.
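To see the effect of C in isolation (an illustrative aside on synthetic data, not on the author corpus used here), a smaller C shrinks the weight norm and therefore widens the margin, which is 2/||w|| for a linear SVM:
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC
X_toy, y_toy = make_classification(n_samples=200, n_features=5, random_state=0)
for C_val in (0.01, 100):
    w = LinearSVC(C=C_val, max_iter=10000).fit(X_toy, y_toy).coef_
    print(C_val, np.linalg.norm(w), 2 / np.linalg.norm(w))   # margin width grows as C shrinks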
End of explanation
#Once the model has been trained test it on the test dataset
LSVC_tuned_bow.fit(X_test_bow, y_test_bow)
# Predict on test set
predtest_y_bow = LSVC_tuned_bow.predict(X_test_bow)
Explanation: Once the parameters have been tuned, the model is fit on the testing dataset.
End of explanation
#Evaluation of the model (testing)
target_names = ['0.0', '1.0', '2.0', '3.0']
print(('Classification Report BoW: \n {}\n').format(
classification_report(y_test_bow, predtest_y_bow,
target_names=target_names)))
confusion_bow = confusion_matrix(y_test_bow, predtest_y_bow)
print((
'Confusion Matrix BoW: \n\n {}\n\n').format(confusion_bow))
print((
'Linear SVC accuracy BoW: {0:.2f} %\n'
).format(cross_val_score(LSVC_tuned_bow, X_test_bow, y_test_bow,cv=kf).mean()*100))
Explanation: Although from a computational perspective it requires more effort, it presents better results than the previous algorithms. In this case, nearly 73% has been achieved, competing against the multiclass algorithm in terms of accuracy but not in terms of computational effort.
End of explanation
# Initialize and fit the model.
LSVC_tfidf = LinearSVC(class_weight='balanced', multi_class = 'crammer_singer')
#Tune hyperparameters
#Create range of values to fit parameters
loss_param = ['hinge','squared_hinge']
C_param = [0.1, 1, 10, 100]
#Fit parameters
parameters = {
'loss': loss_param,
'C': C_param}
#Fit parameters using gridsearch
LSVC_tuned_tfidf = GridSearchCV(LSVC_tfidf, param_grid=parameters, n_jobs = -1, cv=kf, verbose = 1)
#Fit the tunned classifier in the training space
LSVC_tuned_tfidf.fit(X_train_tfidf, y_train_tfidf)
#Print the best parameters
print(('Best paramenters Linear SVC Tfidf:\n {}\n').format(LSVC_tuned_tfidf.best_params_))
Explanation: The algorithm presents overfitting, as can be seen from the classification report. Although recall and precision appear as one, in reality they are lower than one, with an overall accuracy of 79.37%. Furthermore, the time required to fit the dataset is higher than the one required with the Naive Bayes algorithms.
Tf-idf
A linear support vector classifier has been set up and tuned on the training data and run on the test set. The hyperparameters that have been tuned are:
C parameter, acting on the margin hyperplane having a bigger margin when C is smaller. (The value of C will tell the SVM how much misclassification is to be avoided).
The loss parameter.
In this case the Crammer-Singer algorithm is used to solve the multiclass classification problem. This algorithm optimizes a joint objective over all classes, but it is not interesting from a production standpoint as it rarely leads to better accuracy and is more expensive to compute. Due to the size of the feature space, the linear SVC has been used instead of the SVC for computational reasons.
End of explanation
#Once the model has been trained test it on the test dataset
LSVC_tuned_tfidf.fit(X_test_tfidf, y_test_tfidf)
# Predict on test set
predtest_y_tfidf = LSVC_tuned_tfidf.predict(X_test_tfidf)
Explanation: Once the parameters have been tuned, the model is fit on the testing dataset.
End of explanation
#Evaluation of the model (testing)
target_names = ['0.0', '1.0', '2.0', '3.0']
print(('Classification Report Tfidf: \n {}\n').format(
classification_report(y_test_tfidf, predtest_y_tfidf,
target_names=target_names)))
confusion_tfidf = confusion_matrix(y_test_tfidf, predtest_y_tfidf)
print((
'Confusion Matrix Tfidf: \n\n {}\n\n').format(confusion_tfidf))
print((
'Linear SVC accuracy Tfidf: {0:.2f} %\n'
).format(cross_val_score(LSVC_tuned_tfidf, X_test_tfidf, y_test_tfidf,cv=kf).mean()*100))
Explanation: Although from a computational perspective it requires more effort, it presents better results than the previous algorithms. In this case, nearly 79% has been achieved, competing against the multiclass algorithm in terms of accuracy but not in terms of computational effort.
End of explanation |
307 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Serving ML Predictions in batch and real-time
Learning Objectives
1. Copy trained model into your bucket
2. Deploy AI Platform trained model
Introduction
In this notebook, we will create a prediction service that calls your trained model deployed in Cloud to serve predictions.
Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook.
Copy trained model
Set necessary variables
Step1: Create a bucket and copy trained model in it
Step2: Deploy trained model
We'll now deploy our model. This will take a few minutes. Once the cell below completes, you should be able to see your newly deployed model in the 'Models' portion of the AI Platform section of the GCP console. | Python Code:
PROJECT = "cloud-training-demos" # Replace with your PROJECT
BUCKET = PROJECT
REGION = "us-central1" # Choose an available region for Cloud MLE
TFVERSION = "2.6" # TF version for CMLE to use
import os
os.environ["BUCKET"] = BUCKET
os.environ["PROJECT"] = PROJECT
os.environ["REGION"] = REGION
os.environ["TFVERSION"] = TFVERSION
Explanation: Serving ML Predictions in batch and real-time
Learning Objectives
1. Copy trained model into your bucket
2. Deploy AI Platform trained model
Introduction
In this notebook, we will create a prediction service that calls your trained model deployed in Cloud to serve predictions.
Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook.
Copy trained model
Set necessary variables
End of explanation
%%bash
if ! gsutil ls -r gs://${BUCKET} | grep -q gs://${BUCKET}/babyweight/trained_model/; then
gsutil mb -l ${REGION} gs://${BUCKET}
# copy canonical model if you didn't do previous notebook
# TODO
gsutil -m cp -R gs://cloud-training-demos/babyweight/trained_model gs://${BUCKET}/babyweight
fi
Explanation: Create a bucket and copy trained model in it
End of explanation
%%bash
# Set necessary variables:
MODEL_NAME="babyweight"
MODEL_VERSION="ml_on_gcp"
MODEL_LOCATION=$(gsutil ls gs://${BUCKET}/babyweight/export/exporter/ | tail -1)
# Set the region to global by executing the following command:
gcloud config set ai_platform/region global
echo "Deploying the model '$MODEL_NAME', version '$MODEL_VERSION' from $MODEL_LOCATION"
echo "... this will take a few minutes"
# Deploy trained model:
gcloud ai-platform models create ${MODEL_NAME} --regions $REGION
# Create a new AI Platform version.
# TODO
gcloud ai-platform versions create ${MODEL_VERSION} \
--model ${MODEL_NAME} \
--origin ${MODEL_LOCATION} \
--runtime-version $TFVERSION
Explanation: Deploy trained model
We'll now deploy our model. This will take a few minutes. Once the cell below completes, you should be able to see your newly deployed model in the 'Models' portion of the AI Platform section of the GCP console.
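Once the version is live, an online prediction can be requested against it. A minimal Python sketch using the Google API client; the instance fields below are assumptions about the exported model's serving signature, so adjust them to match your export:
from googleapiclient import discovery
# Hypothetical input record; field names must match the model's serving signature.
instances = [{"is_male": "True", "mother_age": 26.0,
              "plurality": "Single(1)", "gestation_weeks": 39}]
service = discovery.build("ml", "v1")
name = "projects/{}/models/{}/versions/{}".format(PROJECT, "babyweight", "ml_on_gcp")
response = service.projects().predict(name=name, body={"instances": instances}).execute()
print(response)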
End of explanation |
308 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Variables
resources used - http
Step1: Running the graph in a tf session
Step2: Section 2 - moving average | Python Code:
import tensorflow as tf
x = tf.constant(35, name='x')
y = tf.Variable(x + 5, name='y')
model = tf.global_variables_initializer()
Explanation: Variables
resources used - http://learningtensorflow.com/lesson2/
Section 1 - a simple representation
A simple representation of variables and constants in a tf graph
End of explanation
with tf.Session() as session:
session.run(model)
print(session.run(y))
Explanation: Running the graph in a tf session
End of explanation
import tensorflow as tf
x = tf.Variable(0, name='x')
model = tf.global_variables_initializer()
with tf.Session() as session:
session.run(model)
for i in range(5):
x = x + 1
avg = x/(i+1)
print('x = ',session.run(x))
print('moving avg = ',session.run(avg))
Explanation: Section 2 - moving average
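For comparison, a minimal sketch (assuming the same TF 1.x API used above) that advances the counter with an explicit in-place variable update instead of rebuilding the graph on every iteration:
import tensorflow as tf
x = tf.Variable(0, name='x')
increment = tf.assign_add(x, 1)            # op that adds 1 to the variable in place
model = tf.global_variables_initializer()
with tf.Session() as session:
    session.run(model)
    for i in range(5):
        session.run(increment)
        value = session.run(x)
        print('x = ', value)
        print('moving avg = ', value / (i + 1))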
End of explanation |
309 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Algorithm Re-Assessment
Introduction
Step1: Inspired by the Classifier comparison from the SciKit Example, we are trying to see which algorithms work better.
Due to the heaviness of the data, we are avoiding checking the Linear SVM, RBF SVM and Gaussian Process classifiers | Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import pickle
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import train_test_split, GridSearchCV, RandomizedSearchCV
from sklearn.ensemble import RandomForestClassifier
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier
from scripts.tools import df_check_stats, game, sam_pickle_save, sam_pickle_load
np.set_printoptions(precision=5)
np.random.seed(69572)
plt.style.use('ggplot')
sns.set(color_codes=True)
crazy_list = dir()
%matplotlib inline
for each in dir():
if each not in crazy_list:
del each
print('Length of dir():', len(dir()))
Explanation: Algorithm Re-Assessment
Introduction:
Using the data gathered from Taarifa and the Tanzanian Ministry of Water, can we predict which pumps are functional, which need some repairs, and which don't work at all? Predicting one of these three classes, based on a smart understanding of which waterpoints will fail, can improve the maintenance operations and ensure that clean, potable water is available to communities across Tanzania.
This is also an intermediate-level competition by DataDriven! All code & support scripts are in Github Repo
End of explanation
X, y, TEST_X = sam_pickle_load(prefix="tmp/Iteration2_vt_kb_")
df_check_stats(X, y, TEST_X)
clf = RandomForestClassifier(random_state=192)
scores = cross_val_score(clf, X, y, cv=5, n_jobs=-1)
print('AC Score:', scores.mean())
# preprocess dataset, split into training and test part
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.25, random_state=42)
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
names = ["Nearest Neighbors",
# "Linear SVM",
# "RBF SVM",
# "Gaussian Process",
"Decision Tree", "Random Forest", "Neural Net", "AdaBoost",
"Naive Bayes", "QDA", "GBT",]
classifiers = [
KNeighborsClassifier(3),
# SVC(kernel="linear", C=0.025),
# SVC(gamma=2, C=1),
# GaussianProcessClassifier(1.0 * RBF(1.0), warm_start=True),
DecisionTreeClassifier(),
RandomForestClassifier(),
MLPClassifier(),
AdaBoostClassifier(),
GaussianNB(),
QuadraticDiscriminantAnalysis(),
GradientBoostingClassifier()]
# iterate over classifiers
te_scores = []
tr_scores = []
for name, clf in zip(names, classifiers):
clf.fit(X_train, y_train)
tr_scores.append(clf.score(X_train, y_train))
te_scores.append(clf.score(X_test, y_test))
plt.figure(figsize=(12, 3))
plt.plot(range(len(names)), tr_scores)
plt.plot(range(len(names)), te_scores)
plt.xticks(range(len(names)), names, fontsize=8)
plt.yticks(fontsize=8)
plt.ylabel('Scores', fontsize=8)
# plt.xlabel('Alog', fontsize=8)
plt.title('Classifiers Performance on Pump it Data', fontsize=9)
lines_scores = te_scores.copy()
lines_scores.sort()
plt.plot((0, len(tr_scores) -1 ), (lines_scores[-1], lines_scores[-1]), '-.')
plt.plot((0, len(tr_scores) -1 ), (lines_scores[-2], lines_scores[-2]), '--')
plt.legend(['Train Score',
'Test Score',
'Top Test Score',
'2nd Top Test Score'], fontsize=8)
te_scores
Explanation: Inspired by the Classifier comparison from the SciKit Example, we are trying to see which algorithms work better.
Due to the heaviness of the data, we are avoiding checking the Linear SVM, RBF SVM and Gaussian Process classifiers.
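If a linear-margin baseline is still wanted without the quadratic cost of kernel SVC, a hypothetical extension (not part of the original run) is to drop in sklearn's LinearSVC, which scales roughly linearly with the number of samples:
from sklearn.svm import LinearSVC
# Hypothetical cheap stand-in for the skipped linear-kernel SVC
linear_baseline = LinearSVC(C=0.025, max_iter=5000)
print('LinearSVC CV score:', cross_val_score(linear_baseline, X, y, cv=5, n_jobs=-1).mean())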
End of explanation |
310 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
CHIPPR
This notebook demonstrates the use of the Cosmological Hierarchical Inference with Probabilistic Photometric Redshifts (CHIPPR) package to estimate population distributions based on a catalog of probability distributions.
The package supports two primary objectives
Step1: Simulation
Many of chippr's modules are used to produce mock catalogs of individual posterior probability distributions.
To create a catalog, we must first define a true redshift distribution function. It may be a Gaussian distribution of the gauss class, a Gaussian mixture distribution of the gmix class, or a binned discrete distribution of the discrete class. In this case, we will consider a mixture of three Gaussian distributions to represent the true redshift distribution.
Step2: chippr supports the use of a parameter file to specify various options for the catalog simulator and inference module to turn on and off.
Step3: To make a catalog, we must specify an interim prior redshift distribution $n^{*}(z)$, regardless of what quantity we wish to infer using the catalog. So far, only discrete distributions are supported, but this will soon be changed. The simplest discrete distribution is uniform.
Step4: Now we are ready to make a catalog. To do this we instantiate a catalog object and then create a catalog of individual posterior distributions based on the true redshift distribution and the interim prior. By default, the catalog generator will make some informative plots along the way. The included plots are a histogram of the true redshifts and a scatterplot of the true redshifts and the centers of the individual Gaussian posteriors. Support for other posterior forms will be added soon. Additionally, the catalog is expressed as normalized binned histogram heights. Support for other parametrizations of the individual posteriors may be added in the future.
Step5: We can also plot a histogram of the centers of the individual Gaussian posteriors, a binned version of the true redshift distribution, and the $n(z)$ resulting from stacking the individual posteriors.
Step6: It is also informative to see what a few individual likelihoods and binned posteriors look like.
Step7: We finish by saving the data as a plaintext file. Support for more file formats will be added soon.
Step8: Inference
chippr currently contains one inference module to probe the posterior distribution of parameters defining the redshift distribution function.
To perform inference, we must create a catalog object. This may be done by making a new catalog as is done above or by reading in an existing catalog file.
Step9: The catalog file contains three components
Step10: The prior distribution must be a mvn object, defined by a mean vector and covariance matrix over the parameters defining the redshift distribution. In this case, it is intuitive to use the definition of the binning strategy to create the prior distribution since the parameters are normalized histogram bin heights, the same parametrization used for the catalog entries themselves.
Step11: We create a log_z_dens object from the dictionary of catalog parameters and the prior distribution. We include the optional specification of the true distribution, since it is available in this case. We also include a parameter file that may contain default constants for the inference. Now the log_z_dens object can plot some samples from the prior so we can see what it looks like.
Step12: We perform calculations of a few of the simplest estimators of the redshift distribution function $\hat{n}(z)$. The stacked estimator is defined as $\hat{n}(z)=\frac{1}{N}\sum p(z|\vec{d},n^{*}(z))$. The marginalized maximum a posteriori estimator is defined as $\hat{n}(z)=\hat{n}({argmax[p(z|\vec{d},n^{*}(z))]})$. The marginalized expected value estimator is defined as $\hat{n}(z)=\hat{n}({E[p(z|\vec{d},n^{*}(z))]})$.
Step13: The log_z_dens object enables easy comparison between estimators using the Kullback-Leibler Divergences (when the true distribution is available) and root-mean-square differences.
We may next calculate the marginalized maximum likelihood estimator (which actually returns the parameters maximizing the posterior probability).
Step14: If we are very ambitious, we can run an MCMC sampler (currently use of emcee is supported, but other samplers may be added in the future) to probe the posterior distribution of the parameter values. To do this, we initialize the sampler with samples from the prior distribution.
Step15: The log_z_dens object stores the estimators that have been calculated as well as all metadata associated with the posterior samples. The storage of the metadata and samples will soon be eliminated in favor of saved files, as that information may necessitate a great deal of memory.
Step16: Currently, the results of all previously calculated estimators (and the true redshift density function, if it was provided) may be plotted automatically.
Step17: The log_z_dens object supports writing the information associated with the estimators to a file in the pickle format, though other formats may be added in the future.
Step18: Here we demonstrate that the written estimators may be loaded from files as well for future use. | Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import timeit
import cProfile, pstats, StringIO
import os
import chippr
help(chippr)
Explanation: CHIPPR
This notebook demonstrates the use of the Cosmological Hierarchical Inference with Probabilistic Photometric Redshifts (CHIPPR) package to estimate population distributions based on a catalog of probability distributions.
The package supports two primary objectives: simulation of catalogs and inference of posterior distributions over parameters defining population distributions.
End of explanation
true_amps = np.array([0.20, 0.35, 0.55])
true_means = np.array([0.5, 0.2, 0.75])
true_sigmas = np.array([0.4, 0.2, 0.1])
n_mix_comps = len(true_amps)
true_funcs = []
for c in range(n_mix_comps):
true_funcs.append(chippr.gauss(true_means[c], true_sigmas[c]**2))
true_nz = chippr.gmix(true_amps, true_funcs, limits=(0., 1.))
Explanation: Simulation
Many of chippr's modules are used to produce mock catalogs of individual posterior probability distributions.
To create a catalog, we must first define a true redshift distribution function. It may be a Gaussian distribution of the gauss class, a Gaussian mixture distribution of the gmix class, or a binned discrete distribution of the discrete class. In this case, we will consider a mixture of three Gaussian distributions to represent the true redshift distribution.
End of explanation
param_loc = 'params.txt'
params = chippr.utils.ingest(param_loc)
params = chippr.defaults.check_sim_params(params)
print(params)
Explanation: chippr supports the use of a parameter file to specify various options for the catalog simulator and inference module to turn on and off.
End of explanation
bin_ends = np.array([params['bin_min'], params['bin_max']])
weights = np.array([1.])
int_prior = chippr.discrete(bin_ends, weights)
Explanation: To make a catalog, we must specify an interim prior redshift distribution $n^{*}(z)$, regardless of what quantity we wish to infer using the catalog. So far, only discrete distributions are supported, but this will soon be changed. The simplest discrete distribution is uniform.
End of explanation
results_loc = os.path.join(os.path.join(os.path.join(os.path.join('..', '..'), 'research'), 'results'), 'demo')
posteriors = chippr.catalog(params=param_loc, loc=results_loc)
output = posteriors.create(true_nz, int_prior, N=params['n_gals'])
data = np.exp(output['log_interim_posteriors'])
Explanation: Now we are ready to make a catalog. To do this we instantiate a catalog object and then create a catalog of individual posterior distributions based on the true redshift distribution and the interim prior. By default, the catalog generator will make some informative plots along the way. The included plots are a histogram of the true redshifts and a scatterplot of the true redshifts and the centers of the individual Gaussian posteriors. Support for other posterior forms will be added soon. Additionally, the catalog is expressed as normalized binned histogram heights. Support for other parametrizations of the individual posteriors may be added in the future.
End of explanation
plt.hist(posteriors.samps.T[1], bins=100, normed=True, color="k")
plt.plot(posteriors.z_coarse, true_nz.evaluate(posteriors.z_coarse), "r-")
plt.plot(posteriors.z_coarse, np.sum(data, axis=0) / 10**params['n_gals'], "go")
plt.xlabel("z")
Explanation: We can also plot a histogram of the centers of the individual Gaussian posteriors, a binned version of the true redshift distribution, and the $n(z)$ resulting from stacking the individual posteriors.
End of explanation
for n, z in enumerate(data[:10]):
plt.plot(posteriors.z_coarse, data[n], 'ko')
plt.plot(posteriors.z_fine, posteriors.obs_lfs[n], 'k-')
plt.show()
Explanation: It is also informative to see what a few individual likelihoods and binned posteriors look like.
End of explanation
saved_location = 'data'
saved_type = '.txt'
posteriors.write(loc=saved_location, style=saved_type)
Explanation: We finish by saving the data as a plaintext file. Support for more file formats will be added soon.
End of explanation
param_loc = 'params.txt'
results_loc = os.path.join(os.path.join(os.path.join(os.path.join('..', '..'), 'research'), 'results'), 'demo')
simulated_posteriors = chippr.catalog(params=param_loc, loc=results_loc)
saved_location = 'data'
saved_type = '.txt'
data = simulated_posteriors.read(loc=saved_location, style=saved_type)
Explanation: Inference
chippr currently contains one inference module to probe the posterior distribution of parameters defining the redshift distribution function.
To perform inference, we must create a catalog object. This may be done by making a new catalog as is done above or by reading in an existing catalog file.
End of explanation
zs = data['bin_ends']
log_nz_intp = data['log_interim_prior']
log_z_posts = data['log_interim_posteriors']
z_difs = zs[1:]-zs[:-1]
z_mids = (zs[1:]+zs[:-1])/2.
n_bins = len(z_mids)
Explanation: The catalog file contains three components: the bin_ends, the log_interim_prior, and the log_interim_posteriors. The bin endpoints can be processed to enable their use in constructing a prior distribution over the parameters determining the redshift distribution function.
End of explanation
# prior_sigma = 0.16
# prior_var = np.eye(n_bins)
# for b in range(n_bins):
# prior_var[b] = 1. * np.exp(-0.5 * (z_mids[b] - z_mids) ** 2 / prior_sigma ** 2)
# l = 1.e-4
# prior_var = prior_var+l*np.identity(n_bins)
a = 1.# / n_bins
b = 20.#1. / z_difs ** 2
c = 0.
prior_var = np.eye(n_bins)
for k in range(n_bins):
prior_var[k] = a * np.exp(-0.5 * b * (z_mids[k] - z_mids) ** 2)
prior_var += c * np.identity(n_bins)
prior_mean = log_nz_intp
prior = chippr.mvn(prior_mean, prior_var)
Explanation: The prior distribution must be a mvn object, defined by a mean vector and covariance matrix over the parameters defining the redshift distribution. In this case, it is intuitive to use the definition of the binning strategy to create the prior distribution since the parameters are normalized histogram bin heights, the same parametrization used for the catalog entries themselves.
End of explanation
nz = chippr.log_z_dens(data, prior, truth=true_nz, vb=True)
prior_samples = prior.sample(7)
chippr.log_z_dens_plots.plot_ivals(prior_samples, nz.info, nz.plot_dir)
Explanation: We create a log_z_dens object from the dictionary of catalog parameters and the prior distribution. We include the optional specification of the true distribution, since it is available in this case. We also include a parameter file that may contain default constants for the inference. Now the log_z_dens object can plot some samples from the prior so we can see what it looks like.
End of explanation
nz_stacked = nz.calculate_stacked()
nz_mmap = nz.calculate_mmap()
nz_mexp = nz.calculate_mexp()
Explanation: We perform calculations of a few of the simplest estimators of the redshift distribution function $\hat{n}(z)$. The stacked estimator is defined as $\hat{n}(z)=\frac{1}{N}\sum p(z|\vec{d},n^{*}(z))$. The marginalized maximum a posteriori estimator is defined as $\hat{n}(z)=\hat{n}({argmax[p(z|\vec{d},n^{*}(z))]})$. The marginalized expected value estimator is defined as $\hat{n}(z)=\hat{n}({E[p(z|\vec{d},n^{*}(z))]})$.
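For intuition, the stacked estimator can be reproduced directly from the interim posteriors loaded above; a sketch of the averaging it performs (an assumption about what calculate_stacked does internally, useful as a cross-check):
import numpy as np
# average the per-galaxy interim posteriors over the whole catalog
nz_stacked_manual = np.mean(np.exp(log_z_posts), axis=0)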
End of explanation
nz_mmle = nz.calculate_mmle(nz_stacked)
Explanation: The log_z_dens object enables easy comparison between estimators using the Kullback-Leibler Divergences (when the true distribution is available) and root-mean-square differences.
We may next calculate the marginalized maximum likelihood estimator (which actually returns the parameters maximizing the posterior probability).
End of explanation
n_ivals = 2*n_bins
initial_values = prior.sample(n_ivals)
nz_samps = nz.calculate_samples(initial_values)
nz_stats = nz.compare()
Explanation: If we are very ambitious, we can run an MCMC sampler (currently use of emcee is supported, but other samplers may be added in the future) to probe the posterior distribution of the parameter values. To do this, we initialize the sampler with samples from the prior distribution.
End of explanation
nz.info['estimators'].keys()
Explanation: The log_z_dens object stores the estimators that have been calculated as well as all metadata associated with the posterior samples. The storage of the metadata and samples will soon be eliminated in favor of saved files, as that information may necessitate a great deal of memory.
End of explanation
nz.plot_estimators()
Explanation: Currently, the results of all previously calculated estimators (and the true redshift density function, if it was provided) may be plotted automatically.
End of explanation
nz.write('nz.p')
Explanation: The log_z_dens object supports writing the information associated with the estimators to a file in the pickle format, though other formats may be added in the future.
End of explanation
nz.info = nz.read('nz.p')
print(nz)
Explanation: Here we demonstrate that the written estimators may be loaded from files as well for future use.
End of explanation |
311 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Outline
Glossary
Positional Astronomy
Previous
Step1: Import section specific modules
Step2: 3.3 Horizontal Coordinates (ALT,AZ)
3.3.1 Coordinate Definitions
In $\S$ 3.2.1 ➞ we introduced the concept of an hour angle, which allows us to determine the time that still needs to elapse before a source crosses the local meridian. This however does not tell us where we should point a telescope on earth in order to observe a source with a specific hour angle. The horizontal coordinates azimuth $\mathcal{A}$ and altitude $\mathcal{E}$ (elevation) are used to enable an observer on earth to locate celestial objects in the observer's local sky. The observer's horizontal plane is the fundamental plane of this coordinate system and is known as the celestial horizon. The azimuth angle is measured in the celestial horizon from due north towards the east, while the altitude of a celestial object is the angle between it and the celestial horizon. Both azimuth and elevation are measured in degrees. The azimuth and elevation angles are depicted in Fig. 3.3.1 ⤵ <!--\ref{pos
Step3: Figure 3.3.3
Step4: Figure 3.3.4 | Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import HTML
HTML('../style/course.css') #apply general CSS
Explanation: Outline
Glossary
Positional Astronomy
Previous: 3.2 Hour Angle (HA) and Local Sidereal Time (LST)
Next: 3.4 Direction Cosine Coordinates ($l,m,n$)
Import standard modules:
End of explanation
from IPython.display import HTML
HTML('../style/code_toggle.html')
import ephem
import matplotlib
%pylab inline
pylab.rcParams['figure.figsize'] = (15, 10)
Explanation: Import section specific modules:
End of explanation
#Creating the observer: KAT-7
KAT7 = ephem.Observer()
KAT7.lat = '-30:43:17'
KAT7.lon = '21:25:40.08'
KAT7.elevation = 0.0
KAT7.date = '2016/5/30 00:00:00' #UTC
#Creating the celestial bodies
star_names = np.array(["Rigel","Thuban","Mimosa","Procyon","Sirius","Achernar","Menkar","Zaurak","Aldebaran","Betelgeuse"])
star_objects = np.empty((len(star_names),),dtype=object)
for k in xrange(len(star_names)):
star_objects[k] = ephem.star(star_names[k],KAT7)
#Creating the time-strings at which we observe
hours = np.empty((96,),dtype=object)
minutes = np.empty((96,),dtype=object)
alt_az_mat = np.zeros((len(star_names),len(hours)+1,2),dtype=float) #(sources,hours,horz_coord)
hours_c = 0
for k in xrange(len(hours)):
if k % 4 == 0:
if hours_c < 10:
hours[k] = '0'+str(hours_c)
else:
hours[k] = str(hours_c)
minutes[k] = "00"
elif k % 4 == 1:
if hours_c < 10:
hours[k] = '0'+str(hours_c)
else:
hours[k] = str(hours_c)
minutes[k] = "15"
elif k % 4 == 2:
if hours_c < 10:
hours[k] = '0'+str(hours_c)
else:
hours[k] = str(hours_c)
minutes[k] = "30"
elif k % 4 == 3:
if hours_c < 10:
hours[k] = '0'+str(hours_c)
else:
hours[k] = str(hours_c)
hours_c = hours_c + 1
minutes[k] = "45"
#Compute the alt/az for different stars observed by KAT-7 at different times on 2016/5/30
for k in xrange(len(hours)):
#Set new time
n_date = '2016/5/30 ' + hours[k] + ':' + minutes[k] + ':00'
KAT7.date = n_date
#Calculate new alt/az
for j in xrange(len(star_names)):
star_objects[j].compute(KAT7)
alt_az_mat[j,k,0] = float(star_objects[j].alt)
alt_az_mat[j,k,1] = float(star_objects[j].az)
#Copy first value to last value
alt_az_mat[:,-1,:] = alt_az_mat[:,0,:]
time_v = np.linspace(0,24,len(hours)+1,endpoint=True)
#Plot alt
matplotlib.rcParams.update({'font.size': 13.75})
fig, ax = plt.subplots()
c = ["r","b","g","y","m","c","k"]
l = ["-","--"]
l_ind = 0
c_ind = 0
for k in xrange(len(star_names)):
if c_ind == 7:
c_ind = 0
l_ind = 1
mask = np.logical_not(np.logical_and(alt_az_mat[k,:,0]*(180/np.pi)>-5,alt_az_mat[k,:,0]*(180/np.pi)<5))
new_curve_y = alt_az_mat[k,mask,0]*(180/np.pi)
new_curve_x = time_v[mask]
ax.plot(new_curve_x,new_curve_y,c[c_ind]+l[l_ind],label=star_names[k],lw=2,zorder=k)
c_ind = c_ind +1
ax.fill_between(time_v, -5, 5, facecolor='k',alpha=1,zorder=k+1)
ax.annotate("HORIZON", xy = (11.5,5), xytext=(11.5, 15),arrowprops=dict(facecolor="b", shrink=1))
ax.legend()
ax.set_xlim([0,24])
ax.set_ylim([-90,90])
ticks = np.array([-90,-80,-70,-60,-50,-40,-30,-20,-10,0,10,20,30,40,50,60,70,80,90])
plt.yticks(ticks)
ticks = np.array([0,2,4,6,8,10,12,14,16,18,20,22,24])
plt.xticks(ticks)
plt.xlabel("UTC [$h$]")
plt.ylabel("Altitude [$^{\circ}$]")
plt.title("KAT-7: 2016/5/30")
labels = [item.get_text() for item in ax.get_yticklabels()]
labels = np.array(["-90$^{\circ}$","-80$^{\circ}$","-70$^{\circ}$","-60$^{\circ}$","-50$^{\circ}$","-40$^{\circ}$","-30$^{\circ}$","-20$^{\circ}$","-10$^{\circ}$","0$^{\circ}$","10$^{\circ}$","20$^{\circ}$","30$^{\circ}$","40$^{\circ}$","50$^{\circ}$","60$^{\circ}$","70$^{\circ}$","80$^{\circ}$","90$^{\circ}$"])
ax.set_yticklabels(labels)
ax.grid('on')
Explanation: 3.3 Horizontal Coordinates (ALT,AZ)
3.3.1 Coordinate Definitions
In $\S$ 3.2.1 ➞ we introduced the concept of an hour angle, which allows us to determine the time that still needs to elapse before a source crosses the local meridian. This however does not tell us where we should point a telescope on earth in order to observe a source with a specific hour angle. The horizontal coordinates azimuth $\mathcal{A}$ and altitude $\mathcal{E}$ (elevation) are used to enable an observer on earth to locate celestial objects in the observer's local sky. The observer's horizontal plane is the fundamental plane of this coordinate system and is known as the celestial horizon. The azimuth angle is measured in the celestial horizon from due north towards the east, while the altitude of a celestial object is the angle between it and the celestial horizon. Both azimuth and elevation are measured in degrees. The azimuth and elevation angles are depicted in Fig. 3.3.1 ⤵ <!--\ref{pos:fig:horizontal}-->
<img src='figures/horizontal.svg' width=40%>
Figure 3.3.1: The horizontal coordinates.<a id='pos:fig:horizontal'></a> <!--\label{pos:fig:horizontal}-->
The equations below allow us to convert between equatorial and horizontal coordinates
<p class=conclusion>
<font size=4><b> Converting between equatorial and horizontal </b></font>
<br>
<br>
\begin{eqnarray}
\cos\delta\cos H &=& \cos L_a\sin \mathcal{E} - \sin L_a\cos \mathcal{E}\cos \mathcal{A}\\
-\cos\delta\sin H&=& \cos \mathcal{E}\sin \mathcal{A}\\
\sin\delta &=& \sin L_a\sin \mathcal{E}+\cos L_a \cos \mathcal{E} \cos \mathcal{A}
\end{eqnarray}
</p>
<div class=advice>
<b>Note:</b> In the conversion equations above $L_a$ denotes latitude (see <a href='3_1_Equatorial_Coordinates.ipynb'>$\S$ 3.1 ➞</a>).
</div>
The above equations were derived by applying the spherical trigonometric identities in $\S$ 2.13 ➞ <!--\ref{math:sec:st}--> to
the triangle $\Delta PSZ$ which is depicted in Fig. 3.3.2 ⤵ <!--\ref{pos:fig:conversion_alaz_radec}--> (see Appendix ➞).
<img src='figures/conversion.svg' width=40%>
Figure 3.3.2: The source-celestial pole-zenith triangle; which enables us to derive the conversion equations between horizontal and equatorial coordinates. The red plane represents the fundamental plane of the horizontal coordinate system, while the blue plane represents the
fundamental plane of the celestial coordinate system. <a id='pos:fig:conversion_alaz_radec'></a> <!--\label{pos:fig:conversion_alaz_radec}-->
<div class=advice>
<b>Note:</b> The parallactic angle $q$ associated with a specific location on the celestial sphere $S$ is the angle between two great circles; the hour circle of $S$ and the great circle that passes through zenith and $S$. The parallactic angle $q$ is depicted in <a href='#pos:fig:conversion_alaz_radec'>Fig. 3.3.2 ⤵</a>. <!--\ref{pos:fig:conversion_alaz_radec}-->
The parallactic angle, and how it pertains to radio interferometry is discussed in more detail in <a href='../7_Observing_Systems/7_7_antenna_mounts_and_parallactic_angle.ipynb'>$\S$ 7.7 ➞</a>.
</div>
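Before moving to the examples, here is a quick numerical check of the conversion equations above (an aside with made-up angles, all in radians), recovering the hour angle and declination from a given azimuth, altitude and latitude:
import numpy as np
A, E, La = np.radians(45.0), np.radians(30.0), np.radians(-30.72)   # hypothetical azimuth, altitude, latitude
sin_dec = np.sin(La)*np.sin(E) + np.cos(La)*np.cos(E)*np.cos(A)
dec = np.arcsin(sin_dec)
H = np.arctan2(-np.cos(E)*np.sin(A), np.cos(La)*np.sin(E) - np.sin(La)*np.cos(E)*np.cos(A))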
3.3.2 Examples
Let us cement the concepts we have learned in this section by once again making use of the pyephem package. In this section we will use it to compute the horizontal coordinates for two primary use cases. In the first use case we plot the horizontal coordinates of a few randomly selected stars under the assumption that they are "observed" with KAT7. We will compute the horizontal coordinates of the selected stars for one entire day (2016/5/30). As we have already mentioned, the horizontal coordinates of the stars change during the course of one day, since the earth is rotating around its own axis. To achieve this we first create a pyephem observer object acting as a proxy for the KAT-7 array. Next we create pyephem body objects for the randomly selected stars. Each of the body objects has a method called compute. This compute method can take in an observer object. The compute method of the body object uses the geometrical location and the date attributes of the observer object to calculate the horizontal coordinates of the celestial body (in this case a star) the body object embodies. To track the change of the horizontal coordinates of the stars we are interested in, we only need to iteratively call the compute methods of the body objects associated with them. Every time we call the compute method we just pass in an observer object with an appropriately altered date attribute. The code snippet below implements the above procedure. The altitude and azimuth angles of ten well-known stars, calculated with pyephem, are depicted in Fig. 3.3.3 ⤵ <!--\ref{pos:fig:alt_stars}--> and Fig. 3.3.4 ⤵ <!--\ref{pos:fig:az_stars}-->.
End of explanation
#Plot az
matplotlib.rcParams.update({'font.size': 13.75})
fig, ax = plt.subplots()
c = ["r","b","g","y","m","c","k"]
l = ["-","--"]
l_ind = 0
c_ind = 0
for i in xrange(10):
if c_ind == 7:
c_ind = 0
l_ind = 1
plt.plot(time_v,alt_az_mat[i,:,1]*(180/np.pi),c[c_ind]+l[l_ind],lw=2,label=star_names[i])
c_ind = c_ind +1
ax.legend()
ax.set_xlim([0,24])
ax.set_ylim([0,360])
ticks = np.array([0,60,120,180,240,300,360])
plt.yticks(ticks)
ticks = np.array([0,2,4,6,8,10,12,14,16,18,20,22,24])
plt.xticks(ticks)
plt.xlabel("UTC [$h$]")
plt.ylabel("Azimuth [$^{\circ}$]")
plt.title("KAT-7: 2016/5/30")
labels = [item.get_text() for item in ax.get_yticklabels()]
labels = np.array(["0$^{\circ}$","60$^{\circ}$","120$^{\circ}$","180$^{\circ}$","240$^{\circ}$","300$^{\circ}$","360$^{\circ}$"])
ax.set_yticklabels(labels)
ax.grid('on')
Explanation: Figure 3.3.3: The altitude angle of ten well-known stars during 2016/5/30 as observed by the KAT-7 array. The altitude angle was computed by employing pyephem. The peaks of the curves indicate the times at which the stars were at transit. The black rectangle represents the fundamental horizon. Any star that stays below the horizon would not be observable at all (see the curve associated with Thuban for an example). Any star that stays above the horizon for the entire day is a circumpolar star. Mimosa can almost be classified as a circumpolar star. <a id='pos:fig:alt_stars'></a> <!--\label{pos:fig:alt_stars}-->
We have not yet plotted the azimuth coordinates for the randomly selected stars. We do so by using the code snippet below.
End of explanation
#Preliminaries
matplotlib.rcParams.update({'font.size': 13.75})
observatories = ["LOFAR","KAT7","MWA","VLA","ALMA","GMRT"]
lat_v = ["52:54:32","-30:43:17","-26:42:12","34:04:43","-23:01:09","19:05:47"]
lon_v = ["06:52:08","21:25:40.08","116:40:16","-107:37:05","-67:45:12","74:02:59"]
alt_az = np.zeros((len(observatories),2),dtype=float)
#Loading different observatories and calculating alt/az of Betelgeuse for each of them
for k in xrange(len(observatories)):
obs = ephem.Observer()
obs.lat = lat_v[k]
obs.lon = lon_v[k]
obs.elevation = 0.0
obs.date = '2016/5/30 00:00:00' #UTC
betelgeuse = ephem.star("Betelgeuse",obs)
alt_az[k,0] = float(betelgeuse.alt)
alt_az[k,1] = float(betelgeuse.az)
#Plotting
cluster = ['o','^','>','s','*','v']
col = ['b','r','g','k','c','m']
fig, ax = plt.subplots()
for xp, yp, m, n, col_v in zip(alt_az[:,0]*(180/np.pi), alt_az[:,1]*(180/np.pi), cluster, observatories,col):
ax.plot([xp],[yp], marker=m, c = col_v, label = n, markersize = 20, linestyle='None')
ax.legend(numpoints=1)
ax.set_xlim([-90,90])
ax.set_ylim([0,360])
ticks = np.array([0,60,120,180,240,300,360])
plt.yticks(ticks)
ticks = np.array([-90,-80,-70,-60,-50,-40,-30,-20,-10,0,10,20,30,40,50,60,70,80,90])
plt.xticks(ticks)
labels = [item.get_text() for item in ax.get_yticklabels()]
labels = np.array(["0$^{\circ}$","60$^{\circ}$","120$^{\circ}$","180$^{\circ}$","240$^{\circ}$","300$^{\circ}$","360$^{\circ}$"])
ax.set_yticklabels(labels)
labels = [item.get_text() for item in ax.get_xticklabels()]
labels = np.array(["-90$^{\circ}$","-80$^{\circ}$","-70$^{\circ}$","-60$^{\circ}$","-50$^{\circ}$","-40$^{\circ}$","-30$^{\circ}$","-20$^{\circ}$","-10$^{\circ}$","0$^{\circ}$","10$^{\circ}$","20$^{\circ}$","30$^{\circ}$","40$^{\circ}$","50$^{\circ}$","60$^{\circ}$","70$^{\circ}$","80$^{\circ}$","90$^{\circ}$"])
ax.set_xticklabels(labels)
plt.xlabel("Altitude [$^{\circ}$]")
plt.ylabel("Azimuth [$^{\circ}$]")
plt.title("Betelgeuse: 2016/5/30 - 00:00:00 UTC")
ax.grid('on')
Explanation: Figure 3.3.4: The azimuth angle of ten well-known stars during 2016/5/30 as observed by the KAT-7 array. The azimuth angle was computed by employing pyephem. <a id='pos:fig:az_stars'></a> <!--\label{pos:fig:az_stars}-->
In the second use case we determine the horizontal coordinates of Betelgeuse
for different arrays around the world at a specific moment in time (2016/5/30 00:00:00). We again use pyephem to accomplish this. See the code snippet below for the exact details of how this can be achieved. We plot the main result of the code snippet in Fig. 3.3.5 ⤵ <!--\ref{pos:fig:h_betelgeuse}-->.
End of explanation |
312 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
4. Solving the model
4.1 Solow model as an initial value problem
The Solow model can be formulated as an initial value problem (IVP) as follows.
$$ \dot{k}(t) = sf(k(t)) - (g + n + \delta)k(t),\ t\ge t_0,\ k(t_0) = k_0 \tag{4.1.0} $$
The solution to this IVP is a function $k(t)$ describing the time-path of capital stock (per unit effective labor). Our objective in this section will be to explore methods for approximating the function $k(t)$. The methods we will learn are completely general and can be used to solve any IVP. Those interested in learning more about these methods should start by reading Chapter 10 of Numerical Methods for Economists by Ken Judd before proceeding to John Butcher's excellent book entitled Numerical Methods for solving Ordinary Differential Equations.
Before discussing numerical methods we should stop and consider whether or not there are any special cases (i.e., combinations of parameters) for which the initial value problem defined in 4.2.1 might have an analytic solution. Analytic results can be very useful in building intuition about the economic mechanisms at play in a model and are invaluable for debugging code.
4.2 Analytic methods
4.2.1 Analytic solution for a model with Cobb-Douglas production
The Solow model with Cobb-Douglas production happens to have a completely general analytic solution
Step1: Example
Step2: ...and we can make a plot of this solution like so...
Step3: 4.2.2 Linearized solution to general model
In general there will not be closed-form solutions for the Solow model. The standard approach to obtaining general analytical results for the Solow model is to linearize the equation of motion for capital stock (per unit effective labor). Linearizing the equation of motion of capital (per unit effective labor) amounts to taking a first-order Taylor approximation of equation 4.1.0 around its long-run equilibrium $k^*$
Step4: 4.2.3 Accuracy of the linear approximation
Step5: 4.3 Finite-difference methods
Four of the best, most widely used ODE integrators have been implemented in the scipy.integrate module (they are called dopri5, dop853, lsoda, and vode). Each of these integrators uses some type of adaptive step-size control
Step6: 4.3.2 Accuracy of finite-difference methods | Python Code:
solowpy.CobbDouglasModel.analytic_solution?
Explanation: 4. Solving the model
4.1 Solow model as an initial value problem
The Solow model can be formulated as an initial value problem (IVP) as follows.
$$ \dot{k}(t) = sf(k(t)) - (g + n + \delta)k(t),\ t\ge t_0,\ k(t_0) = k_0 \tag{4.1.0} $$
The solution to this IVP is a function $k(t)$ describing the time-path of capital stock (per unit effective labor). Our objective in this section will be to explore methods for approximating the function $k(t)$. The methods we will learn are completely general and can be used to solve any IVP. Those interested in learning more about these methods should start by reading Chapter 10 of Numerical Methods for Economists by Ken Judd before proceeding to John Butcher's excellent book entitled Numerical Methods for solving Ordinary Differential Equations.
Before discussing numerical methods we should stop and consider whether or not there are any special cases (i.e., combinations of parameters) for which the initial value problem defined in 4.2.1 might have an analytic solution. Analytic results can be very useful in building intuition about the economic mechanisms at play in a model and are invaluable for debugging code.
4.2 Analytic methods
4.2.1 Analytic solution for a model with Cobb-Douglas production
The Solow model with Cobb-Douglas production happens to have a completely general analytic solution:
$$ k(t) = \left[\left(\frac{s}{n+g+\delta}\right)\left(1 - e^{-(n + g + \delta) (1-\alpha) t}\right) + k_0^{1-\alpha}e^{-(n + g + \delta) (1-\alpha) t}\right]^{\frac{1}{1-\alpha}} \tag{4.2.0}$$
This analytic result is available via the analytic_solution method of the solow.CobbDouglasModel class.
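For intuition, here is a minimal NumPy sketch of equation 4.2.0 itself (an illustration only; the parameter names mirror the dictionary defined in the next cell, and the library's analytic_solution method remains the canonical implementation):
```python
import numpy as np

def cobb_douglas_analytic_k(t, k0, g, n, s, delta, alpha):
    # Equation 4.2.0: capital (per unit effective labor) along the analytic path
    rho = (1 - alpha) * (g + n + delta)
    return ((s / (g + n + delta)) * (1 - np.exp(-rho * t)) +
            k0**(1 - alpha) * np.exp(-rho * t))**(1 / (1 - alpha))
```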
End of explanation
# define model parameters
cobb_douglas_params = {'A0': 1.0, 'L0': 1.0, 'g': 0.02, 'n': 0.03, 's': 0.15,
'delta': 0.05, 'alpha': 0.33}
# create an instance of the solow.Model class
cobb_douglas_model = solowpy.CobbDouglasModel(params=cobb_douglas_params)
# specify some initial condition
k0 = 0.5 * cobb_douglas_model.steady_state
# grid of t values for which we want the value of k(t)
ti = np.linspace(0, 100, 10)
# generate a trajectory!
cobb_douglas_model.analytic_solution(ti, k0)
Explanation: Example: Computing the analytic trajectory
We can compute an analytic solution for our Solow model like so...
End of explanation
fig, ax = plt.subplots(1, 1, figsize=(8,6))
# compute the solution
ti = np.linspace(0, 100, 1000)
analytic_traj = cobb_douglas_model.analytic_solution(ti, k0)
# plot this trajectory
ax.plot(ti, analytic_traj[:,1], 'r-')
# equilibrium value of capital stock (per unit effective labor)
ax.axhline(cobb_douglas_model.steady_state, linestyle='dashed',
color='k', label='$k^*$')
# axes, labels, title, etc
ax.set_xlabel('Time, $t$', fontsize=20, family='serif')
ax.set_ylabel('$k(t)$', rotation='horizontal', fontsize=20, family='serif')
ax.set_title('Analytic solution to a Solow model\nwith Cobb-Douglas production',
fontsize=25, family='serif')
ax.legend(loc=0, frameon=False, bbox_to_anchor=(1.0, 1.0))
ax.grid('on')
plt.show()
Explanation: ...and we can make a plot of this solution like so...
End of explanation
# specify some initial condition
k0 = 0.5 * cobb_douglas_model.steady_state
# grid of t values for which we want the value of k(t)
ti = np.linspace(0, 100, 10)
# generate a trajectory!
cobb_douglas_model.linearized_solution(ti, k0)
Explanation: 4.2.2 Linearized solution to general model
In general there will not be closed-form solutions for the Solow model. The standard approach to obtaining general analytical results for the Solow model is to linearize the equation of motion for capital stock (per unit effective labor). Linearizing the equation of motion of capital (per unit effective labor) amounts to taking a first-order Taylor approximation of equation 4.1.0 around its long-run equilibrium $k^*$:
$$ \dot{k}(t) \approx -\lambda (k(t) - k^*),\ t \ge t_0,\ k(t_0)=k_0 \tag{4.2.1}$$
where the speed of convergence, $\lambda$, is defined as
$$ \lambda = -\frac{\partial \dot{k}(k(t))}{\partial k(t)}\bigg|_{k(t)=k^*} \tag{4.2.2} $$
The solution to the linear differential equation 4.2.1 is
$$ k(t) = k^* + e^{-\lambda t}(k_0 - k^*). \tag{4.2.3} $$
To complete the solution it remains to find an expression for the speed of convergence $\lambda$:
\begin{align}
    \lambda \equiv -\frac{\partial \dot{k}(k(t))}{\partial k(t)}\bigg|_{k(t)=k^*} =& -[sf'(k^*) - (g + n+ \delta)] \\
    =& (g + n+ \delta) - sf'(k^*) \\
    =& (g + n + \delta) - (g + n + \delta)\frac{k^*f'(k^*)}{f(k^*)} \\
    =& (1 - \alpha_K(k^*))(g + n + \delta) \tag{4.2.4}
\end{align}
where the elasticity of output with respect to capital, $\alpha_K(k)$, is
$$\alpha_K(k) = \frac{k^*f'(k^*)}{f(k^*)}. \tag{4.2.5}$$
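To make equations 4.2.3 and 4.2.4 concrete, here is a hedged sketch for the Cobb-Douglas case, where $\alpha_K(k^*) = \alpha$ and hence $\lambda = (1 - \alpha)(g + n + \delta)$ (illustration only; the linearized_solution method used above is the canonical implementation):
```python
import numpy as np

def linearized_k(t, k0, g, n, s, delta, alpha):
    # Cobb-Douglas steady state and speed of convergence (equation 4.2.4)
    k_star = (s / (g + n + delta))**(1 / (1 - alpha))
    lam = (1 - alpha) * (g + n + delta)
    # Linearized path, equation 4.2.3
    return k_star + np.exp(-lam * t) * (k0 - k_star)
```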
Example: Computing the linearized trajectory
One can compute a linear approximation of the model solution using the linearized_solution method of the solow.Model class as follows.
End of explanation
# initial condition
t0, k0 = 0.0, 0.5 * cobb_douglas_model.steady_state
# grid of t values for which we want the value of k(t)
ti = np.linspace(t0, 100, 1000)
# generate the trajectories
analytic = cobb_douglas_model.analytic_solution(ti, k0)
linearized = cobb_douglas_model.linearized_solution(ti, k0)
fig, ax = plt.subplots(1, 1, figsize=(8,6))
ax.plot(ti, analytic[:,1], 'r-', label='Analytic')
ax.plot(ti, linearized[:,1], 'b-', label='Linearized')
# demarcate k*
ax.axhline(cobb_douglas_model.steady_state, linestyle='dashed',
color='k', label='$k^*$')
# axes, labels, title, etc
ax.set_xlabel('Time, $t$', fontsize=20, family='serif')
ax.set_ylabel('$k(t)$', rotation='horizontal', fontsize=20, family='serif')
ax.set_title('Analytic vs. linearized solutions', fontsize=25, family='serif')
ax.legend(loc='best', frameon=False, prop={'family':'serif'},
bbox_to_anchor=(1.0, 1.0))
ax.grid('on')
fig.show()
Explanation: 4.2.3 Accuracy of the linear approximation
End of explanation
fig, ax = plt.subplots(1, 1, figsize=(8,6))
# lower and upper bounds for initial conditions
k_star = solow.cobb_douglas.analytic_steady_state(cobb_douglas_model)
k_l = 0.5 * k_star
k_u = 2.0 * k_star
for k0 in np.linspace(k_l, k_u, 5):
# compute the solution
ti = np.linspace(0, 100, 1000)
analytic_traj = solow.cobb_douglas.analytic_solution(cobb_douglas_model, ti, k0)
# plot this trajectory
ax.plot(ti, analytic_traj[:,1])
# equilibrium value of capital stock (per unit effective labor)
ax.axhline(k_star, linestyle='dashed', color='k', label='$k^*$')
# axes, labels, title, etc
ax.set_xlabel('Time, $t$', fontsize=15, family='serif')
ax.set_ylabel('$k(t)$', rotation='horizontal', fontsize=20, family='serif')
ax.set_title('Analytic solution to a Solow model\nwith Cobb-Douglas production',
fontsize=20, family='serif')
ax.legend(loc=0, frameon=False, bbox_to_anchor=(1.0, 1.0))
ax.grid('on')
plt.show()
k0 = 0.5 * ces_model.steady_state
numeric_trajectory = ces_model.ivp.solve(t0=0, y0=k0, h=0.5, T=100, integrator='dopri5')
ti = numeric_trajectory[:,0]
linearized_trajectory = ces_model.linearized_solution(ti, k0)
Explanation: 4.3 Finite-difference methods
Four of the best, most widely used ODE integrators have been implemented in the scipy.integrate module (they are called dopri5, dop853, lsoda, and vode). Each of these integrators uses some type of adaptive step-size control: the integrator adaptively adjusts the step size $h$ in order to keep the approximation error below some user-specified threshold. The cells below contain code which compares the approximation error of the forward Euler with those of lsoda and vode. Instead of simple linear interpolation (i.e., k=1), I set k=5 for 5th order B-spline interpolation.
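For readers who want to see the underlying scipy.integrate API without the solowpy wrapper, here is a minimal, hedged sketch of stepping equation 4.1.0 forward with the dopri5 integrator (an illustration only, not the implementation behind ivp.solve; the parameter values echo cobb_douglas_params from above):
```python
from scipy import integrate

def k_dot(t, k, g, n, s, delta, alpha):
    # Equation 4.1.0 with Cobb-Douglas production f(k) = k**alpha
    return s * k**alpha - (g + n + delta) * k

solver = integrate.ode(k_dot).set_integrator('dopri5')
solver.set_initial_value(0.5, 0.0)                  # k0, t0
solver.set_f_params(0.02, 0.03, 0.15, 0.05, 0.33)   # g, n, s, delta, alpha
path = []
while solver.successful() and solver.t < 100.0:
    solver.integrate(solver.t + 1.0)
    path.append((solver.t, float(solver.y)))
```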
...finally, we can plot trajectories for different initial conditions. Note that the analytic solutions converge to the long-run equilibrium no matter the initial condition of capital stock (per unit of effective labor) providing a nice graphical demonstration that the Solow model is globally stable.
End of explanation
t0, k0 = 0.0, 0.5
numeric_soln = cobb_douglas_model.ivp.solve(t0, k0, T=100, integrator='lsoda')
fig, ax = plt.subplots(1, 1, figsize=(8,6))
# compute and plot the numeric approximation
t0, k0 = 0.0, 0.5
numeric_soln = cobb_douglas_model.ivp.solve(t0, k0, T=100, integrator='lsoda')
ax.plot(numeric_soln[:,0], numeric_soln[:,1], 'bo', markersize=3.0)
# compute and plot the analytic solution
ti = np.linspace(0, 100, 1000)
analytic_soln = solow.cobb_douglas.analytic_solution(cobb_douglas_model, ti, k0)
ax.plot(ti, analytic_soln[:,1], 'r-')
# equilibrium value of capital stock (per unit effective labor)
k_star = solow.cobb_douglas.analytic_steady_state(cobb_douglas_model)
ax.axhline(k_star, linestyle='dashed', color='k', label='$k^*$')
# axes, labels, title, etc
ax.set_xlabel('Time, $t$', fontsize=15, family='serif')
ax.set_ylabel('$k(t)$', rotation='horizontal', fontsize=20, family='serif')
ax.set_title('Numerical approximation of the solution',
fontsize=20, family='serif')
ax.legend(loc=0, frameon=False, bbox_to_anchor=(1.0, 1.0))
ax.grid('on')
plt.show()
ti = np.linspace(0, 100, 1000)
interpolated_soln = cobb_douglas_model.ivp.interpolate(numeric_soln, ti, k=3)
fig, ax = plt.subplots(1, 1, figsize=(8,6))
# compute and plot the numeric approximation
ti = np.linspace(0, 100, 1000)
interpolated_soln = cobb_douglas_model.ivp.interpolate(numeric_soln, ti, k=3)
ax.plot(ti, interpolated_soln[:,1], 'b-')
# compute and plot the analytic solution
analytic_soln = solow.cobb_douglas.analytic_solution(cobb_douglas_model, ti, k0)
ax.plot(ti, analytic_soln[:,1], 'r-')
# equilibrium value of capital stock (per unit effective labor)
k_star = solow.cobb_douglas.analytic_steady_state(cobb_douglas_model)
ax.axhline(k_star, linestyle='dashed', color='k', label='$k^*$')
# axes, labels, title, etc
ax.set_xlabel('Time, $t$', fontsize=15, family='serif')
ax.set_ylabel('$k(t)$', rotation='horizontal', fontsize=20, family='serif')
ax.set_title('Numerical approximation of the solution',
fontsize=20, family='serif')
ax.legend(loc=0, frameon=False, bbox_to_anchor=(1.0, 1.0))
ax.grid('on')
plt.show()
ti = np.linspace(0, 100, 1000)
residual = cobb_douglas_model.ivp.compute_residual(numeric_soln, ti, k=3)
# extract the raw residuals
capital_residual = residual[:, 1]
# typically, normalize residual by the level of the variable
norm_capital_residual = np.abs(capital_residual) / interpolated_soln[:,1]
# create the plot
fig = plt.figure(figsize=(8, 6))
plt.plot(ti, norm_capital_residual, 'b-', label='$k(t)$')
plt.axhline(np.finfo('float').eps, linestyle='dashed', color='k', label='Machine eps')
plt.xlabel('Time', fontsize=15)
plt.ylim(1e-16, 1)
plt.ylabel('Residuals (normalized)', fontsize=15, family='serif')
plt.yscale('log')
plt.title('Residual', fontsize=20, family='serif')
plt.grid()
plt.legend(loc=0, frameon=False, bbox_to_anchor=(1.0,1.0))
plt.show()
Explanation: 4.3.2 Accuracy of finite-difference methods
End of explanation |
313 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Calgary Coffee Shops
By
Step1: Load from xml to mongodb
Load the data from xml and convert to json so it can be loaded into mongodb.
osmToMongo.py handles the conversion to json as well as the cleaning of the data.
Step2: While converting the data, 2 kinds of cleaning will happen.
Pre-processing
Step3: Coffee Shop Classification
For coffee shops there are different ways of classification. The 'amenity' will be 'cafe' and/or the 'cuisine' will be 'coffee_shop'. Not all are the same. Here is what I mean.
Step4: To start with, running this code on the full dataset I got (the output changed after adding cleaning logic)
Step5: From the first run of this, here are the first 5 results
Step6: From the first run
Step7: Where does coffee shop rank as a cuisine?
Answered using the following query
Step8: Coffee shops seem to be the most popular cuisine added.
Which users have added the most coffee shops?
Who likes coffee the most? Let's find out!
Step9: AKC seems to like coffee a lot. What else do they like?
Step10: I guess AKC just likes adding things to the openmap. Good for them.
What are the most common coffee shop chains?
Group by the name where cuisine is coffee_shop
Step11: Tim Hortons wins! Wait, there are Starbucks and Starbucks Coffee. They should be the same, and then it would win. In fact there are a few on this list that need cleaning. Good Earth Cafe and Good Earth Coffeehouse and Bakery for example. Cleaning this up would be a future improvement (unless you are a big Timmy's fan).
Probably a good way to attack this would be to create a renaming dict, and use it to clean the names of coffee shops (and other kinds of things). Creating the dict would likely be a manual process...
What are the names, phone numbers and postal codes of all the coffee shops? | Python Code:
#Creates and uses sample file if True
USE_SAMPLE = False
k = 10
inputFile = "calgary_canada.osm"
sampleFile = "calgary_canada_sample.osm"
if USE_SAMPLE:
import createTestFile
createTestFile.createTestFile(inputFile,sampleFile,k)
print '%s created from %s for testing.' % (sampleFile,inputFile)
# For the rest of the project the sample file can be used instead of the orginal file
inputFile = sampleFile
Explanation: Calgary Coffee Shops
By: Nick Shaw
Date: 2016-07-13
Project: P3 from the Udacity Data Analyst Nano Degree
Introduction
This report details the investigation of coffee shops in Calgary Canada using data from OpenStreetMap.org.
Specifically, the following questions are answered:
What does the map data look like? Size of file, number of elements, number of nodes, number of ways and number of unique users.
Where does coffee shop rank as a cuisine?
Which users have added the most coffee shops?
What are the most common coffee shop chains?
What are the names, phone numbers and postal codes of all the coffee shops.
The investigation will be conducted using python and mongodb. For the python, not all code will be shown in this notebook and can be found in the file osmToMongo.py in the project folder.
The project folder is hosted on github here.
Source Data
The data was downloaded with a pre-made xml osm file from MapZen. Here is the link.
In the project folder, the file is calgary_canada.osm.
For testing, a sample of calgary_canada.osm was made using createTestFile.py, which was provided as part of this project.
The resulting file is calgary_canada_sample.osm
End of explanation
import osmToMongo
from pymongo import MongoClient
data = osmToMongo.process_map(inputFile)
osmToMongo.loadIntoMongoDB(data)
Explanation: Load from xml to mongodb
Load the data from xml and convert to json so it can be loaded into mongodb.
osmToMongo.py handles the conversion to json as well as the cleaning of the data.
End of explanation
client = MongoClient('localhost', 27017)
db = client.p3
Explanation: While converting the data, 2 kinds of cleaning will happen.
Pre-processing: This happens before data is converted to json. Mainly, it is making consistent and valid keys.
Post-processing: Once an element is converted to JSON, it will be cleaned according to different rules.
Pre-processing Cleaning
To start, the following rules were applied.
'created' index for elements. Elements have attributes related to their creation ("version", "changeset", "timestamp", "user", "uid"). These are combined into a sub-document with the index 'created'
'pos' index. For elements with lat and lon (latitude and longitude) attributes, they will be combined into an float list (2 elements) and given the index of 'pos'.
Valid keys. Elements have sub elements called 'tags'. Each tag has a key (k) and value (v). Before changing to json the keys need to be checked. Keys can only have lowercase letters or an _. For tags with a single ':', a sub-document is created with the string before the ':' being the key in the main document and the string after the ':' being the key in the sub-document. Address is a special case. There are records with addr:<something>, and instead of using addr for the key, address will be used.
Applying these rules, 15289 tags were dropped (pre_process_log_v0.txt). Here is a sample:
Bad tag key, not added: geobase:datasetName
Bad tag key, not added: geobase:acquisitionTechnique
Bad tag key, not added: geobase:datasetName
Bad tag key, not added: geobase:acquisitionTechnique
Bad tag key, not added: catmp-RoadID
Bad tag key, not added: catmp-RoadID
Bad tag key, not added: catmp-RoadID
Tags with keys like geobase:acquisitionTechnique and geobase:datasetName were being dropped because they had a capital letter. These appear to be using a naming standard that separates words using capital letters to start words. I added logic to instead use _ to divide these words. For example, datasetName becomes dataset_name.
I added code to replace - with _ and to make all words lower case (after _ were added where applicable)
Regex code to find lower case letters followed by upper case letters, then the code to make the replacement
```python
lowerUpper = re.compile(r'([a-z])([A-Z])')
def replaceLowerUpper(match):
lower = str(match.group(1))
upper = str(match.group(2))
return lower + "" + upper.lower()
```
Code inside shape_element (in osmToMongo.py), which converts an xml element to a json document. k is the key for a tag.
```python
k = re.sub(lowerUpper,replaceLowerUpper,k)
k = k.lower()
k = str.replace(k,"-","")
```
After adding the code, there were 1181 dropped tags due to key name (pre_process_log_v2.txt). Here is a sample:
Bad tag key, not added: addr:housenumber:first:right
Bad tag key, not added: name_1
Bad tag key, not added: geobase:route_name1:en
Bad tag key, not added: addr:housenumber:first:right
Bad tag key, not added: addr:housenumber:last:right
Bad tag key, not added: addr:housenumber:first:right
Bad tag key, not added: addr:housenumber:last:right
Post-processing Cleaning
I wanted to focus on a few things to help with my questions.
Make sure things are properly classified as coffee shops.
Make sure phone numbers are in a standard format
Make sure postal codes are in a standard format
Note: For auditing, I added some new fields. In practice, I would take this code (or comment it out) once I was satisfied with the data, as there is no need for it once the data is clean.
For the auditing of the data I will run some queries on the db using the python mongodb driver. Connect to the database with the driver first.
End of explanation
cafePipeline = [{"$match" : {"amenity":"cafe"}}]
coffeePipeline = [{"$match" : {"cuisine" :"coffee_shop"}}]
cafeNoCoffeePipeline = [{"$match" : {"$and" : [{"amenity":"cafe"},{"cuisine" :{"$ne":"coffee_shop"}}] }}]
coffeeNoCafePipeline = [{"$match" : {"$and" : [{"amenity": {"$ne":"cafe"}},{"cuisine" :"coffee_shop"}] }}]
print "Places with cafe as amenity: %d" % len(list(db.maps.aggregate(cafePipeline)))
print "Places with coffee_shop as cuisine: %d" % len(list(db.maps.aggregate(coffeePipeline)))
print "Places with cafe as amenity but not coffee_shop as cuisine: %d" % len(list(db.maps.aggregate(cafeNoCoffeePipeline)))
print "Places with coffee_shop as cuisine but not cafe as amenity: %d" % len(list(db.maps.aggregate(coffeeNoCafePipeline)))
Explanation: Coffee Shop Classification
For coffee shops there are different ways of classification. The 'amenity' will be 'cafe' and/or the 'cuisine' will be 'coffee_shop'. Not all are the same. Here is what I mean.
End of explanation
#Query all documents with something in the 'phone_format' field, group by the phone format,
#count and sort by the number in each format
formatPipeline = [{"$match" : {"phone_format":{"$exists":{"$ne":"null"}}}},
{"$group": { "_id": "$phone_format", "total" : {"$sum":1}}},
{"$sort": { "total": -1}}]
results = list(db.maps.aggregate(formatPipeline))
for r in results:
print '%s : %d' % (r['_id'],r['total'])
Explanation: To start with, running this code on the full dataset I got (the output changed after adding cleaning logic):
Places with cafe as amenity: 190
Places with coffee_shop as cuisine: 78
Places with cafe as amenity but not coffee_shop as cuisine: 121
Places with coffee_shop as cuisine but not cafe as amenity: 9
To clean this up, I want all coffee shops to look the same. To do this I decided that I would make it so all cafe's have the cuisine coffee_shop and from here on, to use the field, 'cuisine' to to the research.
The follow code is included in the code for cleaning a document:
```python
if('amenity' in d and d['amenity'] == 'cafe'):
d['cuisine'] = 'coffee_shop'
```
After adding this code, the output for the 'Coffee Audit' was:
Places with cafe as amenity: 190
Places with coffee_shop as cuisine: 199
Places with cafe as amenity but not coffee_shop as cuisine: 0
Places with coffee_shop as cuisine but not cafe as amenity: 9
Now {'cuisine' : 'coffee_shop'} includes all the coffee shops.
Phone Numbers
Check the format of the phone numbers, pick a standard (the most used maybe) then convert all to that standard.
During the clean stage, add a field to all documents that has the format of the number. D is digit Eg (###) ###-####
Code in cleanDocument:
python
if('phone' in d):
d['phone_format'] = re.sub('[0-9]', '#', d['phone'])
print d['phone_format']
Now run a query to see the common formats.
End of explanation
formatPipeline = [{"$match" : {"address.post_format":{"$exists":{"$ne":"null"}}}},
{"$group": { "_id": "$address.post_format", "total" : {"$sum":1}}},
{"$sort": { "total": -1}}]
results = list(db.maps.aggregate(formatPipeline))
for r in results:
print '%s : %d' % (r['_id'],r['total'])
Explanation: From the first run of this, here are the first 5 results:
```
+#-###-###-#### : 135
(###) ###-#### : 47
+# ### ### #### : 25
-###-#### : 21
(###)###-#### : 10
```
I will use the first format as a template and put the rest into that format.
From my knowledge of Canadian numbers there are 10 digits, not including the country code. If the country code is not included, it is assumed to be 1.
All the digits will be extracted (in order), then put into the correct format.
The code for cleaning is:
```python
if('phone' in d and not phoneNumber.match(d['phone'])):
#The phone number with nothing but digits
stripped = re.sub('[^0-9]', '', d['phone'])
#The number should be of length 10 or 11
#If it is 10, add a 1 to the start
if(len(stripped) == 10):
stripped = '1' + stripped
#Put into the correct format +#-###-###-####
if(len(stripped) == 11):
d['phone'] = '+' + stripped[0] \
+ '-' + stripped[1:4] \
+ '-' + stripped[4:7] \
+ '-' + stripped[7:11]
```
and the regex phoneNumber is:
python
phoneNumber = re.compile(r'^[+][0-9][-][0-9]{3}[-][0-9]{3}[-][0-9]{4}$')
Now the results of the phone number query are:
```
+#-###-###-#### : 277
-###-#### ext ## : 1
```
Much better. For more improvement, figure out a format for extensions.
Postal Codes
Check the format of the postal code, and make sure all addresses are using postal codes of the same (most common) format.
End of explanation
#Use the collstats fucntion to get the file size and nunmber of elemements
stats = db.command("collstats", "maps")
total = stats['count']
size = stats['storageSize']
#Use the count function to get the amount of each type
nodes = db.maps.find({'type':'node'}).count()
ways = db.maps.find({'type':'way'}).count()
#Find the unique users query
#Group by users, then group and count the results.
pipeline = [{"$group": { "_id": "$created.user", "total" : {"$sum":1}}},
{"$group": { "_id": "null", "total" : {"$sum":1}}}]
results = list(db.maps.aggregate(pipeline))
uniqueUsers = results[0]['total']
print "\nData Summary"
print "=================================="
print "Storage Size: %d bytes" % size
print "Total number of elements: %d " % total
print "Total number of nodes: %d " % nodes
print "Total number of ways: %d " % ways
print "Total number of unique users: %d " % uniqueUsers
Explanation: From the first run:
```
U#U #U# : 1112
U#U#U# : 48
UU U#U #U# : 2
U#U #U# : 2
U#U-#U# : 1
U#U#U#;U#U #U# : 1
-###-#### : 1
: 1
U#U : 1
```
Not so bad. Some of these just need to be thrown away. The only thing acceptable is (without punctuation or spaces is) L#L#L#.
Anything without 3 numbers and 3 letters not in that order is not valid.
The following code was added to clean the postal codes:
```python
if ('address' in d \
and 'postcode' in d['address'] \
and not postCode.match(d['address']['postcode'])):
#Remove everything but letter and numbers. Make it upper case
stripped = re.sub('[^0-9A-Za-z]', '', d['address']['postcode']).upper()
#Check if the stripped (only letters and numbers) post code is valid
#Drop it if it isn't
if(postCodeStripped.match(stripped)):
d['address']['postcode'] = stripped[0:3] + " " + stripped[3:]
else:
d['address'].pop("postcode", None)
```
Using the regex's:
```python
postCode = re.compile(r'^[A-Z][0-9][A-Z][\s][0-9][A-Z][0-9]$')
postCodeStripped = re.compile(r'^[A-Z][0-9][A-Z][0-9][A-Z][0-9]$')
```
After the cleaning code was added this is the output of the audit:
U#U #U# : 1163
Good. That's better.
Analysis
Okay. The data is loaded into the database and cleaned (kind of). Now to run some queries to answer my original questions.
What does the map data look like? Size of file, number of elements, number of nodes, number of ways and number of unique users.
Answered using the following queries:
End of explanation
#Query how many non null cuisines sorted
pipeline = [{"$match" : {"cuisine":{"$exists":{"$ne":"null"}}}},
{"$group": { "_id": "$cuisine", "total" : {"$sum":1}}},
{"$sort": { "total": -1}}]
results = list(db.maps.aggregate(pipeline))
print '\nTop 10 cuisine types'
i = 0
for result in results:
if i < 10:
print '%s:%s' % (result['_id'],result['total'])
else:
break
i += 1
Explanation: Where does coffee shop rank as a cuisine?
Answered using the following query
End of explanation
#Find nodes where cuisine is coffee_shop, then group and count the users (where there is a user) then sort
pipeline = [{"$match" : {"cuisine":"coffee_shop"}},
{"$match" : {"created.user":{"$exists":{"$ne":"null"}}}},
{"$group": { "_id": "$created.user", "total" : {"$sum":1}}},
{"$sort": { "total": -1}}]
results = list(db.maps.aggregate(pipeline))
print '\nTop 5 coffee lovers'
i = 0
for result in results:
if i < 5:
print '%s:%s' % (result['_id'],result['total'])
else:
break
i += 1
Explanation: Coffee shops seem to be the most popular cuisine added.
Which users have added the most coffee shops?
Who likes coffee the most? Let's find out!
End of explanation
#Find nodes where user is AKC, then group and count the cuisines (where there is a user) then sort
pipeline = [{"$match" : {"created.user":"AKC"}},
{"$match" : {"cuisine":{"$exists":{"$ne":"null"}}}},
{"$group": { "_id": "$cuisine", "total" : {"$sum":1}}},
{"$sort": { "total": -1}}]
results = list(db.maps.aggregate(pipeline))
for result in results:
print '%s:%s' % (result['_id'],result['total'])
Explanation: AKC seems to like coffee a lot. What else do they like?
End of explanation
#Find all coffee shops with names. Group by name and report and sort the count
pipeline = [{"$match" : {"cuisine":"coffee_shop"}},
{"$match" : {"name":{"$exists":{"$ne":"null"}}}},
{"$group": { "_id": "$name", "total" : {"$sum":1}}},
{"$sort": { "total": -1}}]
results = list(db.maps.aggregate(pipeline))
print '\nTop 10 coffee shops'
i = 0
for result in results:
if i < 10:
print '%s:%s' % (result['_id'],result['total'])
else:
break
i += 1
Explanation: I guess AKC just likes adding things to the openmap. Good for them.
What are the most common coffee shop chains?
Group by the name where cuisine is coffee_shop
End of explanation
#Find node where cuisine is coffee_shop and that have name, address and postal code
pipeline = [{"$match" : {"cuisine":"coffee_shop"}},
{"$match" : {"$and": [{"name":{"$exists":{"$ne":"null"}}},
{"phone":{"$exists":{"$ne":"null"}}},
{"address.postcode":{"$exists":{"$ne":"null"}}}]}}]
results = list(db.maps.aggregate(pipeline))
for r in results:
print "name: %s, phone: %s, postal code: %s" % (r['name'],r['phone'],r['address']['postcode'])
Explanation: Tim Hortons wins! Wait, there are Starbucks and Starbucks Coffee. They should be the same, and then it would win. In fact there are a few on this list that need cleaning. Good Earth Cafe and Good Earth Coffeehouse and Bakery for example. Cleaning this up would be a future improvement (unless you are a big Timmy's fan).
Probably a good way to attack this would be to create a renaming dict, and use it to clean the names of coffee shops (and other kinds of things). Creating the dict would likely be a manual process...
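A hedged sketch of what that could look like (the mapping and helper below are hypothetical and would have to be filled in by hand after inspecting the query output):
```python
# Illustrative renaming dict; entries would be built manually
name_mapping = {"Starbucks Coffee": "Starbucks",
                "Good Earth Cafe": "Good Earth Coffeehouse and Bakery"}

def clean_name(d):
    # Swap a known name variant for its canonical form, if present
    if 'name' in d and d['name'] in name_mapping:
        d['name'] = name_mapping[d['name']]
    return d
```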
What are the names, phone numbers and postal codes of all the coffee shops?
End of explanation |
314 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Solution of Fox et al. 2015
Step1: First, we read the input, and take a look at the column names
Step2: Extract the unique manuscripts and count them
Step3: Now we want to elaborate the data, by saving into lists the number of reviewers, the final decision, and the year for each manuscript. At the end of the code, we convert the lists into np.arrays, as it is much easier to subset them.
Step4: Now we write a function that takes a year as input, and prints the rejection rate for each number of reviewers, along with some other summary information. If we call the function with 'all' instead of a year, then the analysis is performed on the whole data set.
Step5: ### Compile a table measuring the probability of rejection given the number of reviewers. Does having more reviewers increase the probability of being rejected?
Step6: It seems so. Especially, look at the difference between one and two reviewers...
Repeat the analysis above for each year represented in the database.
We can simply call the function for each year. For example | Python Code:
import pandas
import numpy as np
Explanation: Solution of Fox et al. 2015
End of explanation
fox = pandas.read_csv("../data/Fox2015_data.csv")
fox.columns
Explanation: First, we read the input, and take a look at the column names
End of explanation
unique_ms = list(set(fox['MsID']))
num_ms = len(unique_ms)
print(num_ms)
Explanation: Extract the unique manuscripts and count them
End of explanation
num_reviewers = []
final_decision = []
year = []
for ms in unique_ms:
# extract the rows
subset = fox[fox['MsID'] == ms]
# count number of reviewers by summing ReviewerAgreed
num_reviewers.append(sum(subset['ReviewerAgreed']))
# extract final decision
if list(subset['FinalDecision'])[0] == 1:
final_decision.append(1)
else:
final_decision.append(0)
# extract year
year.append(list(subset['Year'])[0])
# convert to np.array
num_reviewers = np.array(num_reviewers)
final_decision = np.array(final_decision)
year = np.array(year)
Explanation: Now we want to elaborate the data, by saving into lists the number of reviewers, the final decision, and the year for each manuscript. At the end of the code, we convert the lists into np.arrays, as it is much easier to subset them.
End of explanation
def get_prob_rejection(my_year = 'all'):
# subset the data
if my_year != 'all':
my_num_reviewers = num_reviewers[year == my_year]
my_final_decision = final_decision[year == my_year]
else:
my_num_reviewers = num_reviewers
my_final_decision = final_decision
# start printing output
print("===============================")
print("Year:", my_year)
print("Submissions:", len(my_final_decision))
print("Overall rejection rate:",
round(my_final_decision.mean(),3))
print("NumRev", '\t', "NumMs", '\t', "rejection rate")
for i in range(max(my_num_reviewers) + 1):
print(i, '\t',
len(my_final_decision[my_num_reviewers == i]), '\t',
round(my_final_decision[my_num_reviewers == i].mean(), 3))
print("===============================")
Explanation: Now we write a function that takes a year as input, and prints the rejection rate for each number of reviewers, along with some other summary information. If we call the function with 'all' instead of a year, then the analysis is performed on the whole data set.
End of explanation
get_prob_rejection('all')
Explanation: ### Compile a table measuring the probability of rejection given the number of reviewers. Does having more reviewers increase the probability of being rejected?
End of explanation
get_prob_rejection(2009)
for yr in range(2004, 2015):
get_prob_rejection(yr)
Explanation: It seems so. Especially, look at the difference between one and two reviewers...
Repeat the analysis above for each year represented in the database.
We can simply call the function for each year. For example:
End of explanation |
315 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
We can see that the method does not converge with the number of clusters. It stabilizes slightly at around 10 clusters, so that may be a useful number of clusters, but the performance is still not very good and there is no number that is clearly optimal.
Step1: Con un unico componente se logra explicar casi toda la varianza. Sin embargo incluso con esta separacion en componentes principales, los diferentes tipos de vino se sobrelapan especialmente en el rango de -0.05 a 0.05 en la componente principal. Esto se ve un poco mejor en la figura del medio donde se grafican las dos componentes principales. Si bien se puede distinguir un partron, los componentes principales no separan a los puntos verdes de los rojos. Esto significa que PCA no resulta especialmente util para distinguir entre los distintos tipos de vino y clasificarlos. | Python Code:
fig,ax=subplots(3,3,figsize=(10, 10))
n=1
for i in range(3):
for j in range(3):
ax[i,j].scatter(X[:,0],X[:,n],c=Y)
n+=1
Xnorm=sklearn.preprocessing.normalize(X)
pca=sklearn.decomposition.PCA()
pca.fit(Xnorm)
fig,ax=subplots(1,3,figsize=(16, 4))
ax[0].scatter(pca.transform(X)[:,0],Y,c=Y)
ax[0].set_xlabel('Componente principal')
ax[0].set_ylabel('Tipo de vino')
ax[1].scatter(pca.transform(X)[:,0],pca.transform(X)[:,1],c=Y)
ax[1].set_xlabel('Componente principal')
ax[1].set_ylabel('segunda componente principal')
ax[2].plot(pca.explained_variance_ratio_)
xlabel('numero componentes')
ylabel('Varianza explicada')
Explanation: We can see that the method does not converge with the number of clusters. It stabilizes slightly at around 10 clusters, so that may be a useful number of clusters, but the performance is still not very good and there is no number that is clearly optimal.
End of explanation
n=50
N=10
treesScore=zeros(n)
for k in range(N):
for i,j in zip(logspace(0,1.5,n),range(n)):
rf = RandomForestClassifier(n_estimators=int(i))
rf.fit(X,Y)
treesScore[j]+=rf.score(X,Y)*1.0/N
plot(logspace(0,1.5,n),treesScore)
xscale('log')
Explanation: A single component explains almost all of the variance. However, even with this separation into principal components, the different wine types overlap, especially in the range from -0.05 to 0.05 of the first principal component. This is seen a bit better in the middle figure, where the two principal components are plotted against each other. Although a pattern can be distinguished, the principal components do not separate the green points from the red ones. This means that PCA is not especially useful for distinguishing between the different wine types and classifying them.
End of explanation |
316 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<center> <h1>Python in the Lab</h1> </center>
Topics
Python
Control Flow
Data Structures
Modules and Packages
Object-oriented programming
Iterators
Generators
Decorators
Magic Methods
Context Manager
All the other cool stuff
Science
Plotting
Numerical Calculations
Reading/Writing Data (Txt, CSV, Binary, HDF, ...)
Fitting
FFT
Symbolic Math
Plotly
Pymeasure
Tools
IPython and Jupyter
Spyder IDE
pip/PyPI and conda
Git and Github
virtualenv
Be Pythonic
Zen of Python
PEP 8
Docstrings and Spinx
Ducktyping
Design Patterns
Functional programming
Unicode and UTF-8
Speed
Numpy
Cython
Python
<a href='http
Step1: Documentation
Quickstart
Video Tutorial with Notebooks
Step2: <a href="http
Step3: <i>"NumPy is an extension to the Python programming language, adding <b>support for large, multi-dimensional arrays and matrices</b>, along with a large library of <b>high-level mathematical functions</b> to operate on these arrays.
Using NumPy in Python gives <b>functionality comparable to MATLAB</b> ..."</i>, Wikipedia
* Documentation
* Quickstart
* Video Tutorial with Slides
Step4: <a href="http
Step5: <i>matplotlib is a <b>plotting library</b> for the Python programming language and its numerical mathematics extension NumPy.
There is also a procedural "pylab" interface ..., designed to closely resemble that of MATLAB.</i>
Documentation
Step6: <a href='http
Step7: <i>"SciPy contains modules for <b>optimization, linear algebra, integration, interpolation, special functions, FFT, signal and image processing, ODE solvers</b> and other tasks common in science and engineering."</i>, Wikipedia
Documentation
Book
Step8: <a href='http
Step9: <a href='http
Step10: <i>"HDF5 is a <b>data model, library, and file format</b> for storing and managing data. It supports an <b>unlimited variety of datatypes</b>, and is designed for <b>flexible and efficient I/O and for high volume and complex data</b>. HDF5 is <b>portable and is extensible</b>, allowing applications to evolve in their use of HDF5. The HDF5 Technology suite includes tools and applications for managing, manipulating, viewing, and analyzing data in the HDF5 format."</i>, HDF Group
Documentation
Quickstart
Video Introduction
Step11: <a href='http
Step12: <i>"Cython is a compiled language that generates <b>CPython extension modules</b>. These extension modules can then be loaded and used by regular Python code using the import statement.
It works by producing a standard Python module. However, the behavior differs from standard Python in that the module code, originally written in Python, is translated into C. While <b>the resulting code is fast</b>, ..."
</i>, Wikipedia
Documentation
Video Tutorial with Slides
Step13: Examples
Typical imports
Step14: Simple Plot
Step15: Interactive
Step16: Subplots
Step17: Fitting | Python Code:
import IPython
IPython.__version__
Explanation: <center> <h1>Python in the Lab</h1> </center>
Topics
Python
Control Flow
Data Structures
Modules and Packages
Object-oriented programming
Iterators
Generators
Decorators
Magic Methods
Context Manager
All the other cool stuff
Science
Plotting
Numerical Calculations
Reading/Writing Data (Txt, CSV, Binary, HDF, ...)
Fitting
FFT
Symbolic Math
Plotly
Pymeasure
Tools
IPython and Jupyter
Spyder IDE
pip/PyPI and conda
Git and Github
virtualenv
Be Pythonic
Zen of Python
PEP 8
Docstrings and Spinx
Ducktyping
Design Patterns
Functional programming
Unicode and UTF-8
Speed
Numpy
Cython
Python
<a href='http://www.python.org'>
<img src="images/python_logo.png", align='left'>
</a>
Documentation
Online Course: Python 3 Tutorial (English and German)
Online Book with some Videos: Automate the Boring Stuff with Python.
Online Book and interactive Tutorial: How to Think Like a Computer Scientist: Learning with Python 3
PyCon: Videos of 2015, 2014, 2013, ...
SyiPy: Videos of 2015, 2014, 2013, ...
EuroPython: Videos of 2015, 2014, ...
Scientific Python Distributions
All distributions provide the same functionality.
<a href="https://winpython.github.io/">
<img src="images/winpython_logo.png", align="left">
</a>
portable
Do not download the Qt5 version at the moment.
<a href="https://www.continuum.io/">
<img src="images/anaconda_logo.png", align="left">
</a>
platform independent (Window, Linux, Mac)
<a href='https://www.enthought.com/products/canopy/'>
<img src='images/canopy_logo.png', align='left'>
</a>
platform independent (Window, Linux, Mac)
Scientific Packages
<a href='http://ipython.org/'>
<img src="images/ipython_logo.png", align="left"*>
</a>
End of explanation
from IPython.display import YouTubeVideo
YouTubeVideo("05fA_DXgW-Y")
Explanation: Documentation
Quickstart
Video Tutorial with Notebooks: IPython & Jupyter in depth: high productivity interactive and parallel python, PyCon 2015
Book: IPython Interactive Computing and Visualization Cookbook, Packt Publishing (2014)
End of explanation
import numpy
numpy.__version__
Explanation: <a href="http://numpy.org/">
<img src="images/numpy_logo.jpg", align="left">
</a>
End of explanation
from IPython.display import YouTubeVideo
YouTubeVideo("1zmV8lZsHF4")
Explanation: <i>"NumPy is an extension to the Python programming language, adding <b>support for large, multi-dimensional arrays and matrices</b>, along with a large library of <b>high-level mathematical functions</b> to operate on these arrays.
Using NumPy in Python gives <b>functionality comparable to MATLAB</b> ..."</i>, Wikipedia
* Documentation
* Quickstart
* Video Tutorial with Slides: Introduction to NumPy, SciPy 2015
* Book: Numpy Beginner's Guide - Third Edition, Packt Publishing (2015)
End of explanation
import matplotlib
matplotlib.__version__
Explanation: <a href="http://matplotlib.org/">
<img src="images/matplotlib_logo.png", align="left">
</a>
End of explanation
from IPython.display import YouTubeVideo
YouTubeVideo("MKucn8NtVeI")
Explanation: <i>matplotlib is a <b>plotting library</b> for the Python programming language and its numerical mathematics extension NumPy.
There is also a procedural "pylab" interface ..., designed to closely resemble that of MATLAB.</i>
Documentation: Guides, FAQ, Resources, API, ...
Gallery: a wide variety of plots with code examples
Quickstart
Video Tutorial with Notebooks: Anatomy of matplotlib, SciPy (2015)
Book: Mastering matplotlib, Packt Publishing (2015)
End of explanation
import scipy
scipy.__version__
Explanation: <a href='http://www.scipy.org'>
<img src="images/scipy_logo.png", align="left">
</a>
End of explanation
from IPython.display import YouTubeVideo
YouTubeVideo('0CFFTJUZ2dc')
Explanation: <i>"SciPy contains modules for <b>optimization, linear algebra, integration, interpolation, special functions, FFT, signal and image processing, ODE solvers</b> and other tasks common in science and engineering."</i>, Wikipedia
Documentation
Book: Learning SciPy for Numerical and Scientific Computing Second Edition, Packt Publishing (2015)
<a href='http://pandas.pydata.org/'>
<img src='images/pandas_logo.png', align="left">
</a>
<i>"Pandas is a software library written for the Python programming language for <b>data manipulation and analysis</b>. In particular, it offers <b>data structures and operations for manipulating numerical tables and time series.</b>"</i>, Wikpedia
Documentation
Quickstart
Video Tutorial with Notebooks: Analyzing and Manipulating Data with Pandas, SciPy (2015)
Book: Python for Data Analysis: Data Wrangling with Pandas, NumPy, and IPython, O'Reilly and Associates (2012)
End of explanation
from IPython.display import YouTubeVideo
YouTubeVideo('Lgp442bibDM')
Explanation: <a href='http://www.sympy.org'>
<img src='images/sympy_logo.png', align='left'>
</a>
<i>"SymPy is a Python library for symbolic computation...
SymPy includes features ranging from basic symbolic arithmetic to calculus, algebra, discrete mathematics and quantum physics. It is capable of formatting the result of the computations as LaTeX code."</i> Wikipedia
Documentation
Quickstart
Video with Slides: SymPy, SciPy (2014)
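A tiny illustrative snippet of the kind of symbolic work described above (a sketch, not material from the linked talk):
```python
import sympy as sp

x = sp.symbols('x')
expr = sp.integrate(sp.sin(x) * sp.exp(-x), x)  # symbolic integration
print(sp.latex(expr))                           # render the result as LaTeX
```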
End of explanation
import h5py
h5py.__version__
Explanation: <a href='http://www.h5py.org'>
<img src='images/hdf_logo.jpg', align='left'>
</a>
End of explanation
from IPython.display import YouTubeVideo
YouTubeVideo('nddj5OA8LJo')
Explanation: <i>"HDF5 is a <b>data model, library, and file format</b> for storing and managing data. It supports an <b>unlimited variety of datatypes</b>, and is designed for <b>flexible and efficient I/O and for high volume and complex data</b>. HDF5 is <b>portable and is extensible</b>, allowing applications to evolve in their use of HDF5. The HDF5 Technology suite includes tools and applications for managing, manipulating, viewing, and analyzing data in the HDF5 format."</i>, HDF Group
Documentation
Quickstart
Video Introduction: HDF5 is Eating the World, SciPy (2015)
Book: Python and HDF5, O'Reilly Media (2013)
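A minimal, hedged sketch of the write/read workflow described above (the file and dataset names are made up for illustration):
```python
import numpy as np
import h5py

with h5py.File('example.hdf5', 'w') as f:   # write a dataset
    f.create_dataset('measurement', data=np.random.rand(100, 3))

with h5py.File('example.hdf5', 'r') as f:   # read it back
    data = f['measurement'][:]
```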
End of explanation
import Cython
Cython.__version__
Explanation: <a href='http://cython.org/'>
<img src='images/cython_logo.jpg', align='left'>
</a>
End of explanation
from IPython.display import YouTubeVideo
YouTubeVideo('gMvkiQ-gOW8')
Explanation: <i>"Cython is a compiled language that generates <b>CPython extension modules</b>. These extension modules can then be loaded and used by regular Python code using the import statement.
It works by producing a standard Python module. However, the behavior differs from standard Python in that the module code, originally written in Python, is translated into C. While <b>the resulting code is fast</b>, ..."
</i>, Wikipedia
Documentation
Video Tutorial with Slides: Cython: Blend the Best of Python and C++, SciPy 2015
Book Cython, A Guide for Python Programmers, O'Reilly Media (2015)
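As a rough illustration of the notebook workflow (a sketch; it assumes the Cython Jupyter extension has been loaded first with %load_ext Cython):
```python
%%cython
def csum(int n):
    # Typed loop variables let Cython generate fast C code
    cdef long total = 0
    cdef int i
    for i in range(n):
        total += i
    return total
```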
End of explanation
# Import necessary packages
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
# Set up matplotlib for nice plotting in the notebook
%matplotlib notebook
plt.style.use('seaborn-notebook')
Explanation: Examples
Typical imports
End of explanation
x = np.linspace(0, 4 * np.pi, 200)
rad = x / np.pi
plt.figure(figsize=(12, 3))
line, = plt.plot(rad, 2 * np.sin(x))
plt.ylim(-5, 5)
plt.grid()
plt.tight_layout()
Explanation: Simple Plot
End of explanation
from ipywidgets import interact
@interact(a=(1, 4), f=(0.1, 2, 0.1), phi=(0, 2, 0.1))
def update(a=2, f = 1, phi=0):
line.set_ydata(a * np.sin((x + phi * np.pi) / f))
Explanation: Interactive
End of explanation
x = np.linspace(0, 4 * np.pi, 200)
rad = x / np.pi
#Create some noise
noise = 0.75 * np.random.randn(x.size)
# Define different harmonic functions
y0 = 1.0 * np.sin(x + 0) + noise
y1 = 1.5 * np.sin(x + np.pi / 2) + noise
y2 = 2.5 * np.sin(x + np.pi) + noise
# Plot everything
fig, axs = plt.subplots(3 , 1, figsize=(12, 6))
axs[0].plot(rad, y0, 'b.')
axs[0].set_xticks([])
axs[1].plot(rad, y1, 'g.')
axs[1].set_xticks([])
axs[2].plot(rad, y2, 'k.')
axs[2].set_xlabel('x / 2$\pi$')
for ax in axs:
ax.set_ylim(-5.5, 5.5)
plt.tight_layout(h_pad=0)
Explanation: Subplots
End of explanation
# Define the fit function
def sin(x, a, phi):
return a * np.sin(x + phi)
# Find the fit parameters
(a0, phi0), *err = curve_fit(sin, x, y0)
(a1, phi1), *err = curve_fit(sin, x, y1)
(a2, phi2), *err = curve_fit(sin, x, y2)
# Plot fits into subplots
axs[0].plot(rad, sin(x, a0, phi0), 'r--', lw=3, label='${:.2f} \cdot Sin(x + {:.2f}\pi$)'.format(a0, phi0 / np.pi))
axs[1].plot(rad, sin(x, a1, phi1), 'r--', lw=3, label='${:.2f} \cdot Sin(x + {:.2f}\pi$)'.format(a1, phi1 / np.pi))
axs[2].plot(rad, sin(x, a2, phi2), 'r--', lw=3, label='${:.2f} \cdot Sin(x + {:.2f}\pi$)'.format(a2, phi2 / np.pi))
for ax in axs:
ax.legend(loc=4)
Explanation: Fitting
End of explanation |
317 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The
Step1:
Step2: Now, we can create an
Step3: Epochs behave similarly to
Step4: You can select subsets of epochs by indexing the
Step5: Note the '/'s in the event code labels. These separators allow tag-based
selection of epoch sets; every string separated by '/' can be entered, and
returns the subset of epochs matching any of the strings. E.g.,
Step6: Note that MNE will not complain if you ask for tags not present in the
object, as long as it can find some match
Step7: It is also possible to iterate through
Step8: You can manually remove epochs from the Epochs object by using
Step9: If you wish to save the epochs as a file, you can do it with
Step10: Later on you can read the epochs with
Step11: If you wish to look at the average across trial types, then you may do so,
creating an | Python Code:
import mne
import os.path as op
import numpy as np
from matplotlib import pyplot as plt
Explanation: The :class:Epochs <mne.Epochs> data structure: epoched data
:class:Epochs <mne.Epochs> objects are a way of representing continuous
data as a collection of time-locked trials, stored in an array of shape
(n_events, n_channels, n_times). They are useful for many statistical
methods in neuroscience, and make it easy to quickly overview what occurs
during a trial.
End of explanation
data_path = mne.datasets.sample.data_path()
# Load a dataset that contains events
raw = mne.io.read_raw_fif(
op.join(data_path, 'MEG', 'sample', 'sample_audvis_raw.fif'))
# If your raw object has a stim channel, you can construct an event array
# easily
events = mne.find_events(raw, stim_channel='STI 014')
# Show the number of events (number of rows)
print('Number of events:', len(events))
# Show all unique event codes (3rd column)
print('Unique event codes:', np.unique(events[:, 2]))
# Specify event codes of interest with descriptive labels.
# This dataset also has visual left (3) and right (4) events, but
# to save time and memory we'll just look at the auditory conditions
# for now.
event_id = {'Auditory/Left': 1, 'Auditory/Right': 2}
Explanation: :class:Epochs <mne.Epochs> objects can be created in three ways:
1. From a :class:Raw <mne.io.Raw> object, along with event times
2. From an :class:Epochs <mne.Epochs> object that has been saved as a
.fif file
3. From scratch using :class:EpochsArray <mne.EpochsArray>. See
tut_creating_data_structures
End of explanation
epochs = mne.Epochs(raw, events, event_id, tmin=-0.1, tmax=1,
baseline=(None, 0), preload=True)
print(epochs)
Explanation: Now, we can create an :class:mne.Epochs object with the events we've
extracted. Note that epochs constructed in this manner will not have their
data available until explicitly read into memory, which you can do with
:func:get_data <mne.Epochs.get_data>. Alternatively, you can use
preload=True.
Expose the raw data as epochs, cut from -0.1 s to 1.0 s relative to the event
onsets
End of explanation
print(epochs.events[:3])
print(epochs.event_id)
Explanation: Epochs behave similarly to :class:mne.io.Raw objects. They have an
:class:info <mne.Info> attribute that has all of the same
information, as well as a number of attributes unique to the events contained
within the object.
End of explanation
print(epochs[1:5])
print(epochs['Auditory/Right'])
Explanation: You can select subsets of epochs by indexing the :class:Epochs <mne.Epochs>
object directly. Alternatively, if you have epoch names specified in
event_id then you may index with strings instead.
End of explanation
print(epochs['Right'])
print(epochs['Right', 'Left'])
Explanation: Note the '/'s in the event code labels. These separators allow tag-based
selection of epoch sets; every string separated by '/' can be entered, and
returns the subset of epochs matching any of the strings. E.g.,
End of explanation
epochs_r = epochs['Right']
epochs_still_only_r = epochs_r[['Right', 'Left']]
print(epochs_still_only_r)
try:
epochs_still_only_r["Left"]
except KeyError:
print("Tag-based selection without any matches raises a KeyError!")
Explanation: Note that MNE will not complain if you ask for tags not present in the
object, as long as it can find some match: the below example is parsed as
(inclusive) 'Right' OR 'Left'. However, if no match is found, an error is
returned.
End of explanation
# These will be epochs objects
for i in range(3):
print(epochs[i])
# These will be arrays
for ep in epochs[:2]:
print(ep)
Explanation: It is also possible to iterate through :class:Epochs <mne.Epochs> objects
in this way. Note that behavior is different if you iterate on Epochs
directly rather than indexing:
End of explanation
epochs.drop([0], reason='User reason')
epochs.drop_bad(reject=dict(grad=2500e-13, mag=4e-12, eog=200e-6), flat=None)
print(epochs.drop_log)
epochs.plot_drop_log()
print('Selection from original events:\n%s' % epochs.selection)
print('Removed events (from numpy setdiff1d):\n%s'
% (np.setdiff1d(np.arange(len(events)), epochs.selection).tolist(),))
print('Removed events (from list comprehension -- should match!):\n%s'
% ([li for li, log in enumerate(epochs.drop_log) if len(log) > 0]))
Explanation: You can manually remove epochs from the Epochs object by using
:func:epochs.drop(idx) <mne.Epochs.drop>, or by using rejection or flat
thresholds with :func:epochs.drop_bad(reject, flat) <mne.Epochs.drop_bad>.
You can also inspect the reason why epochs were dropped by looking at the
list stored in epochs.drop_log or plot them with
:func:epochs.plot_drop_log() <mne.Epochs.plot_drop_log>. The indices
from the original set of events are stored in epochs.selection.
End of explanation
epochs_fname = op.join(data_path, 'MEG', 'sample', 'sample-epo.fif')
epochs.save(epochs_fname, overwrite=True)
Explanation: If you wish to save the epochs as a file, you can do it with
:func:mne.Epochs.save. To conform to MNE naming conventions, the
epochs file names should end with '-epo.fif'.
End of explanation
epochs = mne.read_epochs(epochs_fname, preload=False)
Explanation: Later on you can read the epochs with :func:mne.read_epochs. For reading
EEGLAB epochs files see :func:mne.read_epochs_eeglab. We can also use
preload=False to save memory, loading the epochs from disk on demand.
End of explanation
ev_left = epochs['Auditory/Left'].average()
ev_right = epochs['Auditory/Right'].average()
f, axs = plt.subplots(3, 2, figsize=(10, 5))
_ = f.suptitle('Left / Right auditory', fontsize=20)
_ = ev_left.plot(axes=axs[:, 0], show=False, time_unit='s')
_ = ev_right.plot(axes=axs[:, 1], show=False, time_unit='s')
plt.tight_layout()
Explanation: If you wish to look at the average across trial types, then you may do so,
creating an :class:Evoked <mne.Evoked> object in the process. Instances
of Evoked are usually created by calling :func:mne.Epochs.average. For
creating Evoked from other data structures see :class:mne.EvokedArray and
tut_creating_data_structures.
End of explanation |
318 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<!--BOOK_INFORMATION-->
This notebook contains an excerpt from the Python Data Science Handbook by Jake VanderPlas; the content is available on GitHub.
The text is released under the CC-BY-NC-ND license, and code is released under the MIT license. If you find this content useful, please consider supporting the work by buying the book!
In Depth
Step1: Motivating GMM
Step2: From an intuitive standpoint, we might expect that the clustering assignment for some points is more certain than others
Step3: An important observation for k-means is that these cluster models must be circular
Step4: By eye, we recognize that these transformed clusters are non-circular, and thus circular clusters would be a poor fit.
Nevertheless, k-means is not flexible enough to account for this, and tries to force-fit the data into four circular clusters.
This results in a mixing of cluster assignments where the resulting circles overlap
Step5: But because GMM contains a probabilistic model under the hood, it is also possible to find probabilistic cluster assignments—in Scikit-Learn this is done using the predict_proba method.
This returns a matrix of size [n_samples, n_clusters] which measures the probability that any point belongs to the given cluster
Step6: We can visualize this uncertainty by, for example, making the size of each point proportional to the certainty of its prediction; looking at the following figure, we can see that it is precisely the points at the boundaries between clusters that reflect this uncertainty of cluster assignment
Step8: Under the hood, a Gaussian mixture model is very similar to k-means
Step9: With this in place, we can take a look at what the four-component GMM gives us for our initial data
Step10: Similarly, we can use the GMM approach to fit our stretched dataset; allowing for a full covariance the model will fit even very oblong, stretched-out clusters
Step11: This makes clear that GMM addresses the two main practical issues with k-means encountered before.
GMM as Density Estimation
Though GMM is often categorized as a clustering algorithm, fundamentally it is an algorithm for density estimation.
That is to say, the result of a GMM fit to some data is technically not a clustering model, but a generative probabilistic model describing the distribution of the data.
As an example, consider some data generated from Scikit-Learn's make_moons function, which we saw in In Depth
Step12: If we try to fit this with a two-component GMM viewed as a clustering model, the results are not particularly useful
Step13: But if we instead use many more components and ignore the cluster labels, we find a fit that is much closer to the input data
Step14: Here the mixture of 16 Gaussians serves not to find separated clusters of data, but rather to model the overall distribution of the input data.
This is a generative model of the distribution, meaning that the GMM gives us the recipe to generate new random data distributed similarly to our input.
For example, here are 400 new points drawn from this 16-component GMM fit to our original data
Step15: GMM is convenient as a flexible means of modeling an arbitrary multi-dimensional distribution of data.
How many components?
The fact that GMM is a generative model gives us a natural means of determining the optimal number of components for a given dataset.
A generative model is inherently a probability distribution for the dataset, and so we can simply evaluate the likelihood of the data under the model, using cross-validation to avoid over-fitting.
Another means of correcting for over-fitting is to adjust the model likelihoods using some analytic criterion such as the Akaike information criterion (AIC) or the Bayesian information criterion (BIC).
Scikit-Learn's GMM estimator actually includes built-in methods that compute both of these, and so it is very easy to operate on this approach.
Let's look at the AIC and BIC as a function of the number of GMM components for our moon dataset
Step16: The optimal number of clusters is the value that minimizes the AIC or BIC, depending on which approximation we wish to use. The AIC tells us that our choice of 16 components above was probably too many
Step17: Next let's plot the first 100 of these to recall exactly what we're looking at
Step18: We have nearly 1,800 digits in 64 dimensions, and we can build a GMM on top of these to generate more.
GMMs can have difficulty converging in such a high dimensional space, so we will start with an invertible dimensionality reduction algorithm on the data.
Here we will use a straightforward PCA, asking it to preserve 99% of the variance in the projected data
Step19: The result is 41 dimensions, a reduction of nearly 1/3 with almost no information loss.
Given this projected data, let's use the AIC to get a gauge for the number of GMM components we should use
Step20: It appears that around 110 components minimizes the AIC; we will use this model.
Let's quickly fit this to the data and confirm that it has converged
Step21: Now we can draw samples of 100 new points within this 41-dimensional projected space, using the GMM as a generative model
Step22: Finally, we can use the inverse transform of the PCA object to construct the new digits | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns; sns.set()
import numpy as np
Explanation: <!--BOOK_INFORMATION-->
This notebook contains an excerpt from the Python Data Science Handbook by Jake VanderPlas; the content is available on GitHub.
The text is released under the CC-BY-NC-ND license, and code is released under the MIT license. If you find this content useful, please consider supporting the work by buying the book!
In Depth: Gaussian Mixture Models
The k-means clustering model explored in the previous section is simple and relatively easy to understand, but its simplicity leads to practical challenges in its application.
In particular, the non-probabilistic nature of k-means and its use of simple distance-from-cluster-center to assign cluster membership leads to poor performance for many real-world situations.
In this section we will take a look at Gaussian mixture models (GMMs), which can be viewed as an extension of the ideas behind k-means, but can also be a powerful tool for estimation beyond simple clustering.
We begin with the standard imports:
End of explanation
# Generate some data
from sklearn.datasets.samples_generator import make_blobs
X, y_true = make_blobs(n_samples=400, centers=4,
cluster_std=0.60, random_state=0)
X = X[:, ::-1] # flip axes for better plotting
# Plot the data with K Means Labels
from sklearn.cluster import KMeans
kmeans = KMeans(4, random_state=0)
labels = kmeans.fit(X).predict(X)
plt.scatter(X[:, 0], X[:, 1], c=labels, s=40, cmap='viridis');
Explanation: Motivating GMM: Weaknesses of k-Means
Let's take a look at some of the weaknesses of k-means and think about how we might improve the cluster model.
As we saw in the previous section, given simple, well-separated data, k-means finds suitable clustering results.
For example, if we have simple blobs of data, the k-means algorithm can quickly label those clusters in a way that closely matches what we might do by eye:
End of explanation
from sklearn.cluster import KMeans
from scipy.spatial.distance import cdist
def plot_kmeans(kmeans, X, n_clusters=4, rseed=0, ax=None):
labels = kmeans.fit_predict(X)
# plot the input data
ax = ax or plt.gca()
ax.axis('equal')
ax.scatter(X[:, 0], X[:, 1], c=labels, s=40, cmap='viridis', zorder=2)
# plot the representation of the KMeans model
centers = kmeans.cluster_centers_
radii = [cdist(X[labels == i], [center]).max()
for i, center in enumerate(centers)]
for c, r in zip(centers, radii):
ax.add_patch(plt.Circle(c, r, fc='#CCCCCC', lw=3, alpha=0.5, zorder=1))
kmeans = KMeans(n_clusters=4, random_state=0)
plot_kmeans(kmeans, X)
Explanation: From an intuitive standpoint, we might expect that the clustering assignment for some points is more certain than others: for example, there appears to be a very slight overlap between the two middle clusters, such that we might not have complete confidence in the cluster assigment of points between them.
Unfortunately, the k-means model has no intrinsic measure of probability or uncertainty of cluster assignments (although it may be possible to use a bootstrap approach to estimate this uncertainty).
For this, we must think about generalizing the model.
One way to think about the k-means model is that it places a circle (or, in higher dimensions, a hyper-sphere) at the center of each cluster, with a radius defined by the most distant point in the cluster.
This radius acts as a hard cutoff for cluster assignment within the training set: any point outside this circle is not considered a member of the cluster.
We can visualize this cluster model with the following function:
End of explanation
rng = np.random.RandomState(13)
X_stretched = np.dot(X, rng.randn(2, 2))
kmeans = KMeans(n_clusters=4, random_state=0)
plot_kmeans(kmeans, X_stretched)
Explanation: An important observation for k-means is that these cluster models must be circular: k-means has no built-in way of accounting for oblong or elliptical clusters.
So, for example, if we take the same data and transform it, the cluster assignments end up becoming muddled:
End of explanation
from sklearn.mixture import GaussianMixture
gmm = GaussianMixture(n_components=4).fit(X)
labels = gmm.predict(X)
plt.scatter(X[:, 0], X[:, 1], c=labels, s=40, cmap='viridis');
Explanation: By eye, we recognize that these transformed clusters are non-circular, and thus circular clusters would be a poor fit.
Nevertheless, k-means is not flexible enough to account for this, and tries to force-fit the data into four circular clusters.
This results in a mixing of cluster assignments where the resulting circles overlap: see especially the bottom-right of this plot.
One might imagine addressing this particular situation by preprocessing the data with PCA (see In Depth: Principal Component Analysis), but in practice there is no guarantee that such a global operation will circularize the individual data.
These two disadvantages of k-means—its lack of flexibility in cluster shape and lack of probabilistic cluster assignment—mean that for many datasets (especially low-dimensional datasets) it may not perform as well as you might hope.
You might imagine addressing these weaknesses by generalizing the k-means model: for example, you could measure uncertainty in cluster assignment by comparing the distances of each point to all cluster centers, rather than focusing on just the closest.
You might also imagine allowing the cluster boundaries to be ellipses rather than circles, so as to account for non-circular clusters.
It turns out these are two essential components of a different type of clustering model, Gaussian mixture models.
Generalizing E–M: Gaussian Mixture Models
A Gaussian mixture model (GMM) attempts to find a mixture of multi-dimensional Gaussian probability distributions that best model any input dataset.
In the simplest case, GMMs can be used for finding clusters in the same manner as k-means:
End of explanation
probs = gmm.predict_proba(X)
print(probs[:5].round(3))
Explanation: But because GMM contains a probabilistic model under the hood, it is also possible to find probabilistic cluster assignments—in Scikit-Learn this is done using the predict_proba method.
This returns a matrix of size [n_samples, n_clusters] which measures the probability that any point belongs to the given cluster:
End of explanation
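As a small addition of my own: each row of probs is a probability distribution over the four clusters, so the hard assignment used earlier is just the most probable component per point:
print(np.allclose(probs.sum(axis=1), 1))   # every row sums to one
hard_labels = probs.argmax(axis=1)         # most probable cluster per point
print((hard_labels == labels).all())       # agrees with gmm.predict on this data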
size = 50 * probs.max(1) ** 2 # square emphasizes differences
plt.scatter(X[:, 0], X[:, 1], c=labels, cmap='viridis', s=size);
Explanation: We can visualize this uncertainty by, for example, making the size of each point proportional to the certainty of its prediction; looking at the following figure, we can see that it is precisely the points at the boundaries between clusters that reflect this uncertainty of cluster assignment:
End of explanation
from matplotlib.patches import Ellipse
def draw_ellipse(position, covariance, ax=None, **kwargs):
    """Draw an ellipse with a given position and covariance"""
ax = ax or plt.gca()
# Convert covariance to principal axes
if covariance.shape == (2, 2):
U, s, Vt = np.linalg.svd(covariance)
angle = np.degrees(np.arctan2(U[1, 0], U[0, 0]))
width, height = 2 * np.sqrt(s)
else:
angle = 0
width, height = 2 * np.sqrt(covariance)
# Draw the Ellipse
for nsig in range(1, 4):
ax.add_patch(Ellipse(position, nsig * width, nsig * height,
angle, **kwargs))
def plot_gmm(gmm, X, label=True, ax=None):
ax = ax or plt.gca()
labels = gmm.fit(X).predict(X)
if label:
ax.scatter(X[:, 0], X[:, 1], c=labels, s=40, cmap='viridis', zorder=2)
else:
ax.scatter(X[:, 0], X[:, 1], s=40, zorder=2)
ax.axis('equal')
w_factor = 0.2 / gmm.weights_.max()
for pos, covar, w in zip(gmm.means_, gmm.covariances_, gmm.weights_):
draw_ellipse(pos, covar, alpha=w * w_factor)
Explanation: Under the hood, a Gaussian mixture model is very similar to k-means: it uses an expectation–maximization approach which qualitatively does the following:
Choose starting guesses for the location and shape
Repeat until converged:
E-step: for each point, find weights encoding the probability of membership in each cluster
M-step: for each cluster, update its location, normalization, and shape based on all data points, making use of the weights
The result of this is that each cluster is associated not with a hard-edged sphere, but with a smooth Gaussian model.
Just as in the k-means expectation–maximization approach, this algorithm can sometimes miss the globally optimal solution, and thus in practice multiple random initializations are used.
Let's create a function that will help us visualize the locations and shapes of the GMM clusters by drawing ellipses based on the GMM output:
End of explanation
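To make the E-step/M-step loop concrete, here is a minimal toy sketch of my own for a two-component 1-D Gaussian mixture (it assumes SciPy is available and is for intuition only, not the Scikit-Learn implementation):
from scipy.stats import norm
x1d = np.concatenate([np.random.normal(-2, 1, 200), np.random.normal(3, 1, 200)])
mu, sig, w = np.array([-1.0, 1.0]), np.array([1.0, 1.0]), np.array([0.5, 0.5])
for _ in range(50):
    # E-step: responsibility of each component for each point
    r = w * norm.pdf(x1d[:, None], mu, sig)
    r /= r.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means, and spreads from the weighted points
    w = r.mean(axis=0)
    mu = (r * x1d[:, None]).sum(axis=0) / r.sum(axis=0)
    sig = np.sqrt((r * (x1d[:, None] - mu) ** 2).sum(axis=0) / r.sum(axis=0))
print(mu, sig, w)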
gmm = GaussianMixture(n_components=4, random_state=42)
plot_gmm(gmm, X)
Explanation: With this in place, we can take a look at what the four-component GMM gives us for our initial data:
End of explanation
gmm = GaussianMixture(n_components=4, covariance_type='full', random_state=42)
plot_gmm(gmm, X_stretched)
Explanation: Similarly, we can use the GMM approach to fit our stretched dataset; allowing for a full covariance the model will fit even very oblong, stretched-out clusters:
End of explanation
from sklearn.datasets import make_moons
Xmoon, ymoon = make_moons(200, noise=.05, random_state=0)
plt.scatter(Xmoon[:, 0], Xmoon[:, 1]);
Explanation: This makes clear that GMM addresses the two main practical issues with k-means encountered before.
GMM as Density Estimation
Though GMM is often categorized as a clustering algorithm, fundamentally it is an algorithm for density estimation.
That is to say, the result of a GMM fit to some data is technically not a clustering model, but a generative probabilistic model describing the distribution of the data.
As an example, consider some data generated from Scikit-Learn's make_moons function, which we saw in In Depth: K-Means Clustering:
End of explanation
gmm2 = GaussianMixture(n_components=2, covariance_type='full', random_state=0)
plot_gmm(gmm2, Xmoon)
Explanation: If we try to fit this with a two-component GMM viewed as a clustering model, the results are not particularly useful:
End of explanation
gmm16 = GaussianMixture(n_components=16, covariance_type='full', random_state=0)
plot_gmm(gmm16, Xmoon, label=False)
Explanation: But if we instead use many more components and ignore the cluster labels, we find a fit that is much closer to the input data:
End of explanation
Xnew = gmm16.sample(400)[0]
plt.scatter(Xnew[:, 0], Xnew[:, 1]);
Explanation: Here the mixture of 16 Gaussians serves not to find separated clusters of data, but rather to model the overall distribution of the input data.
This is a generative model of the distribution, meaning that the GMM gives us the recipe to generate new random data distributed similarly to our input.
For example, here are 400 new points drawn from this 16-component GMM fit to our original data:
End of explanation
n_components = np.arange(1, 21)
models = [GaussianMixture(n, covariance_type='full', random_state=0).fit(Xmoon)
for n in n_components]
plt.plot(n_components, [m.bic(Xmoon) for m in models], label='BIC')
plt.plot(n_components, [m.aic(Xmoon) for m in models], label='AIC')
plt.legend(loc='best')
plt.xlabel('n_components');
Explanation: GMM is convenient as a flexible means of modeling an arbitrary multi-dimensional distribution of data.
How many components?
The fact that GMM is a generative model gives us a natural means of determining the optimal number of components for a given dataset.
A generative model is inherently a probability distribution for the dataset, and so we can simply evaluate the likelihood of the data under the model, using cross-validation to avoid over-fitting.
Another means of correcting for over-fitting is to adjust the model likelihoods using some analytic criterion such as the Akaike information criterion (AIC) or the Bayesian information criterion (BIC).
Scikit-Learn's GMM estimator actually includes built-in methods that compute both of these, and so it is very easy to operate on this approach.
Let's look at the AIC and BIC as a function as the number of GMM components for our moon dataset:
End of explanation
from sklearn.datasets import load_digits
digits = load_digits()
digits.data.shape
Explanation: The optimal number of clusters is the value that minimizes the AIC or BIC, depending on which approximation we wish to use. The AIC tells us that our choice of 16 components above was probably too many: around 8-12 components would have been a better choice.
As is typical with this sort of problem, the BIC recommends a simpler model.
Notice the important point: this choice of number of components measures how well GMM works as a density estimator, not how well it works as a clustering algorithm.
I'd encourage you to think of GMM primarily as a density estimator, and use it for clustering only when warranted within simple datasets.
Example: GMM for Generating New Data
We just saw a simple example of using GMM as a generative model of data in order to create new samples from the distribution defined by the input data.
Here we will run with this idea and generate new handwritten digits from the standard digits corpus that we have used before.
To start with, let's load the digits data using Scikit-Learn's data tools:
End of explanation
def plot_digits(data):
fig, ax = plt.subplots(10, 10, figsize=(8, 8),
subplot_kw=dict(xticks=[], yticks=[]))
fig.subplots_adjust(hspace=0.05, wspace=0.05)
for i, axi in enumerate(ax.flat):
im = axi.imshow(data[i].reshape(8, 8), cmap='binary')
im.set_clim(0, 16)
plot_digits(digits.data)
Explanation: Next let's plot the first 100 of these to recall exactly what we're looking at:
End of explanation
from sklearn.decomposition import PCA
pca = PCA(0.99, whiten=True) # Float number for n_components defines component number by % var explained
data = pca.fit_transform(digits.data)
data.shape
Explanation: We have nearly 1,800 digits in 64 dimensions, and we can build a GMM on top of these to generate more.
GMMs can have difficulty converging in such a high dimensional space, so we will start with an invertible dimensionality reduction algorithm on the data.
Here we will use a straightforward PCA, asking it to preserve 99% of the variance in the projected data:
End of explanation
n_components = np.arange(50, 210, 10)
models = [GaussianMixture(n, covariance_type='full', random_state=0)
for n in n_components]
aics = [model.fit(data).aic(data) for model in models]
plt.plot(n_components, aics);
Explanation: The result is 41 dimensions, a reduction of nearly 1/3 with almost no information loss.
Given this projected data, let's use the AIC to get a gauge for the number of GMM components we should use:
End of explanation
gmm = GaussianMixture(110, covariance_type='full', random_state=0)
gmm.fit(data)
print(gmm.converged_)
Explanation: It appears that around 110 components minimizes the AIC; we will use this model.
Let's quickly fit this to the data and confirm that it has converged:
End of explanation
data_new = gmm.sample(100)[0]
data_new.shape
Explanation: Now we can draw samples of 100 new points within this 41-dimensional projected space, using the GMM as a generative model:
End of explanation
digits_new = pca.inverse_transform(data_new)
plot_digits(digits_new)
Explanation: Finally, we can use the inverse transform of the PCA object to construct the new digits:
End of explanation |
319 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Reproducible visualization
In "The Functional Art
Step1: World Population Prospects
Step2: First problem... The book states on page 8
Step3: Let's make some art
Step4: For one thing, the line for China doesn't look like the one in the book. Concerning. The other issue is that there are some lines that are going lower than Italy or Spain in 1995-2000 and in 2000-2005 (majority in the Balkans) and that were not on the graph in the book, AFAICT | Python Code:
!wget 'http://esa.un.org/unpd/wpp/DVD/Files/1_Indicators%20(Standard)/EXCEL_FILES/2_Fertility/WPP2015_FERT_F04_TOTAL_FERTILITY.XLS'
Explanation: Reproducible visualization
In "The Functional Art: An introduction to information graphics and visualization" by Alberto Cairo, on page 12 we are presented with a visualization of UN data time series of Fertility rate (average number of children per woman) per country:
Figure 1.6 Highlighting the relevant, keeping the secondary in the background.
Let's try to reproduce this.
Getting the data
The visualization was done in 2012, but limited the visualization to 2010. This should make it easy, in theory, to get the data, since it is historical. These are directly available as excel spreadsheets now, we'll just ignore the last bucket (2010-2015).
Pandas allows loading an excel spreadsheet straight from a URL, but here we will download it first so we have a local copy.
End of explanation
import pandas as pd  # import added so this cell runs on its own
df = pd.read_excel('WPP2015_FERT_F04_TOTAL_FERTILITY.XLS', skiprows=16, index_col = 'Country code')
df = df[df.index < 900]
len(df)
df.head()
Explanation: World Population Prospects: The 2015 Revision
File FERT/4: Total fertility by major area, region and country, 1950-2100 (children per woman)
Estimates, 1950 - 2015
POP/DB/WPP/Rev.2015/FERT/F04
July 2015 - Copyright © 2015 by United Nations. All rights reserved
Suggested citation: United Nations, Department of Economic and Social Affairs, Population Division (2015). World Population Prospects: The 2015 Revision, DVD Edition.
End of explanation
df.rename(columns={df.columns[2]:'Description'}, inplace=True)
df.drop(df.columns[[0, 1, 3, 16]], axis=1, inplace=True) # drop what we dont need
df.head()
highlight_countries = ['Niger','Yemen','India',
'Brazil','Norway','France','Sweden','United Kingdom',
'Spain','Italy','Germany','Japan', 'China'
]
# Subset only countries to highlight, transpose for timeseries
df_high = df[df.Description.isin(highlight_countries)].T[1:]
# Subset the rest of the countries, transpose for timeseries
df_bg = df[~df.Description.isin(highlight_countries)].T[1:]
Explanation: First problem... The book states on page 8:
-- <cite>"Using the filters the site offers, I asked for a table that included the more than 150 countries on which the UN has complete research."</cite>
Yet we have 201 countries (codes 900+ are regions) with complete data. We do not have an easy way to identify which countries were added to this. Still, let's move forward and prep our data.
End of explanation
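Two quick consistency checks of my own on the subsets built above (they assume the column renaming already ran):
print(set(highlight_countries) - set(df.Description))            # should be an empty set
print(len(df_bg.columns), 'background countries,', len(df_high.columns), 'highlighted')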
# background
ax = df_bg.plot(legend=False, color='k', alpha=0.02, figsize=(12,12))
ax.xaxis.tick_top()
# highlighted countries
df_high.plot(legend=False, ax=ax)
# replacement level line
ax.hlines(y=2.1, xmin=0, xmax=12, color='k', alpha=1, linestyle='dashed')
# Average over time on all countries
df.mean().plot(ax=ax, color='k', label='World\naverage')
# labels for highlighted countries on the right side
for country in highlight_countries:
ax.text(11.2,df[df.Description==country].values[0][12],country)
# start y axis at 1
ax.set_ylim(ymin=1)
Explanation: Let's make some art
End of explanation
df.describe()
df[df['1995-2000']<1.25]
df[df['2000-2005']<1.25]
Explanation: For one thing, the line for China doesn't look like the one in the book. Concerning. The other issue is that there are some lines that are going lower than Italy or Spain in 1995-2000 and in 2000-2005 (majority in the Balkans) and that were not on the graph in the book, AFAICT:
End of explanation |
320 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Ball throwing – Level 1 - Python
TP1 (Lab 1)
To begin, your Python program must contain the lines of code below, and the V-REP software must be running.
In V-REP (top left), use the two arrow icons to move the view and look at Poppy from every angle.<br>
In the notebook, use the 'Ctrl+Enter' shortcut to run the commands.
Step1: Add an object
Step2: Some examples of "useful" movements
Step3: Possible solutions
Step4: Still buggy? Try these
Step5: Finished? Stop the simulation here | Python Code:
import time
from poppy.creatures import PoppyTorso
poppy = PoppyTorso(simulator='vrep')
Explanation: Ball throwing – Level 1 - Python
TP1 (Lab 1)
To begin, your Python program must contain the lines of code below, and the V-REP software must be running.
In V-REP (top left), use the two arrow icons to move the view and look at Poppy from every angle.<br>
In the notebook, use the 'Ctrl+Enter' shortcut to run the commands.
End of explanation
io = poppy._controllers[0].io
name = 'cube'
position = [0.2, 0, 1] # X, Y, Z
sizes = [0.15, 0.15, 0.15] # in meters
mass = 0.1 # in kg
io.add_cube(name, position, sizes, mass)
Explanation: Add an object
End of explanation
# open the arms
poppy.l_arm_z.goal_position = 20
poppy.r_arm_z.goal_position = -20
# close the arms
poppy.l_arm_z.goal_position = -20
poppy.r_arm_z.goal_position = 20
poppy.l_shoulder_y.goal_position = -40
poppy.r_shoulder_y.goal_position = -40
# raise the arms
poppy.l_shoulder_y.goto_position(-180,0.1)
poppy.r_shoulder_y.goto_position(-180,0.1)
# throw
poppy.l_shoulder_y.goal_position = -40
poppy.r_shoulder_y.goal_position = -40
poppy.l_arm_z.goal_position = 20
poppy.r_arm_z.goal_position = -20
Explanation: Some examples of "useful" movements:
End of explanation
poppy.reset_simulation()
Explanation: Possible solutions:
volley-style return
catapult
catch and then throw
Hint: adjust the object (shape, size, weight, position, ...)
Missed? That's okay, try again; use these lines to restart:
End of explanation
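One possible way to chain the moves above into a throw is sketched below; it is untested and the timings are assumptions to tune in V-REP (time was imported in the first cell):
poppy.l_shoulder_y.goto_position(-180, 2)   # raise both arms over about 2 seconds
poppy.r_shoulder_y.goto_position(-180, 2)
time.sleep(2.5)
poppy.l_shoulder_y.goal_position = -40      # swing the arms forward
poppy.r_shoulder_y.goal_position = -40
time.sleep(0.2)
poppy.l_arm_z.goal_position = 20            # open the arms to release the cube
poppy.r_arm_z.goal_position = -20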
import pypot
poppy.stop_simulation()
pypot.vrep.close_all_connections()
from poppy.creatures import PoppyTorso
poppy=PoppyTorso(simulator='vrep')
Explanation: Still buggy? Try these lines:
End of explanation
import pypot
poppy.stop_simulation()
pypot.vrep.close_all_connections()
Explanation: Finished? Stop the simulation here:
End of explanation |
321 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Project Euler
Step1: Certain functions in the itertools module may be useful for computing permutations
Step2: The below is what I think should work; however, it takes a while to run, so I end up interrupting the kernel so I don't bog down the system. Finding all the values of the key doesn't take long, so it must be my method of cycling each key through the message that takes too much time. I have been trying to figure out how to use the cycle() or repeat() functions to run the key against the encrypted message. I am going to submit now but will still attempt to fix the problem and then resubmit
Step3: Test of lower half of code by using a set key
Step4: However this still takes too long to finish, so there must be an error in the key_cycler function | Python Code:
assert 65 ^ 42 == 107
assert 107 ^ 42 == 65
assert ord('a') == 97
assert chr(97) == 'a'
Explanation: Project Euler: Problem 59
https://projecteuler.net/problem=59
Each character on a computer is assigned a unique code and the preferred standard is ASCII (American Standard Code for Information Interchange). For example, uppercase A = 65, asterisk (*) = 42, and lowercase k = 107.
A modern encryption method is to take a text file, convert the bytes to ASCII, then XOR each byte with a given value, taken from a secret key. The advantage with the XOR function is that using the same encryption key on the cipher text, restores the plain text; for example, 65 XOR 42 = 107, then 107 XOR 42 = 65.
For unbreakable encryption, the key is the same length as the plain text message, and the key is made up of random bytes. The user would keep the encrypted message and the encryption key in different locations, and without both "halves", it is impossible to decrypt the message.
Unfortunately, this method is impractical for most users, so the modified method is to use a password as a key. If the password is shorter than the message, which is likely, the key is repeated cyclically throughout the message. The balance for this method is using a sufficiently long password key for security, but short enough to be memorable.
Your task has been made easy, as the encryption key consists of three lower case characters. Using cipher.txt (in this directory), a file containing the encrypted ASCII codes, and the knowledge that the plain text must contain common English words, decrypt the message and find the sum of the ASCII values in the original text.
The following cell shows examples of how to perform XOR in Python and how to go back and forth between characters and integers:
End of explanation
from itertools import *
encrypted=open("cipher.txt","r")
message=encrypted.read().split(",")
encrypted.close()
def key_cycler(cycles):
    for n in range(cycles): # will repeat the key for every index of the message; this won't translate the very last character since the length is 400.33
u1=key[0]^int(message[3*n])
unencrypted.insert(3*n,u1) #inserts into corresponding spot in unencrypted list
u2=key[1]^int(message[(3*n)+1])
unencrypted.insert((3*n)+1,u2)
        u3=key[2]^int(message[(3*n)+2]) #XOR each message integer against its corresponding key value
unencrypted.insert((3*n)+2,u3)
Explanation: Certain functions in the itertools module may be useful for computing permutations:
End of explanation
length=len(message)
print(length)
repeat_times=1201/3 #gives me estimate of number of times to cycle through
print(repeat_times)
for a in range(97,123): #the values of lower case letters
for b in range(97,123):
for c in range(97,123):
key=[a,b,c] #iterates through all key values for 3 lowercase letters
unencrypted=[]
key_cycler(400) #cycles key through message and puts into unencrypted
english=[]
for i in unencrypted:
e=chr(i)
english.append(e) #converts from ACSII to character string
english="".join(english) #converts to whole string
if " the " in english: #checks to see if " the " is in message . Like suggested in the Gitter Chat I am assuming this won't appear if not correct key
print(english) # if it does appear for incorrect keys then I can remove the break and print all instance where
print(key) #" the " appears and then select which key produces a completely legible message
break #prints the key that made instance of message and then breaks the for loop so only first message with
# instances of " the " occuring is printed
Explanation: The below is what I think should work; however, it takes a while to run, so I end up interrupting the kernel so I don't bog down the system. Finding all the values of the key doesn't take long, so it must be my method of cycling each key through the message that takes too much time. I have been trying to figure out how to use the cycle() or repeat() functions to run the key against the encrypted message. I am going to submit now but will still attempt to fix the problem and then resubmit
End of explanation
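One possible speed-up, as a sketch of my own rather than the author's final code: pair the message with an endlessly repeating key via itertools.cycle instead of inserting into a list one value at a time:
from itertools import cycle
def decrypt(msg, key):
    # XOR every ciphertext value against the cyclically repeated key
    return "".join(chr(int(c) ^ k) for c, k in zip(msg, cycle(key)))
print(decrypt(message, [97, 97, 97])[:60])   # same 'aaa' test key as the next cell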
key=[97,97,97] #iterates through all key values for 3 lowercase letters
unencrypted=[]
key_cycler(400) #cycles key through message and puts into unencrypted
english=[]
for i in unencrypted:
e=chr(i)
english.append(e) #converts from ACSII to character string
english="".join(english) #converts to whole string
print(english)
Explanation: Test of lower half of code by using a set key
End of explanation
# This cell will be used for grading, leave it at the end of the notebook.
Explanation: However this still takes too long to finish, so there must be an error in the key_cycler function
End of explanation |
322 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Classifying Images With Scikit_Learn
Step1: Naive Bayes Using Scikit_Learn
Step2: Pre-Processing The Data
machine learning algorithms can work only on numeric data, so our next step will be to convert our text-based dataset to a numeric dataset | Python Code:
import sklearn as sk
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from sklearn.datasets import fetch_olivetti_faces
faces = fetch_olivetti_faces()
faces.DESCR
faces.keys()
faces.images.shape
faces.data.shape
faces.target.shape
np.max(faces.data)
np.min(faces.data)
np.median(faces.data)
def print_faces(images , target , top_n):
fig = plt.figure(figsize=(20,20))
for i in range(top_n):
p = fig.add_subplot(20,20,i+1,xticks=[],yticks=[])
p.imshow(images[i],cmap=plt.cm.bone)
p.text(0,14,str(target[i]))
p.text(0,59,str(i))
print_faces(faces.images,faces.target,20)
plt.show()
from sklearn.svm import SVC
from sklearn.cross_validation import cross_val_score,KFold
from scipy.stats import sem
svc_1 = SVC(kernel='linear')
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(faces.data, faces.target, test_size=0.25, random_state=0)
def evaluate_cross_validation(clf, X, y, K):
cv = KFold(len(y) , K, shuffle =True, random_state = 0)
scores = cross_val_score(clf,X,y,cv=cv)
print scores
evaluate_cross_validation(svc_1,X_train,y_train,5)
from sklearn import metrics
def train_and_test(clf, X_train, X_test, y_train, y_test):
clf.fit(X_train, y_train)
print "Accuracy on training Set"
print clf.score(X_train, y_train)
print "Accuracy on testing Set"
print clf.score(X_test, y_test)
y_pred = clf.predict(X_test)
print "Classification Report"
print metrics.classification_report(y_test, y_pred)
print "Confudion Matrix"
print metrics.confusion_matrix(y_test, y_pred)
train_and_test(svc_1, X_train, X_test, y_train, y_test)
glasses = [
(10, 19), (30, 32), (37, 38), (50, 59), (63, 64),
(69, 69), (120, 121), (124, 129), (130, 139), (160, 161),
(164, 169), (180, 182), (185, 185), (189, 189), (190, 192),
(194, 194), (196, 199), (260, 269), (270, 279), (300, 309),
(330, 339), (358, 359), (360, 369)]
def create_target(segments):
y = np.zeros(faces.target.shape[0])
for (start, end) in segments:
y[start:end+1] = 1
return y
target_glasses = create_target(glasses)
X_train, X_test, y_train, y_test = train_test_split(faces.data, target_glasses, test_size=0.25, random_state=0)
svc_2 = SVC(kernel='linear')
evaluate_cross_validation(svc_2, X_train, y_train, 5)
train_and_test(svc_2, X_train, X_test, y_train, y_test)
X_test = faces.data[30:40]
y_test = target_glasses[30:40]
y_test.shape
select = np.ones(target_glasses.shape[0])
select[30:40] = 0
X_train = faces.data[select == 1]
y_train = target_glasses[select == 1]
y_train.shape
svc_3 = SVC(kernel='linear')
train_and_test(svc_3, X_train, X_test, y_train, y_test)
Explanation: Classifying Images With Scikit_Learn
End of explanation
from sklearn.datasets import fetch_20newsgroups
news = fetch_20newsgroups(subset='all')
print type(news.data), type(news.target), type(news.target_names)
print news.target_names
len(news.data)
len(news.target)
news.data[0] #Content of the data at 0th index
news.target[0], news.target_names[news.target[0]] # Target_Name
Explanation: Naive Bayes Using Scikit_Learn
End of explanation
SPLIT_PERC = .75
split_size = int(len(news.data)*SPLIT_PERC)
X_train = news.data[:split_size]
X_test = news.data[split_size:]
y_train = news.target[:split_size]
y_test = news.target[split_size:]
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer, HashingVectorizer, CountVectorizer
clf_1 = Pipeline([('vect', CountVectorizer()), ('clf', MultinomialNB())])
clf_2 = Pipeline([('vect', HashingVectorizer(non_negative=True)), ('clf', MultinomialNB())])
clf_3 = Pipeline([('vect', TfidfVectorizer()), ('clf', MultinomialNB())])
from sklearn.cross_validation import cross_val_score, KFold
from scipy.stats import sem
clfs = [clf_1, clf_2, clf_3]
for clf in clfs:
print clf
evaluate_cross_validation(clf, news.data, news.target, 5)
Explanation: Pre-Processing The Data
machine learning algorithms can work only on numeric data, so our next step will be to convert our text-based dataset to a numeric dataset
End of explanation |
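As a tiny illustration of my own of what "numeric" means here (using the vectorizer already imported above):
tiny_corpus = ["spam and eggs", "spam spam spam"]
tiny_X = TfidfVectorizer().fit_transform(tiny_corpus)
print tiny_X.shape   # (2 documents, number of distinct terms) as a sparse matrix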
323 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
TMY data and diffuse irradiance models
This tutorial explores using TMY data as inputs to different plane of array diffuse irradiance models.
This tutorial has been tested against the following package versions
Step1: Diffuse irradiance models
Make an empty pandas DataFrame for the results.
Step2: Perez
Step3: HayDavies
Step4: Isotropic
Step5: King Diffuse model
Step6: Klucher Model
Step7: Reindl
Step8: Calculate yearly, monthly, daily sums.
Step9: Plot Results
Step10: Daily
Step11: Monthly
Step12: Yearly
Step13: Compute the mean deviation from measured for each model and display as a function of the model | Python Code:
# built-in python modules
import os
import inspect
# scientific python add-ons
import numpy as np
import pandas as pd
# plotting stuff
# first line makes the plots appear in the notebook
%matplotlib inline
import matplotlib.pyplot as plt
# seaborn makes your plots look better
try:
import seaborn as sns
sns.set(rc={"figure.figsize": (12, 6)})
except ImportError:
print('We suggest you install seaborn using conda or pip and rerun this cell')
# finally, we import the pvlib library
import pvlib
# Find the absolute file path to your pvlib installation
pvlib_abspath = os.path.dirname(os.path.abspath(inspect.getfile(pvlib)))
# absolute path to a data file
datapath = os.path.join(pvlib_abspath, 'data', '703165TY.csv')
# read tmy data with year values coerced to a single year
tmy_data, meta = pvlib.tmy.readtmy3(datapath, coerce_year=2015)
tmy_data.index.name = 'Time'
# TMY data seems to be given as hourly data with time stamp at the end
# shift the index 30 Minutes back for calculation of sun positions
tmy_data = tmy_data.shift(freq='-30Min')
tmy_data.GHI.plot()
plt.ylabel('Irradiance (W/m**2)')
tmy_data.DHI.plot()
plt.ylabel('Irradiance (W/m**2)')
surface_tilt = 30
surface_azimuth = 180 # pvlib uses 0=North, 90=East, 180=South, 270=West convention
albedo = 0.2
# create pvlib Location object based on meta data
sand_point = pvlib.location.Location(meta['latitude'], meta['longitude'], tz='US/Alaska',
altitude=meta['altitude'], name=meta['Name'].replace('"',''))
print(sand_point)
solpos = pvlib.solarposition.get_solarposition(tmy_data.index, sand_point)
solpos.plot()
# the extraradiation function returns a simple numpy array
# instead of a nice pandas series. We will change this
# in a future version
dni_extra = pvlib.irradiance.extraradiation(tmy_data.index)
dni_extra = pd.Series(dni_extra, index=tmy_data.index)
dni_extra.plot()
plt.ylabel('Extra terrestrial radiation (W/m**2)')
airmass = pvlib.atmosphere.relativeairmass(solpos['apparent_zenith'])
airmass.plot()
plt.ylabel('Airmass')
Explanation: TMY data and diffuse irradiance models
This tutorial explores using TMY data as inputs to different plane of array diffuse irradiance models.
This tutorial has been tested against the following package versions:
* pvlib 0.2.0
* Python 2.7.10
* IPython 3.2
* pandas 0.16.2
It should work with other Python and Pandas versions. It requires pvlib > 0.2.0 and IPython > 3.0.
Authors:
* Rob Andrews (@Calama-Consulting), Heliolytics, June 2014
* Will Holmgren (@wholmgren), University of Arizona, July 2015
Setup
See the tmy_to_power tutorial for more detailed explanations for the initial setup
End of explanation
diffuse_irrad = pd.DataFrame(index=tmy_data.index)
models = ['Perez', 'Hay-Davies', 'Isotropic', 'King', 'Klucher', 'Reindl']
Explanation: Diffuse irradiance models
Make an empty pandas DataFrame for the results.
End of explanation
diffuse_irrad['Perez'] = pvlib.irradiance.perez(surface_tilt,
surface_azimuth,
dhi=tmy_data.DHI,
dni=tmy_data.DNI,
dni_extra=dni_extra,
solar_zenith=solpos.apparent_zenith,
solar_azimuth=solpos.azimuth,
airmass=airmass)
Explanation: Perez
End of explanation
diffuse_irrad['Hay-Davies'] = pvlib.irradiance.haydavies(surface_tilt,
surface_azimuth,
dhi=tmy_data.DHI,
dni=tmy_data.DNI,
dni_extra=dni_extra,
solar_zenith=solpos.apparent_zenith,
solar_azimuth=solpos.azimuth)
Explanation: HayDavies
End of explanation
diffuse_irrad['Isotropic'] = pvlib.irradiance.isotropic(surface_tilt,
dhi=tmy_data.DHI)
Explanation: Isotropic
End of explanation
diffuse_irrad['King'] = pvlib.irradiance.king(surface_tilt,
dhi=tmy_data.DHI,
ghi=tmy_data.GHI,
solar_zenith=solpos.apparent_zenith)
Explanation: King Diffuse model
End of explanation
diffuse_irrad['Klucher'] = pvlib.irradiance.klucher(surface_tilt, surface_azimuth,
dhi=tmy_data.DHI,
ghi=tmy_data.GHI,
solar_zenith=solpos.apparent_zenith,
solar_azimuth=solpos.azimuth)
Explanation: Klucher Model
End of explanation
diffuse_irrad['Reindl'] = pvlib.irradiance.reindl(surface_tilt,
surface_azimuth,
dhi=tmy_data.DHI,
dni=tmy_data.DNI,
ghi=tmy_data.GHI,
dni_extra=dni_extra,
solar_zenith=solpos.apparent_zenith,
solar_azimuth=solpos.azimuth)
Explanation: Reindl
End of explanation
yearly = diffuse_irrad.resample('A', how='sum').dropna().squeeze() / 1000.0 # kWh
monthly = diffuse_irrad.resample('M', how='sum', kind='period') / 1000.0
daily = diffuse_irrad.resample('D', how='sum') / 1000.0
Explanation: Calculate yearly, monthly, daily sums.
End of explanation
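A small compatibility note of my own: the how= keyword was removed in later pandas releases, so on newer versions the same sums would be written as shown below (left commented, since this tutorial pins the older versions):
# yearly = diffuse_irrad.resample('A').sum().dropna().squeeze() / 1000.0
# monthly = diffuse_irrad.resample('M', kind='period').sum() / 1000.0
# daily = diffuse_irrad.resample('D').sum() / 1000.0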
ax = diffuse_irrad.plot(title='In-plane diffuse irradiance', alpha=.75, lw=1)
ax.set_ylim(0, 800)
ylabel = ax.set_ylabel('Diffuse Irradiance [W]')
plt.legend()
diffuse_irrad.describe()
diffuse_irrad.dropna().plot(kind='density')
Explanation: Plot Results
End of explanation
ax_daily = daily.tz_convert('UTC').plot(title='Daily diffuse irradiation')
ylabel = ax_daily.set_ylabel('Irradiation [kWh]')
Explanation: Daily
End of explanation
ax_monthly = monthly.plot(title='Monthly average diffuse irradiation', kind='bar')
ylabel = ax_monthly.set_ylabel('Irradiation [kWh]')
Explanation: Monthly
End of explanation
yearly.plot(kind='barh')
Explanation: Yearly
End of explanation
mean_yearly = yearly.mean()
yearly_mean_deviation = (yearly - mean_yearly) / yearly * 100.0
yearly_mean_deviation.plot(kind='bar')
Explanation: Compute the mean deviation from measured for each model and display as a function of the model
End of explanation |
324 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Given a 2D set of points spanned by the $x$ and $y$ axes, we will try to fit a line that best approximates the data. The equation of the line, in slope-intercept form, is defined by
Step1: If $N$ = num_points, then the error in fitting a line to the points (also defined as Cost, $C$) can be defined as | Python Code:
import numpy as np               # imports needed by the functions below
import matplotlib.pyplot as plt

def generate_random_points_along_a_line (slope, intercept, num_points, abs_value, abs_noise):
# randomly select x
x = np.random.uniform(-abs_value, abs_value, num_points)
# y = mx + b + noise
y = slope*x + intercept + np.random.uniform(-abs_noise, abs_noise, num_points)
return x, y
def plot_points(x,y):
plt.scatter(x, y)
plt.title('Scatter plot of x and y')
plt.xlabel('x')
plt.ylabel('y')
slope = 4
intercept = -3
num_points = 20
abs_value = 4
abs_noise = 2
x, y = generate_random_points_along_a_line (slope, intercept, num_points, abs_value, abs_noise)
plot_points(x, y)
Explanation: Given a 2D set of points spanned by the $x$ and $y$ axes, we will try to fit a line that best approximates the data. The equation of the line, in slope-intercept form, is defined by: $y = mx + b$.
End of explanation
# this function computes gradient with respect to slope m
def grad_m (x, y, m, b):
return np.sum(np.multiply(-2*(y - (m*x + b)), x))
# this function computes gradient with respect to intercept b
def grad_b (x, y, m, b):
return np.sum(-2*(y - (m*x + b)))
# Performs gradient descent
def gradient_descent (x, y, num_iterations, learning_rate):
# Initialize m and b
m = np.random.uniform(-1, 1, 1)
b = np.random.uniform(-1, 1, 1)
# Update m and b in direction opposite to that of the gradient to minimize loss
for i in range(num_iterations):
m = m - learning_rate * grad_m (x, y, m, b)
b = b - learning_rate * grad_b (x, y, m, b)
# Return final slope and intercept
return m, b
# Plot point along with the best fit line
def plot_line (m, b, x, y):
plot_points(x,y)
plt.plot(x, x*m + b, 'r')
plt.show()
# In general, keep num_iterations high and learning_rate low.
num_iterations = 1000
learning_rate = 0.0001
m, b = gradient_descent (x, y, num_iterations, learning_rate)
plot_line (m, b, x, y)
plt.show()
Explanation: If $N$ = num_points, then the error in fitting a line to the points (also defined as Cost, $C$) can be defined as:
$C = \sum_{i=1}^{N} \left(y_i-(m x_i+b)\right)^2$
To perform gradient descent, we need the partial derivatives of Cost $C$ with respect to slope $m$ and intercept $b$.
$\frac{\partial C}{\partial m} = \sum_{i=1}^{N} -2\,(y_i-(m x_i+b))\,x_i$
$\frac{\partial C}{\partial b} = \sum_{i=1}^{N} -2\,(y_i-(m x_i+b))$
End of explanation |
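As a quick sanity check of my own (assuming NumPy is imported as np), the gradient-descent estimates can be compared against the closed-form least-squares fit:
m_ls, b_ls = np.polyfit(x, y, 1)   # closed-form least-squares line
print(m, b)                        # gradient-descent estimates
print(m_ls, b_ls)                  # should be close if the run converged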
325 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Strings and Stuff in Python
Step1: Strings are just arrays of characters
Step2: Arithmetic with Strings
Step3: You can compare strings
Step4: Python supports Unicode characters
You can enter unicode characters directly from the keyboard (depends on your operating system), or you can use the Unicode code point escape.
A list of Unicode code points can be found here.
For example the Unicode code point for the Greek capital omega is U+03A9, so you can create the character with \U000003A9
Step5: Emoji are unicode characters, so you can use them as well (not all OSs will show all characters!)
Step6: Emoji can not be used as variable names (at least not yet ...)
Step7: Watch out for variable types!
Step8: Use explicit formatting to avoid these errors
Python string formatting has the form
Step9: Nice trick to convert number to a different base
Step10: Formatting is way better than piecing strings together
Step12: Really long strings
Step13: You can also use the textwrap module
Step14: Working with strings
Step15: Find and Replace
Step16: Justification and Cleaning
Step17: Splitting and Joining
Step18: Line Formatting
Step19: Regular Expression in Python (re)
Step20: Raw strings begin with a special prefix (r) and signal Python not to interpret backslashes and other special metacharacters in the string, allowing you to pass them through directly to the regular expression engine.
Step21: One of the useful things about regular expressions in Python is using them to search and replace parts of a string (re.sub)
Step22: RegEx Golf!
Step23: Working with Files and Directories (OS agnostic)
The os package allows you to do operating system stuff without worrying about what system you are using
Step24: You can also find files with glob | Python Code:
import numpy as np
Explanation: Strings and Stuff in Python
End of explanation
s = 'spam'
s,len(s),s[0],s[0:2]
s[::-1]
Explanation: Strings are just arrays of characters
End of explanation
s = 'spam'
e = "eggs"
s + e
s + " " + e
4 * (s + " ") + e
print(4 * (s + " ") + s + " and\n" + e) # use \n to get a newline with the print function
Explanation: Arithmetic with Strings
End of explanation
"spam" == "good"
"spam" != "good"
"spam" == "spam"
"sp" < "spam"
"spam" < "eggs"
Explanation: You can compare strings
End of explanation
print("This resistor has a value of 100 k\U000003A9")
Ω = 1e3
Ω + np.pi
Explanation: Python supports Unicode characters
You can enter unicode characters directly from the keyboard (depends on your operating system), or you can use the Unicode code point escape.
A list of Unicode code points can be found here.
For example the Unicode code point for the Greek capital omega is U+03A9, so you can create the character with \U000003A9
End of explanation
radio_active = "\U00002622"
wink = "\U0001F609"
print(radio_active + wink)
Explanation: Emoji are unicode characters, so you can use them as well (not all OSs will show all characters!)
End of explanation
☢ = 2.345
☢ ** 2
Explanation: Emoji can not be used as variable names (at least not yet ...)
End of explanation
n = 4
print("I would like " + n + " orders of spam")
print("I would like " + str(n) + " orders of spam")
Explanation: Watch out for variable types!
End of explanation
A = 42
B = 1.23456
C = 1.23456e10
D = 'Forty Two'
"I like the number {0:d}".format(A)
"I like the number {0:s}".format(D)
"The number {0:f} is fine, but not a cool as {1:d}".format(B,A)
"The number {0:.3f} is fine, but not a cool as {1:d}".format(C,A) # 3 places after decimal
"The number {0:.3e} is fine, but not a cool as {1:d}".format(C,A) # sci notation
"{0:g} and {1:g} are the same format but different results".format(B,C)
Explanation: Use explicit formatting to avoid these errors
Python string formatting has the form:
{Variable Index: Format Type} .format(Variable)
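For example (my own illustration of the pattern):
"The {0:s} costs {1:d} dollars and weighs {2:.2f} kg".format('spam', 3, 0.25)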
End of explanation
"Representation of the number {1:s} - dec: {0:d}; hex: {0:x}; oct: {0:o}; bin: {0:b}".format(A,D)
Explanation: Nice trick to convert number to a different base
End of explanation
import pandas as pd
planet_table = pd.read_csv('Planets.csv')
for idx,val in enumerate(planet_table['Name']):
a = planet_table['a'][idx]
if (a < 3.0):
Place = "Inner"
else:
Place = "Outer"
my_string = ("The planet {0:s}, at a distance of {1:.1f} AU, is in the {2:s} solar system"
.format(val,a,Place))
print(my_string)
Explanation: Formatting is way better than piecing strings together
End of explanation
long_string = (
The planets {0:s} and {1:s} are at a distance
of {2:.1f} AU and {3:.1f} AU from the Sun.
.format(planet_table['Name'][1],planet_table['Name'][4],
planet_table['a'][1],planet_table['a'][4])
)
print(long_string)
Explanation: Really long strings
End of explanation
import textwrap
lots_of_spam = (s + " ") * 100
print(lots_of_spam)
textwrap.wrap(lots_of_spam, width=70)
Explanation: You can also use the textwrap module
End of explanation
line = "My hovercraft is full of eels"
Explanation: Working with strings
End of explanation
line.replace('eels', 'wheels')
Explanation: Find and Replace
End of explanation
line.center(100)
line.ljust(100)
line.rjust(100, "*")
line2 = " My hovercraft is full of eels "
line2.strip()
line3 = "*$*$*$*$*$*$*$*$My hovercraft is full of eels*$*$*$*$"
line3.strip('*$')
line3.lstrip('*$'), line3.rstrip('*$')
Explanation: Justification and Cleaning
End of explanation
line.split()
'_*_'.join(line.split())
' '.join(line.split()[::-1])
Explanation: Splitting and Joining
End of explanation
anotherline = "mY hoVErCRaft iS fUlL oF eEELS"
anotherline.upper()
anotherline.lower()
anotherline.title()
anotherline.capitalize()
anotherline.swapcase()
Explanation: Line Formatting
End of explanation
import re
myline = "This is a test, this in only a test."
print(myline)
Explanation: Regular Expression in Python (re)
End of explanation
regex1 = r"test"
match1 = re.search(regex1, myline)
match1
myline[10:14]
match3 = re.findall(regex1, myline)
match3
Explanation: Raw strings begin with a special prefix (r) and signal Python not to interpret backslashes and other special metacharacters in the string, allowing you to pass them through directly to the regular expression engine.
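For instance (a small illustration of my own), the raw prefix keeps backslash escapes such as word boundaries intact:
print(re.search(r"\btest\b", myline) is not None)    # \b stays a word boundary
print(re.search("\\btest\\b", myline) is not None)   # equivalent without r, but noisier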
End of explanation
mynewline = re.sub(regex1, "*TEST*", myline)
mynewline
Explanation: One of the useful things about regular expressions in Python is using them to search and replace parts of a string (re.sub)
End of explanation
golf_file = open("./GOLF/golf_00").read().splitlines()
golf_file
for i in golf_file:
print(i)
def regex_test_list(mylist, myregex):
for line in mylist:
mytest = re.search(myregex, line)
if (mytest):
print(line + " YES")
else:
print(line + " NOPE")
regex = r"one"
regex_test_list(golf_file, regex)
regex = r"t|n"
regex_test_list(golf_file, regex)
Explanation: RegEx Golf!
End of explanation
import os
os.chdir("./MyData")
my_data_dir = os.listdir()
my_data_dir
for file in my_data_dir:
if file.endswith(".txt"):
print(file)
for file in my_data_dir:
if file.endswith(".txt"):
print(os.path.abspath(file))
Explanation: Working with Files and Directories (OS agnostic)
The os package allows you to do operating system stuff without worrying about what system you are using
End of explanation
import glob
my_files = glob.glob('02_*.fits')
my_files
for file in my_files:
file_size = os.stat(file).st_size
out_string = "The file {0} as a size of {1} bytes".format(file,file_size)
print(out_string)
Explanation: You can also find files with glob
End of explanation |
326 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Deploying PyTorch models on Verta
Within Verta, a "Model" can be any arbitrary function
Step1: 0.1 Verta import and setup
Step2: 1. Model Training
1.1 Load training data
Step3: 1.2 Define network
Step4: 1.3 Train/test code
Step5: 2. Register Model for deployment
Step6: 2.1 Register from the model object
If you are in the same file where you have the model object handy, use the code below to package the model
Step7: 2.2 (OR) Register a serialized version of the model using the VertaModelBase
Step8: 3. Deploy model to endpoint | Python Code:
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets
from torchvision.transforms import ToTensor, Lambda, Compose
import matplotlib.pyplot as plt
Explanation: Deploying PyTorch models on Verta
Within Verta, a "Model" can be any arbitrary function: a traditional ML model (e.g., sklearn, PyTorch, TF, etc); a function (e.g., squaring a number, making a DB function etc.); or a mixture of the above (e.g., pre-processing code, a DB call, and then a model application.) See more here.
This notebook provides an example of how to deploy a PyTorch model on Verta as a Verta Standard Model either via convenience functions (for PyTorch) or by extending VertaModelBase.
0. Imports
End of explanation
# restart your notebook if prompted on Colab
try:
import verta
except ImportError:
!pip install verta
import os
# Ensure credentials are set up, if not, use below
# os.environ['VERTA_EMAIL'] =
# os.environ['VERTA_DEV_KEY'] =
# os.environ['VERTA_HOST'] =
from verta import Client
client = Client(os.environ['VERTA_HOST'])
Explanation: 0.1 Verta import and setup
End of explanation
training_data = datasets.FashionMNIST(
root="data",
train=True,
download=True,
transform=ToTensor(),
)
# Download test data from open datasets.
test_data = datasets.FashionMNIST(
root="data",
train=False,
download=True,
transform=ToTensor(),
)
batch_size = 64
# Create data loaders.
train_dataloader = DataLoader(training_data, batch_size=batch_size)
test_dataloader = DataLoader(test_data, batch_size=batch_size)
for X, y in test_dataloader:
print("Shape of X [N, C, H, W]: ", X.shape)
print("Shape of y: ", y.shape, y.dtype)
break
Explanation: 1. Model Training
1.1 Load training data
End of explanation
# Get cpu or gpu device for training.
device = "cuda" if torch.cuda.is_available() else "cpu"
print("Using {} device".format(device))
# Define model
class NeuralNetwork(nn.Module):
def __init__(self):
super(NeuralNetwork, self).__init__()
self.flatten = nn.Flatten()
self.linear_relu_stack = nn.Sequential(
nn.Linear(28*28, 512),
nn.ReLU(),
nn.Linear(512, 512),
nn.ReLU(),
nn.Linear(512, 10),
nn.ReLU()
)
def forward(self, x):
x = self.flatten(x)
logits = self.linear_relu_stack(x)
return logits
model = NeuralNetwork().to(device)
print(model)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
Explanation: 1.2 Define network
End of explanation
def train(dataloader, model, loss_fn, optimizer):
size = len(dataloader.dataset)
for batch, (X, y) in enumerate(dataloader):
X, y = X.to(device), y.to(device)
# Compute prediction error
pred = model(X)
loss = loss_fn(pred, y)
# Backpropagation
optimizer.zero_grad()
loss.backward()
optimizer.step()
if batch % 100 == 0:
loss, current = loss.item(), batch * len(X)
print(f"loss: {loss:>7f} [{current:>5d}/{size:>5d}]")
def test(dataloader, model, loss_fn):
size = len(dataloader.dataset)
num_batches = len(dataloader)
model.eval()
test_loss, correct = 0, 0
with torch.no_grad():
for X, y in dataloader:
X, y = X.to(device), y.to(device)
pred = model(X)
test_loss += loss_fn(pred, y).item()
correct += (pred.argmax(1) == y).type(torch.float).sum().item()
test_loss /= num_batches
correct /= size
print(f"Test Error: \n Accuracy: {(100*correct):>0.1f}%, Avg loss: {test_loss:>8f} \n")
epochs = 5
for t in range(epochs):
print(f"Epoch {t+1}\n-------------------------------")
train(train_dataloader, model, loss_fn, optimizer)
test(test_dataloader, model, loss_fn)
print("Done!")
torch.save(model.state_dict(), "model.pth")
print("Saved PyTorch Model State to model.pth")
model = NeuralNetwork()
model.load_state_dict(torch.load("model.pth"))
classes = [
"T-shirt/top",
"Trouser",
"Pullover",
"Dress",
"Coat",
"Sandal",
"Shirt",
"Sneaker",
"Bag",
"Ankle boot",
]
model.eval()
x, y = test_data[0][0], test_data[0][1]
with torch.no_grad():
pred = model(x)
predicted, actual = classes[pred[0].argmax(0)], classes[y]
print(f'Predicted: "{predicted}", Actual: "{actual}"')
Explanation: 1.3 Train/test code
End of explanation
registered_model = client.get_or_create_registered_model(
name="fashion-mnist", labels=["computer-vision", "pytorch"])
Explanation: 2. Register Model for deployment
End of explanation
from verta.environment import Python
model_version = registered_model.create_standard_model_from_torch(
model,
environment=Python(requirements=["torch", "torchvision"]),
name="v1",
)
Explanation: 2.1 Register from the model object
If you are in the same file where you have the model object handy, use the code below to package the model
End of explanation
from verta.registry import VertaModelBase
class FashionMNISTClassifier(VertaModelBase):
def __init__(self, artifacts):
self.model = NeuralNetwork()
self.model.load_state_dict(torch.load(artifacts["model.pth"]))
def predict(self, batch_input):
results = []
for one_input in batch_input:
with torch.no_grad():
pred = self.model(one_input)
results.append(pred)
return results
model_version = registered_model.create_standard_model(
model_cls=FashionMNISTClassifier,
environment=Python(requirements=["torch", "torchvision"]),
artifacts={"model.pth" : "model.pth"},
name="v2"
)
Explanation: 2.2 (OR) Register a serialized version of the model using the VertaModelBase
End of explanation
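Before updating an endpoint it can be useful to sanity-check the wrapper class locally. The snippet below is a hedged sketch (not from the original notebook): it instantiates FashionMNISTClassifier directly with the local artifact path and runs a single prediction; the variable names are illustrative.
clf = FashionMNISTClassifier({"model.pth": "model.pth"})
sample_pred = clf.predict([test_data[0][0]])[0]  # tensor of shape (1, 10)
print("locally predicted class:", classes[int(sample_pred.argmax())])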
fashion_mnist_endpoint = client.get_or_create_endpoint("fashion-mnist")
fashion_mnist_endpoint.update(model_version, wait=True)
deployed_model = fashion_mnist_endpoint.get_deployed_model()
deployed_model.predict([test_data[0][0]])
Explanation: 3. Deploy model to endpoint
End of explanation |
327 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img src="static/pybofractal.png" alt="Pybonacci" style="width: 200px;"/>
Step1: Set Definitions
Sets are created as attributes of the main model object and all the information is given as parameters in the constructor function. Specifically, we are passing to the constructor the initial elements of the set and a documentation string to keep track of what our set represents
Step2: Parameters
Parameter objects are created specifying the sets over which they are defined and are initialised with either a python dictionary or a scalar
Step3: A third, powerful way to initialize a parameter is using a user-defined function.
This function will be automatically called by pyomo with any possible (i,j) set. In this case pyomo will actually call c_init() six times in order to initialize the model.c parameter.
Step4: Variables
Similar to parameters, variables are created specifying their domain(s). For variables we can also specify the upper/lower bounds in the constructor.
Differently from GAMS, we don't need to define the variable that is on the left hand side of the objective function.
Step5: Constraints
At this point, it should not be a surprise that constraints are again defined as model objects with the required information passed as parameters in the constructor function.
Step6: The above code takes advantage of list comprehensions, a powerful feature of the Python language that provides a concise way to loop over a list. If we take the supply_rule as an example, this is actually called two times by pyomo (once for each of the elements of i). Without list comprehensions we would have had to write our function using a for loop, like
Step7: Using list comprehension is however quicker to code and more readable.
Objective and Solving
The definition of the objective is similar to that of the constraints, except that most solvers require a scalar objective function, hence a unique function, and we can specify the sense (direction) of the optimisation.
Step8: As we are here looping over two distinct sets, we can see how list comprehension really simplifies the code. The objective function could have been written without list comprehension as
Step9: Retrieving the Output
We use the pyomo_postprocess() function to retrieve the output and do something with it. For example, we could display solution values (see below), plot a graph with matplotlib or save it in a csv file.
This function is called by pyomo after the solver has finished.
Step10: We can print model structure information with model.pprint() (“pprint” stands for “pretty print”).
Results are also by default saved in a results.json file or, if PyYAML is installed in the system, in results.yml.
Editing and Running the Script
Differently from GAMS, you can use whatever editor environment you wish to code a pyomo script. If you don't need debugging features, a simple text editor like Notepad++ (in Windows), gedit or kate (in Linux) will suffice. They already have syntax highlighting for Python.
If you want advanced features and debugging capabilities you can use a dedicated Python IDE, like e.g. Spyder.
You will normally run the script as pyomo solve --solver=glpk transport.py. You can stream solver-specific output by adding the option --stream-output. If you want to run the script as python transport.py add the following lines at the end
Step11: Finally, if you are very lazy and want to run the script with just ./transport.py (and you are in Linux) add the following lines at the top
Step12: Complete script
Here is the complete script
Step13: Solutions
Running the model leads to the following output
Step14: By default, the optimization results are stored in the file results.json | Python Code:
# Import of the pyomo module
from pyomo.environ import *
# Creation of a Concrete Model
model = ConcreteModel()
Explanation: <img src="static/pybofractal.png" alt="Pybonacci" style="width: 200px;"/>
<img src="static/cacheme_logo.png" alt="CAChemE" style="width: 300px;"/>
The Transport Problem
Note: Adapted from https://github.com/Pyomo/PyomoGallery, see LICENSE.BSD
Summary
The goal of the Transport Problem is to select the quantities of an homogeneous good that has several production plants and several punctiform markets as to minimise the transportation costs.
It is the default tutorial for the GAMS language, and GAMS equivalent code is inserted as single-dash comments. The original GAMS code needs slighly different ordering of the commands and it's available at http://www.gams.com/mccarl/trnsport.gms.
Problem Statement
The Transport Problem can be formulated mathematically as a linear programming problem using the following model.
Sets
$I$ = set of canning plants
$J$ = set of markets
Parameters
$a_i$ = capacity of plant $i$ in cases, $\forall i \in I$ <br />
$b_j$ = demand at market $j$ in cases, $\forall j \in J$ <br />
$d_{i,j}$ = distance in thousands of miles, $\forall i \in I, \forall j \in J$ <br />
$f$ = freight in dollars per case per thousand miles <br />
$c_{i,j}$ = transport cost in thousands of dollars per case
$c_{i,j}$ is obtained exougenously to the optimisation problem as $c_{i,j} = f \cdot d_{i,j}$, $\forall i \in I, \forall j \in J$
Variables
$x_{i,j}$ = shipment quantities in cases <br />
z = total transportation costs in thousands of dollars
Objective
Minimize the total cost of the shipments: <br />
$\min_{x} z = \sum_{i \in I} \sum_{j \in J} c_{i,j} x_{i,j}$
Constraints
Observe supply limit at plant i: <br />
$\sum_{i \in I} x_{i,j} \leq a_{i}$, $\forall i \in I$
Satisfy demand at market j: <br />
$\sum_{j \in J} x_{i,j} \geq b_{j}$, $\forall j \in J$
Non-negative transportation quantities <br />
$x_{i,j} \geq 0$, $\forall i \in I, \forall j \in J$
Pyomo Formulation
Creation of the Model
In pyomo everything is an object. The various components of the model (sets, parameters, variables, constraints, objective..) are all attributes of the main model object while being objects themselves.
There are two type of models in pyomo: A ConcreteModel is one where all the data is defined at the model creation. We are going to use this type of model in this tutorial. Pyomo however supports also an AbstractModel, where the model structure is firstly generated and then particular instances of the model are generated with a particular set of data.
The first thing to do in the script is to load the pyomo library and create a new ConcreteModel object. We have little imagination here, and we call our model "model". You can give it whatever name you want. However, if you give your model an other name, you also need to create a model object at the end of your script:
End of explanation
## Define sets ##
# Sets
# i canning plants / seattle, san-diego /
# j markets / new-york, chicago, topeka / ;
model.i = Set(initialize=['seattle','san-diego'], doc='Canning plans')
model.j = Set(initialize=['new-york','chicago', 'topeka'], doc='Markets')
Explanation: Set Definitions
Sets are created as attributes of the main model object and all the information is given as parameters in the constructor function. Specifically, we are passing to the constructor the initial elements of the set and a documentation string to keep track of what our set represents:
End of explanation
## Define parameters ##
# Parameters
# a(i) capacity of plant i in cases
# / seattle 350
# san-diego 600 /
# b(j) demand at market j in cases
# / new-york 325
# chicago 300
# topeka 275 / ;
model.a = Param(model.i, initialize={'seattle':350,'san-diego':600}, doc='Capacity of plant i in cases')
model.b = Param(model.j, initialize={'new-york':325,'chicago':300,'topeka':275}, doc='Demand at market j in cases')
# Table d(i,j) distance in thousands of miles
# new-york chicago topeka
# seattle 2.5 1.7 1.8
# san-diego 2.5 1.8 1.4 ;
dtab = {
('seattle', 'new-york') : 2.5,
('seattle', 'chicago') : 1.7,
('seattle', 'topeka') : 1.8,
('san-diego','new-york') : 2.5,
('san-diego','chicago') : 1.8,
('san-diego','topeka') : 1.4,
}
model.d = Param(model.i, model.j, initialize=dtab, doc='Distance in thousands of miles')
# Scalar f freight in dollars per case per thousand miles /90/ ;
model.f = Param(initialize=90, doc='Freight in dollars per case per thousand miles')
Explanation: Parameters
Parameter objects are created specifying the sets over which they are defined and are initialised with either a python dictionary or a scalar:
End of explanation
# Parameter c(i,j) transport cost in thousands of dollars per case ;
# c(i,j) = f * d(i,j) / 1000 ;
def c_init(model, i, j):
return model.f * model.d[i,j] / 1000
model.c = Param(model.i, model.j, initialize=c_init, doc='Transport cost in thousands of dollar per case')
Explanation: A third, powerful way to initialize a parameter is using a user-defined function.
This function will be automatically called by pyomo with any possible (i,j) set. In this case pyomo will actually call c_init() six times in order to initialize the model.c parameter.
End of explanation
## Define variables ##
# Variables
# x(i,j) shipment quantities in cases
# z total transportation costs in thousands of dollars ;
# Positive Variable x ;
model.x = Var(model.i, model.j, bounds=(0.0,None), doc='Shipment quantities in case')
Explanation: Variables
Similar to parameters, variables are created specifying their domain(s). For variables we can also specify the upper/lower bounds in the constructor.
Differently from GAMS, we don't need to define the variable that is on the left hand side of the objective function.
End of explanation
## Define contrains ##
# supply(i) observe supply limit at plant i
# supply(i) .. sum (j, x(i,j)) =l= a(i)
def supply_rule(model, i):
return sum(model.x[i,j] for j in model.j) <= model.a[i]
model.supply = Constraint(model.i, rule=supply_rule, doc='Observe supply limit at plant i')
# demand(j) satisfy demand at market j ;
# demand(j) .. sum(i, x(i,j)) =g= b(j);
def demand_rule(model, j):
return sum(model.x[i,j] for i in model.i) >= model.b[j]
model.demand = Constraint(model.j, rule=demand_rule, doc='Satisfy demand at market j')
Explanation: Constraints
At this point, it should not be a surprise that constraints are again defined as model objects with the required information passed as parameters in the constructor function.
End of explanation
def supply_rule(model, i):
supply = 0.0
for j in model.j:
supply += model.x[i,j]
return supply <= model.a[i]
Explanation: The above code takes advantage of list comprehensions, a powerful feature of the Python language that provides a concise way to loop over a list. If we take the supply_rule as an example, this is actually called two times by pyomo (once for each of the elements of i). Without list comprehensions we would have had to write our function using a for loop, like:
End of explanation
## Define Objective and solve ##
# cost define objective function
# cost .. z =e= sum((i,j), c(i,j)*x(i,j)) ;
# Model transport /all/ ;
# Solve transport using lp minimizing z ;
#
# itertools.product() returns the Cartesian product of two or more iterables
import itertools
def objective_rule(model):
return sum(model.c[i,j]*model.x[i,j] for i, j in itertools.product(model.i, model.j))
model.objective = Objective(rule=objective_rule, sense=minimize, doc='Define objective function')
Explanation: Using list comprehension is however quicker to code and more readable.
Objective and Solving
The definition of the objective is similar to that of the constraints, except that most solvers require a scalar objective function, hence a unique function, and we can specify the sense (direction) of the optimisation.
End of explanation
def objective_rule(model):
obj = 0.0
for ki in model.i:
for kj in model.j:
obj += model.c[ki,kj]*model.x[ki,kj]
return obj
Explanation: As we are here looping over two distinct sets, we can see how list comprehension really simplifies the code. The objective function could have been written without list comprehension as:
End of explanation
## Display of the output ##
# Display x.l, x.m ;
def pyomo_postprocess(options=None, instance=None, results=None):
model.x.display()
Explanation: Retrieving the Output
We use the pyomo_postprocess() function to retrieve the output and do something with it. For example, we could display solution values (see below), plot a graph with matplotlib or save it in a csv file.
This function is called by pyomo after the solver has finished.
End of explanation
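For instance, a hedged variant (not part of the original script) of such a post-processing step could write the shipment quantities to a CSV file instead of displaying them; the helper name pyomo_save_csv and the file name are arbitrary:
import csv
def pyomo_save_csv(model, path='shipments.csv'):
    # write one row per (plant, market) pair with the optimal shipment quantity
    with open(path, 'w', newline='') as f:
        writer = csv.writer(f)
        writer.writerow(['plant', 'market', 'cases'])
        for i in model.i:
            for j in model.j:
                writer.writerow([i, j, model.x[i, j].value])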
# This emulates what the pyomo command-line tools does
from pyomo.opt import SolverFactory
import pyomo.environ
opt = SolverFactory("glpk")
results = opt.solve(model)
# sends results to stdout
results.write()
print("\nDisplaying Solution\n" + '-'*60)
pyomo_postprocess(None, None, results)
Explanation: We can print model structure information with model.pprint() (“pprint” stands for “pretty print”).
Results are also by default saved in a results.json file or, if PyYAML is installed in the system, in results.yml.
Editing and Running the Script
Differently from GAMS, you can use whatever editor environment you wish to code a pyomo script. If you don't need debugging features, a simple text editor like Notepad++ (in Windows), gedit or kate (in Linux) will suffice. They already have syntax highlighting for Python.
If you want advanced features and debugging capabilities you can use a dedicated Python IDE, like e.g. Spyder.
You will normally run the script as pyomo solve --solver=glpk transport.py. You can stream solver-specific output by adding the option --stream-output. If you want to run the script as python transport.py add the following lines at the end:
End of explanation
#!/usr/bin/env python
# -*- coding: utf-8 -*-
Explanation: Finally, if you are very lazy and want to run the script with just ./transport.py (and you are in Linux) add the following lines at the top:
End of explanation
!cat transport.py
Explanation: Complete script
Here is the complete script:
End of explanation
!pyomo solve --solver=glpk transport.py
Explanation: Solutions
Running the model lead to the following output:
End of explanation
!cat results.json
Explanation: By default, the optimization results are stored in the file results.json:
End of explanation |
328 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Overview of MEG/EEG analysis with MNE-Python
This tutorial covers the basic EEG/MEG pipeline for event-related analysis
Step1: Loading data
MNE-Python data structures are based around the FIF file format from
Neuromag, but there are reader functions for a wide variety of other
data formats <data-formats>. MNE-Python also has interfaces to a
variety of publicly available datasets <datasets>,
which MNE-Python can download and manage for you.
We'll start this tutorial by loading one of the example datasets (called
"sample-dataset"), which contains EEG and MEG data from one subject
performing an audiovisual experiment, along with structural MRI scans for
that subject. The
Step2: By default,
Step3:
Step4: Preprocessing
MNE-Python supports a variety of preprocessing approaches and techniques
(maxwell filtering, signal-space projection, independent components analysis,
filtering, downsampling, etc); see the full list of capabilities in the
Step5: Once we're confident about which component(s) we want to remove, we pass them
as the exclude parameter and then apply the ICA to the raw signal. The
Step6: Detecting experimental events
The sample dataset includes several
Step7: The resulting events array is an ordinary 3-column
Step8: Event dictionaries like this one are used when extracting epochs from
continuous data; the / character in the dictionary keys allows pooling
across conditions by requesting partial condition descriptors (i.e.,
requesting 'auditory' will select all epochs with Event IDs 1 and 2;
requesting 'left' will select all epochs with Event IDs 1 and 3). An
example of this is shown in the next section. There is also a convenient
Step9: For paradigms that are not event-related (e.g., analysis of resting-state
data), you can extract regularly spaced (possibly overlapping) spans of data
by creating events using
Step10: We'll also pass the event dictionary as the event_id parameter (so we can
work with easy-to-pool event labels instead of the integer event IDs), and
specify tmin and tmax (the time relative to each event at which to
start and end each epoch). As mentioned above, by default
Step11: Next we'll pool across left/right stimulus presentations so we can compare
auditory versus visual responses. To avoid biasing our signals to the
left or right, we'll use
Step12: Like
Step13: <div class="alert alert-info"><h4>Note</h4><p>Both
Step14: Estimating evoked responses
Now that we have our conditions in aud_epochs and vis_epochs, we can
get an estimate of evoked responses to auditory versus visual stimuli by
averaging together the epochs in each condition. This is as simple as calling
the
Step15: We can also get a more detailed view of each
Step16: Evoked objects can also be combined to show contrasts between conditions,
using the mne.combine_evoked function. A simple difference can be
generated by passing weights=[1, -1]. We'll then plot the difference wave
at each sensor using ~mne.Evoked.plot_topo
Step17: Inverse modeling
Finally, we can estimate the origins of the evoked activity by projecting the
sensor data into this subject's
Step18: Finally, in order to plot the source estimate on the subject's cortical
surface we'll also need the path to the sample subject's structural MRI files
(the subjects_dir) | Python Code:
import os
import numpy as np
import mne
Explanation: Overview of MEG/EEG analysis with MNE-Python
This tutorial covers the basic EEG/MEG pipeline for event-related analysis:
loading data, epoching, averaging, plotting, and estimating cortical activity
from sensor data. It introduces the core MNE-Python data structures
:class:~mne.io.Raw, :class:~mne.Epochs, :class:~mne.Evoked, and
:class:~mne.SourceEstimate, and covers a lot of ground fairly quickly (at the
expense of depth). Subsequent tutorials address each of these topics in greater
detail.
:depth: 1
We begin by importing the necessary Python modules:
End of explanation
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_filt-0-40_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file)
Explanation: Loading data
MNE-Python data structures are based around the FIF file format from
Neuromag, but there are reader functions for a wide variety of other
data formats <data-formats>. MNE-Python also has interfaces to a
variety of publicly available datasets <datasets>,
which MNE-Python can download and manage for you.
We'll start this tutorial by loading one of the example datasets (called
"sample-dataset"), which contains EEG and MEG data from one subject
performing an audiovisual experiment, along with structural MRI scans for
that subject. The :func:mne.datasets.sample.data_path function will
automatically download the dataset if it isn't found in one of the expected
locations, then return the directory path to the dataset (see the
documentation of :func:~mne.datasets.sample.data_path for a list of places
it checks before downloading). Note also that for this tutorial to run
smoothly on our servers, we're using a filtered and downsampled version of
the data (:file:sample_audvis_filt-0-40_raw.fif), but an unfiltered version
(:file:sample_audvis_raw.fif) is also included in the sample dataset and
could be substituted here when running the tutorial locally.
End of explanation
print(raw)
print(raw.info)
Explanation: By default, :func:~mne.io.read_raw_fif displays some information about the
file it's loading; for example, here it tells us that there are four
"projection items" in the file along with the recorded data; those are
:term:SSP projectors <projector> calculated to remove environmental noise
from the MEG signals, plus a projector to mean-reference the EEG channels;
these are discussed in the tutorial tut-projectors-background.
In addition to the information displayed during loading,
you can get a glimpse of the basic details of a :class:~mne.io.Raw object
by printing it; even more is available by printing its info attribute
(a :class:dictionary-like object <mne.Info> that is preserved across
:class:~mne.io.Raw, :class:~mne.Epochs, and :class:~mne.Evoked
objects). The info data structure keeps track of channel locations,
applied filters, projectors, etc. Notice especially the chs entry,
showing that MNE-Python detects different sensor types and handles each
appropriately. See tut-info-class for more on the :class:~mne.Info
class.
End of explanation
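As a small hedged aside (not part of the original tutorial), a few of the Info entries mentioned above can be inspected directly:
print(raw.info['sfreq'], raw.info['nchan'])  # sampling rate and number of channels
print(raw.info['projs'])                     # the SSP projectors read from the file
print(raw.info['chs'][0]['ch_name'])         # per-channel details live under 'chs'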
raw.plot_psd(fmax=50)
raw.plot(duration=5, n_channels=30)
Explanation: :class:~mne.io.Raw objects also have several built-in plotting methods;
here we show the power spectral density (PSD) for each sensor type with
:meth:~mne.io.Raw.plot_psd, as well as a plot of the raw sensor traces with
:meth:~mne.io.Raw.plot. In the PSD plot, we'll only plot frequencies below
50 Hz (since our data are low-pass filtered at 40 Hz). In interactive Python
sessions, :meth:~mne.io.Raw.plot is interactive and allows scrolling,
scaling, bad channel marking, annotation, projector toggling, etc.
End of explanation
# set up and fit the ICA
ica = mne.preprocessing.ICA(n_components=20, random_state=97, max_iter=800)
ica.fit(raw)
ica.exclude = [1, 2] # details on how we picked these are omitted here
ica.plot_properties(raw, picks=ica.exclude)
Explanation: Preprocessing
MNE-Python supports a variety of preprocessing approaches and techniques
(maxwell filtering, signal-space projection, independent components analysis,
filtering, downsampling, etc); see the full list of capabilities in the
:mod:mne.preprocessing and :mod:mne.filter submodules. Here we'll clean
up our data by performing independent components analysis
(:class:~mne.preprocessing.ICA); for brevity we'll skip the steps that
helped us determined which components best capture the artifacts (see
tut-artifact-ica for a detailed walk-through of that process).
End of explanation
orig_raw = raw.copy()
raw.load_data()
ica.apply(raw)
# show some frontal channels to clearly illustrate the artifact removal
chs = ['MEG 0111', 'MEG 0121', 'MEG 0131', 'MEG 0211', 'MEG 0221', 'MEG 0231',
'MEG 0311', 'MEG 0321', 'MEG 0331', 'MEG 1511', 'MEG 1521', 'MEG 1531',
'EEG 001', 'EEG 002', 'EEG 003', 'EEG 004', 'EEG 005', 'EEG 006',
'EEG 007', 'EEG 008']
chan_idxs = [raw.ch_names.index(ch) for ch in chs]
orig_raw.plot(order=chan_idxs, start=12, duration=4)
raw.plot(order=chan_idxs, start=12, duration=4)
Explanation: Once we're confident about which component(s) we want to remove, we pass them
as the exclude parameter and then apply the ICA to the raw signal. The
:meth:~mne.preprocessing.ICA.apply method requires the raw data to be
loaded into memory (by default it's only read from disk as-needed), so we'll
use :meth:~mne.io.Raw.load_data first. We'll also make a copy of the
:class:~mne.io.Raw object so we can compare the signal before and after
artifact removal side-by-side:
End of explanation
events = mne.find_events(raw, stim_channel='STI 014')
print(events[:5]) # show the first 5
Explanation: Detecting experimental events
The sample dataset includes several :term:"STIM" channels <stim channel>
that recorded electrical
signals sent from the stimulus delivery computer (as brief DC shifts /
squarewave pulses). These pulses (often called "triggers") are used in this
dataset to mark experimental events: stimulus onset, stimulus type, and
participant response (button press). The individual STIM channels are
combined onto a single channel, in such a way that voltage
levels on that channel can be unambiguously decoded as a particular event
type. On older Neuromag systems (such as that used to record the sample data)
this summation channel was called STI 014, so we can pass that channel
name to the :func:mne.find_events function to recover the timing and
identity of the stimulus events.
End of explanation
event_dict = {'auditory/left': 1, 'auditory/right': 2, 'visual/left': 3,
'visual/right': 4, 'smiley': 5, 'buttonpress': 32}
Explanation: The resulting events array is an ordinary 3-column :class:NumPy array
<numpy.ndarray>, with sample number in the first column and integer event ID
in the last column; the middle column is usually ignored. Rather than keeping
track of integer event IDs, we can provide an event dictionary that maps
the integer IDs to experimental conditions or events. In this dataset, the
mapping looks like this:
+----------+----------------------------------------------------------+
| Event ID | Condition |
+==========+==========================================================+
| 1 | auditory stimulus (tone) to the left ear |
+----------+----------------------------------------------------------+
| 2 | auditory stimulus (tone) to the right ear |
+----------+----------------------------------------------------------+
| 3 | visual stimulus (checkerboard) to the left visual field |
+----------+----------------------------------------------------------+
| 4 | visual stimulus (checkerboard) to the right visual field |
+----------+----------------------------------------------------------+
| 5 | smiley face (catch trial) |
+----------+----------------------------------------------------------+
| 32 | subject button press |
+----------+----------------------------------------------------------+
End of explanation
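As a hedged aside (not in the original tutorial), the mapping can also be applied directly to the events array, for example to count the occurrences of one condition:
aud_left_events = events[events[:, 2] == event_dict['auditory/left']]
print(len(aud_left_events), 'auditory/left events found')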
fig = mne.viz.plot_events(events, event_id=event_dict, sfreq=raw.info['sfreq'],
first_samp=raw.first_samp)
Explanation: Event dictionaries like this one are used when extracting epochs from
continuous data; the / character in the dictionary keys allows pooling
across conditions by requesting partial condition descriptors (i.e.,
requesting 'auditory' will select all epochs with Event IDs 1 and 2;
requesting 'left' will select all epochs with Event IDs 1 and 3). An
example of this is shown in the next section. There is also a convenient
:func:~mne.viz.plot_events function for visualizing the distribution of
events across the duration of the recording (to make sure event detection
worked as expected). Here we'll also make use of the :class:~mne.Info
attribute to get the sampling frequency of the recording (so our x-axis will
be in seconds instead of in samples).
End of explanation
reject_criteria = dict(mag=4000e-15, # 4000 fT
grad=4000e-13, # 4000 fT/cm
eeg=150e-6, # 150 µV
eog=250e-6) # 250 µV
Explanation: For paradigms that are not event-related (e.g., analysis of resting-state
data), you can extract regularly spaced (possibly overlapping) spans of data
by creating events using :func:mne.make_fixed_length_events and then
proceeding with epoching as described in the next section.
Epoching continuous data
The :class:~mne.io.Raw object and the events array are the bare minimum
needed to create an :class:~mne.Epochs object, which we create with the
:class:~mne.Epochs class constructor. Here we'll also specify some data
quality constraints: we'll reject any epoch where peak-to-peak signal
amplitude is beyond reasonable limits for that channel type. This is done
with a rejection dictionary; you may include or omit thresholds for any of
the channel types present in your data. The values given here are reasonable
for this particular dataset, but may need to be adapted for different
hardware or recording conditions. For a more automated approach, consider
using the autoreject package_.
End of explanation
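A minimal hedged sketch (not part of this tutorial's analysis) of that fixed-length approach, assuming 1-second spans; the variable names are illustrative:
fixed_events = mne.make_fixed_length_events(raw, duration=1.)  # recent MNE versions also accept overlap=...
fixed_epochs = mne.Epochs(raw, fixed_events, tmin=0., tmax=1., baseline=None)
print(fixed_epochs)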
epochs = mne.Epochs(raw, events, event_id=event_dict, tmin=-0.2, tmax=0.5,
reject=reject_criteria, preload=True)
Explanation: We'll also pass the event dictionary as the event_id parameter (so we can
work with easy-to-pool event labels instead of the integer event IDs), and
specify tmin and tmax (the time relative to each event at which to
start and end each epoch). As mentioned above, by default
:class:~mne.io.Raw and :class:~mne.Epochs data aren't loaded into memory
(they're accessed from disk only when needed), but here we'll force loading
into memory using the preload=True parameter so that we can see the
results of the rejection criteria being applied:
End of explanation
conds_we_care_about = ['auditory/left', 'auditory/right',
'visual/left', 'visual/right']
epochs.equalize_event_counts(conds_we_care_about) # this operates in-place
aud_epochs = epochs['auditory']
vis_epochs = epochs['visual']
del raw, epochs # free up memory
Explanation: Next we'll pool across left/right stimulus presentations so we can compare
auditory versus visual responses. To avoid biasing our signals to the
left or right, we'll use :meth:~mne.Epochs.equalize_event_counts first to
randomly sample epochs from each condition to match the number of epochs
present in the condition with the fewest good epochs.
End of explanation
aud_epochs.plot_image(picks=['MEG 1332', 'EEG 021'])
Explanation: Like :class:~mne.io.Raw objects, :class:~mne.Epochs objects also have a
number of built-in plotting methods. One is :meth:~mne.Epochs.plot_image,
which shows each epoch as one row of an image map, with color representing
signal magnitude; the average evoked response and the sensor location are
shown below the image:
End of explanation
frequencies = np.arange(7, 30, 3)
power = mne.time_frequency.tfr_morlet(aud_epochs, n_cycles=2, return_itc=False,
freqs=frequencies, decim=3)
power.plot(['MEG 1332'])
Explanation: <div class="alert alert-info"><h4>Note</h4><p>Both :class:`~mne.io.Raw` and :class:`~mne.Epochs` objects have
:meth:`~mne.Epochs.get_data` methods that return the underlying data
as a :class:`NumPy array <numpy.ndarray>`. Both methods have a ``picks``
parameter for subselecting which channel(s) to return; ``raw.get_data()``
has additional parameters for restricting the time domain. The resulting
matrices have dimension ``(n_channels, n_times)`` for
:class:`~mne.io.Raw` and ``(n_epochs, n_channels, n_times)`` for
:class:`~mne.Epochs`.</p></div>
Time-frequency analysis
The :mod:mne.time_frequency submodule provides implementations of several
algorithms to compute time-frequency representations, power spectral density,
and cross-spectral density. Here, for example, we'll compute for the auditory
epochs the induced power at different frequencies and times, using Morlet
wavelets. On this dataset the result is not especially informative (it just
shows the evoked "auditory N100" response); see here
<inter-trial-coherence> for a more extended example on a dataset with richer
frequency content.
End of explanation
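A hedged illustration (not in the original tutorial) of the get_data() calls described in the note above, using the objects still in memory at this point:
epochs_array = aud_epochs.get_data(picks='eeg')                  # (n_epochs, n_channels, n_times)
raw_array = orig_raw.get_data(picks='eeg', start=0, stop=1000)   # (n_channels, n_times)
print(epochs_array.shape, raw_array.shape)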
aud_evoked = aud_epochs.average()
vis_evoked = vis_epochs.average()
mne.viz.plot_compare_evokeds(dict(auditory=aud_evoked, visual=vis_evoked),
legend='upper left', show_sensors='upper right')
Explanation: Estimating evoked responses
Now that we have our conditions in aud_epochs and vis_epochs, we can
get an estimate of evoked responses to auditory versus visual stimuli by
averaging together the epochs in each condition. This is as simple as calling
the :meth:~mne.Epochs.average method on the :class:~mne.Epochs object,
and then using a function from the :mod:mne.viz module to compare the
global field power for each sensor type of the two :class:~mne.Evoked
objects:
End of explanation
aud_evoked.plot_joint(picks='eeg')
aud_evoked.plot_topomap(times=[0., 0.08, 0.1, 0.12, 0.2], ch_type='eeg')
Explanation: We can also get a more detailed view of each :class:~mne.Evoked object
using other plotting methods such as :meth:~mne.Evoked.plot_joint or
:meth:~mne.Evoked.plot_topomap. Here we'll examine just the EEG channels,
and see the classic auditory evoked N100-P200 pattern over dorso-frontal
electrodes, then plot scalp topographies at some additional arbitrary times:
End of explanation
evoked_diff = mne.combine_evoked([aud_evoked, vis_evoked], weights=[1, -1])
evoked_diff.pick_types(meg='mag').plot_topo(color='r', legend=False)
Explanation: Evoked objects can also be combined to show contrasts between conditions,
using the mne.combine_evoked function. A simple difference can be
generated by passing weights=[1, -1]. We'll then plot the difference wave
at each sensor using ~mne.Evoked.plot_topo:
End of explanation
# load inverse operator
inverse_operator_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis-meg-oct-6-meg-inv.fif')
inv_operator = mne.minimum_norm.read_inverse_operator(inverse_operator_file)
# set signal-to-noise ratio (SNR) to compute regularization parameter (λ²)
snr = 3.
lambda2 = 1. / snr ** 2
# generate the source time course (STC)
stc = mne.minimum_norm.apply_inverse(vis_evoked, inv_operator,
lambda2=lambda2,
method='MNE') # or dSPM, sLORETA, eLORETA
Explanation: Inverse modeling
Finally, we can estimate the origins of the evoked activity by projecting the
sensor data into this subject's :term:source space (a set of points either
on the cortical surface or within the cortical volume of that subject, as
estimated by structural MRI scans). MNE-Python supports lots of ways of doing
this (dynamic statistical parametric mapping, dipole fitting, beamformers,
etc.); here we'll use minimum-norm estimation (MNE) to generate a continuous
map of activation constrained to the cortical surface. MNE uses a linear
:term:inverse operator to project EEG+MEG sensor measurements into the
source space. The inverse operator is computed from the
:term:forward solution for this subject and an estimate of the
covariance of sensor measurements <tut_compute_covariance>. For this
tutorial we'll skip those computational steps and load a pre-computed inverse
operator from disk (it's included with the sample data
<sample-dataset>). Because this "inverse problem" is underdetermined (there
is no unique solution), here we further constrain the solution by providing a
regularization parameter specifying the relative smoothness of the current
estimates in terms of a signal-to-noise ratio (where "noise" here is akin to
baseline activity level across all of cortex).
End of explanation
# path to subjects' MRI files
subjects_dir = os.path.join(sample_data_folder, 'subjects')
# plot
stc.plot(initial_time=0.1, hemi='split', views=['lat', 'med'],
subjects_dir=subjects_dir)
Explanation: Finally, in order to plot the source estimate on the subject's cortical
surface we'll also need the path to the sample subject's structural MRI files
(the subjects_dir):
End of explanation |
329 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Reshaping data with stack and unstack
Pivoting
Data is often stored in CSV files or databases in so-called “stacked” or “record” format
Step1: A better representation might be one where the different subjects are in rows, the applied treatments are in columns and outcomes are in the data frame values.
<img src="img/stack.png" width=70%>
You can achieve this with the pivot function
Step2: If there is more than one record for each pair of "subject" and "treatment" (for example, the subject was tested twice with the same treatment at different times) you can use pivot_table. It works just like pivot but it additionally allows you to specify an aggregation function ('mean' by default).
To take another example, we will use some data from expeditions to the Pole of Inaccessibility. We will read the data from SQL database.
Step3: <div class="alert alert-success">
<b>EXERCISE</b>
Step4: Note how the two indexes are nested
Step5: Note that it creates a standard data frame with "flat" index.
Step6: Indexing on the second index only may be slightly involved
Step7: Consult the documentation for other methods.
To return to the original format with columns instead of indexes use reset_index
Step8: <div class="alert alert-success">
<b>EXERCISE</b>
Step9: stack reverses the operation
Step10: We can "stack" it even further
Step11: <div class="alert alert-success">
<b>EXERCISE</b>
Step12: Just reading the tab-delimited data
Step13: The above data is clearly not ready to be used! Each row contains the 24 measurements for each hour of the day, and also contains a flag (0/1) indicating the quality of the data.
Let's replace the negative numbers by missing values and give the columns proper names.
Step14: For now, we disregard the 'flag' columns | Python Code:
df = pd.DataFrame({'subject':['A', 'A', 'B', 'B'],
'treatment':['CH', 'DT', 'CH', 'DT'],
'concentration':range(4)},
columns=['subject', 'treatment', 'concentration'])
df
Explanation: Reshaping data with stack and unstack
Pivoting
Data is often stored in CSV files or databases in so-called “stacked” or “record” format:
End of explanation
pivoted = df.pivot(index='subject', columns='treatment', values='concentration')
pivoted
Explanation: A better representation might be one where the different subjects are in rows, the applied treatments are in columns and outcomes are in the data frame values.
<img src="img/stack.png" width=70%>
You can achieve this with the pivot function:
End of explanation
from sqlalchemy import create_engine
engine = create_engine('sqlite:///data/survey.db')
visited = pd.read_sql('Visited', engine, index_col='ident', parse_dates=['dated'])
visited
readings = pd.read_sql('Survey', engine).dropna()
readings = readings.drop_duplicates()
readings
Explanation: If there is more than one record for each pair of "subject" and "treatment" (for example, the subject was tested twice with the same treatment at different times) you can use pivot_table. It works just like pivot but it additionally allows you to specify an aggregation function ('mean' by default).
To take another example, we will use some data from expeditions to the Pole of Inaccessibility. We will read the data from a SQL database.
End of explanation
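A hedged one-liner (not in the original notebook) showing pivot_table on the small example frame; with no duplicates it matches pivot, but it would aggregate with the default 'mean' if duplicate (subject, treatment) pairs were present:
df.pivot_table(index='subject', columns='treatment', values='concentration', aggfunc='mean')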
multi = df.set_index(['subject', 'treatment'])
multi
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: Join the `readings` and `visited` tables.
</div>
<div class="alert alert-success">
<b>EXERCISE</b>: Pivot the table such that we have sites in rows and different quantities in columns.
</div>
Hierarchical index
Hierarchical index of pandas is a way of introducing another dimension to a (two-dimensional) data frame. This is implemented by having multiple levels of the index. Let's look at an example.
End of explanation
multi.loc['A'] # first level only
Explanation: Note how the two indexes are nested: 2nd level index ('treatment') is grouped under the first level index ('subject'). To access the two levels you can use labels from the first level or both levels using a tuple.
End of explanation
multi.loc[('A', 'CH')] # two level
Explanation: Note that it creates a standard data frame with "flat" index.
End of explanation
multi.loc[(slice(None), 'CH'), :]
Explanation: Indexing on the second index only may be slightly involved:
End of explanation
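A hedged alternative (not in the original notebook) for selecting on the second index level is the cross-section method:
multi.xs('CH', level='treatment')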
multi.reset_index()
Explanation: Consult the documentation for other methods.
To return to the original format with columns instead of indexes use reset_index:
End of explanation
result = multi['concentration'].unstack()
result
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: Group the survey data by sites, date of measurement on each site and the quantity measured. List all readings for `site` DR-1; all readings of radiation using the hierarchical index.
</div>
stack/unstack
stack — shifts the last level of the hierarchical column labels to the row index
unstack — does the opposite, i.e. shifts the last level of the hierarchical row index to the columns
End of explanation
result.stack()
Explanation: stack reverses the operation:
End of explanation
df = multi.stack()
df
Explanation: We can "stack" it even further:
End of explanation
!head -1 ./data/BETR8010000800100hour.1-1-1990.31-12-2012
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: Rearrange the data frame from the last exercise, such that rows contain sites and dates (hierarchical index) and columns different quantities. List all readings of radiation.
</div>
Formatting data — Case study
Going further with the time series case study, using data from AirBase (The European Air quality dataBase).
One of the actual downloaded raw data files of AirBase is included in the repo:
End of explanation
data = pd.read_csv("data/BETR8010000800100hour.1-1-1990.31-12-2012", sep='\t')#, header=None)
data.head()
Explanation: Just reading the tab-delimited data:
End of explanation
hours = map(str, range(24))
flags = ['flag'] * 24
col_names = ['date'] + list(sum(zip(hours, flags), ()))
col_names[:5]
data = pd.read_csv("data/BETR8010000800100hour.1-1-1990.31-12-2012", sep='\t',
na_values=['-999', '-9999'],
names=col_names,
index_col='date')#, header=None)
Explanation: The above data is clearly not ready to be used! Each row contains the 24 measurements for each hour of the day, and also contains a flag (0/1) indicating the quality of the data.
Let's replace the negative numbers by missing values and give the columns proper names.
End of explanation
data = data.drop('flag', axis=1)
data.head()
Explanation: For now, we disregard the 'flag' columns
End of explanation |
330 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Assignment Jan 2017. Water company uses ASR system to prevent extraction from tiver during summer
Step1: Because the river is regarded as a straight fixed-head boundary along the y-axis at x=0, it can be simulated using a well and a mirror wel. The well is at (-xw, 0) and the mirror well ast `(xw, 0)
Step2: First show the heads and the specific discharge to gain overview
Step3: Compute the infiltration from the river if the extration were continuous after $t=0$
$$Q_{r,t} = Q_0 \, e^{-u}\text{, where }u=\frac{r^2 S}{4 kD t}$$
Hence,
$$q_{r,t} = \frac{Q_0}{2 \pi r} \, e^{-u}\text{, where }u=\frac{r^2 S}{4 kD t}$$
Separating out the components in x- and y-direction
Step4: The next step is to integrate the flow along the river. We do that numerically. One can use the function quad() or just apply the Simpson rule by hand.
Using both quad and the hand method to compute
Step5: Simulating the actual flow regime, with winter injection and summer extraction.
The infiltration into the aquifer is -0.5Q from Oct to Mar, i.e during 6 month every year.
The groundwater extraction is equals $Q$ from Jun to Aug, .e. during 3 months every year.
We may set up a monthly regime, telling the fraction of the total drinking water capacity that is extracted (negative values) or injected (just positive values) every month.
The regime is repeated 5 times to cover the required simulation period of 5 years.
Set up the regime
Step6: Now simulate the pumping regime by superposition
Flow changes need to be suerimposed | Python Code:
# import the necessary functionality
import numpy as np
import matplotlib.pyplot as plt
from scipy.special import exp1 as W # Theis well function
def newfig(title='?', xlabel='?', ylabel='?', xlim=None, ylim=None, xscale=None, yscale=None, figsize=(10, 8),
fontsize=16):
sizes = ['xx-small', 'x-small', 'small', 'medium', 'large', 'x-large', 'xx-large']
assert isinstance(fontsize, int) or fontsize in sizes, \
"fontsize not int and not in [{}]".format(', '.join(sizes))
_, ax = plt.subplots(figsize=figsize)
ax.set_title(title)
ax.set_xlabel(xlabel)
ax.set_ylabel(ylabel)
if xlim: ax.set_xlim(xlim)
if ylim: ax.set_ylim(ylim)
if xscale: ax.set_xscale(xscale)
if yscale: ax.set_yscale(yscale)
ax.grid()
for item in ([ax.title, ax.xaxis.label, ax.yaxis.label] +
ax.get_xticklabels() + ax.get_yticklabels()):
item.set_fontsize(fontsize)
return ax
def newfigs(layout, titles=['?'], xlabels=['?'], ylabels=['?'], xscales=None, yscales=None,
xlim=None, ylim=None, sharex=None, sharey=None, fontsize=16, figsize=(10, 8)):
sizes = ['xx-small', 'x-small', 'small', 'medium', 'large', 'x-large', 'xx-large']
assert isinstance(fontsize, int) or fontsize in sizes, \
"fontsize not int and not in [{}]".format(', '.join(sizes))
fig, axs = plt.subplots(*layout, sharex=sharex, sharey=sharey)
fig.set_size_inches(figsize)
assert isinstance(layout, tuple) and len(layout) == 2, 'layout must be a 2-tuple (nrows, ncols) not {}'.format(str(layout))
n_axes = np.prod(layout)
if xscales is None: xscales = [None for _ in range(n_axes)]
if yscales is None: yscales = [None for _ in range(n_axes)]
for items_name, items in zip(['titles', 'xlabels', 'ylabels', 'xscales', 'yscales'], [titles, xlabels, ylabels, xscales, yscales]):
assert len(items) == np.prod(layout), 'len({}) == {} != len(layout) == {}'.format(items_name, len(items), len(np.prod(layout)))
for ax, title, xlabel, ylabel, xscale, yscale in zip(axs, titles, xlabels, ylabels, xscales, yscales):
ax.set_title(title)
ax.set_xlabel(xlabel)
ax.set_ylabel(ylabel)
if xlim: ax.set_xlim(xlim)
if ylim: ax.set_ylim(ylim)
if xscale: ax.set_xscale(xscale)
if yscale: ax.set_yscale(yscale)
ax.grid()
for item in ([ax.title, ax.xaxis.label, ax.yaxis.label] +
ax.get_xticklabels() + ax.get_yticklabels()):
item.set_fontsize(fontsize)
return axs
Explanation: Assignment Jan 2017. Water company uses ASR system to prevent extraction from the river during summer
:author: T.N.Olsthoorn, Jan 2017, updated Jun 2022
Problem statement
A water company extracts water from a small river to treat and distribute it as drinking water for the population of a small town nearby. This is not a problem in winter. However, due to growing demand for drinking water and growing environmental concern, extraction has become more and more problematic during summers when the discharge of this small river is at its lowest. The environmental agency has recently even forbidden to further extract water from the river during the summer months.
In order to solve the problem that this causes for the drinking water supply, the drinking water company has suggested an "Aquifer Storage and Recovery" system (these so-called ASR systems are becoming more and more popular). It wants to take in more river water during winter and inject it through a well (or wells) at some distance from the river into the local water-table aquifer, so that this water can be extracted during the next summer. This way, no water-intake will be necessary during the summer months.
The debate that took off between the water company and the environmental agency focuses on whether, or to what extent, this ASR system could really makes sense, i.e. could really prevent infiltratioin from the river during the summer season. The question to be aswered is this: Can you really store the water in winter and extract it during summer without substantially affecting the already low summer-discharge of the river? Will not much of the stored water flow back to the river through the aquifer during winter? And would the extraction not induce an infiltration from the river into the aquifer during summer, so that there is still a water intake, the only differece now being that it will be invisible?
It's your task as hydrologist of the environmental agency to answer this question and illustrate it quantitatively and also explain it clearly. Your explanation should include why and how you derived your answer. It should show the math and the code.
It is obvious that ASR will work if the distance between the well and the river is large enough. But how large is large enough, and on what does it depend? The water company suggested a distance of 500 m from the river. Should the environmental agency agree?
Hints and further information:
To analyse this system, at least coarsely, simplify the injection and extraction regime and apply superposition. Superposition allows to only consider the well and treat the river as a fixed-head boundary using a mirror well. Simplify the river to a straight line along the y-axis and place the well at distance $x_w$ from this line at coordinate $(x_w, 0)$. That is, the x-axis is perpendicular to the river.
The drinking water demand, $Q$, is considered constant year-round at 150 L/d per inhabitant for the 10000 inhabitants of the town.
Assume the following injection and recovery regime:
The water company well extracts during 3 summer months (June, July, Aug) its full demand
It compensates that during 6 winter months (October-March) by injecting half its daily capacity.
There is neither injection nor extraction during the months April, May and September.
The idea is to analyse this ASR system by computing the exchange between the aquifer and the river due to the well.
When you have coded the problem for this particular distance of 500 m, experiment with this distance to come up with a distance that realy makes sense in terms of not inducing loss of river water during the summer months. That is, try 1000 and 2000 m.
Use kD = 900 m2/d, S = 0.2
Steps to take:
Use Theis' well function to compute drawdowns:
$$ s(r, t) = \frac{Q}{4 \pi kD} W(u), \quad W(u) = -\text{scipy.special.expi}(-u), \quad u = \frac{r^2 S}{4 kD t} $$
From Theis' well function
$$ W(u) = \intop_u^\infty \frac {e^{-y}} y dy $$
derive the flow $Q(r, t)$ [L3/T] and the specific discharge $q(r,t)$ [L2/T] in the aquifer at distance $r$ from the well.
To simulate the river, apply a mirror well.
Compute the exchange between aquifer and river, derive the specific discharge $q$ [L2/T] perpendicular to the river at an arbitrary point $y$.
Compute this specific discharge for a large number of points between $-a x_w<y<a x_w$, where $a$ is sufficiently large, thus covering a large enough track of the river to capture about the full induced exchange between aquifer and the river.
Generating appropriate y-coordinates this way, can be done as follows:
y = np.hstack((-np.logspace(0, np.log10(a * xw), Np)[::-1], np.logspace(0, np.log10(a * xw), Np)))
Where $a$ may be taken 10 and $Np$ 500 for example.
Numerically integrate this specific discharge along the river to obtain the total exchange between river and aquifer.
>Numerical integration is easy when using the trapezium rule.
Having the code to do this for a single time, it can readily be extended for a large number of times.
Check that for large times, the total flow between aquifer and river should be about the total discharge of the well if the well is continuously injecting.
Finally simulate the actual flow regime with 6 month injection and 3 months extraction as explained above. Simulate for a period of 5 years. This simulation requires superposition in time.
Suggestions:
It's probably easiest to analyze everything in time units of months instead of days, 5 years time then runs from 0 to 60 months.
Learn to define and use functions in Python to keep overview and prevent repeating code.
If you consider this too complicated, then anlyse at least the situation for only the river point closest to the well.
Start by plotting the head.
When this works focus on the discharge.
Don't hesitate to ask questions and for help.
End of explanation
# aquifer
kD, S, Qd = 900., 0.2, 0.15 * 10_000 # m2/d, (-), m3/d (150 L/d per person for the 10,000 inhabitants)
show_times = [10, 30, 60, 120, 240, 480, 1060, 2120, 4240]
# Coordinates
L = 500.
wells = {1: {'x':+L, 'y':0., 'Q':-Qd},
2: {'x':-L, 'y':0., 'Q':+Qd}
}
a = 25.
Explanation: Because the river is regarded as a straight fixed-head boundary along the y-axis at x=0, it can be simulated using a well and a mirror well. The well is at (-xw, 0) and the mirror well at (xw, 0)
End of explanation
N, a = 100, 2.
x = np.linspace(-a * L, a * L, N + 1)
y = np.linspace(-a * L, a * L, N + 1)
X, Y = np.meshgrid(x, y)
show_times = [10, 30, 90]
titles = [f'Head contours and stream vectors at t = {t:.0f} d' for t in show_times]
xlabels = ['x [m]' for _ in show_times]
ylabels = ['y [m[' for _ in show_times]
axs = newfigs((1, 3), titles, xlabels, ylabels, sharex=True, sharey=True, figsize=(12, 3), fontsize=8)
for ax, t in zip(axs.ravel(), show_times):
s = np.zeros_like(X)
for k in wells:
well = wells[k]
r2 = (X - well['x']) ** 2 + (Y - well['y']) ** 2
u = r2 * S / (4 * kD * t)
s += well['Q'] / (4 * np.pi * kD) * W(u)
cs = ax.contour(X, Y, s, levels=20)
ax.clabel(cs)
plt.show()
# Quiver plot to show the flow field
titles = [f'Stream vextors at t = {t:.0f} d' for t in show_times]
axs = newfigs((1, 3), titles, xlabels, ylabels, sharex=True, sharey=True, figsize=(12, 3), fontsize=8)
xq = np.linspace(-1000, 1000, 21)
yq = np.linspace(-1000, 1000, 20)
Xq, Yq = np.meshgrid(xq, yq)
for ax, t in zip(axs.ravel(), show_times):
qx = np.zeros_like(Xq)
qy = np.zeros_like(Yq)
for k in wells:
well = wells[k]
r2 = (Xq - well['x']) ** 2 + (Yq - well['y']) ** 2
u = r2 * S / (4 * kD * t)
qx += well['Q'] * np.exp(-u) * (Xq - well['x']) / r2
qy += well['Q'] * np.exp(-u) * (Yq - well['y']) / r2
ax.quiver(Xq, Yq, qx, qy)
plt.show()
Explanation: First show the heads and the specific discharge to gain overview
End of explanation
a = 15 # (-), a scale factor for y-coordinates, further no meaning.
Y = np.hstack(( -np.logspace(0, np.log10(a * L))[::-1],
np.logspace(0, np.log10(a * L))))
X = np.zeros_like(Y)
ax = newfig(f"Inflow from the river due to well at x={L:.0f} m, Qd={Qd:.0f} m3/d", "qx = m2/d", "y [m]")
for t in show_times:
qx = np.zeros_like(Y)
qy = np.zeros_like(Y)
for k in wells:
well = wells[k]
R2 = (X - well['x']) ** 2 + (Y - well['y']) ** 2
u = R2 * S / (4 * kD * t)
qx += well['Q'] / (2 * np.pi) * (X - well['x']) / R2 * np.exp(-u)
qy += well['Q'] / (2 * np.pi) * (Y - well['y']) / R2 * np.exp(-u)
ax.plot(qx, Y, label=f'{t:.0f} d')
ax.legend()
Explanation: Compute the infiltration from the river if the extraction were continuous after $t=0$
$$Q_{r,t} = Q_0 \, e^{-u}\text{, where }u=\frac{r^2 S}{4 kD t}$$
Hence,
$$q_{r,t} = \frac{Q_0}{2 \pi r} \, e^{-u}\text{, where }u=\frac{r^2 S}{4 kD t}$$
Separating out the components in x- and y-direction:
$$q_x(x, y,t) = \frac{Q_0}{2 \pi r} \, e^{-u} \cos(\alpha)$$
$$q_y(x, y,t) = \frac{Q_0}{2 \pi r} \, e^{-u} \sin(\alpha)$$
And, with $Q < 0$ meaning extration, the specific discharge due to a single well becomes
$$q_x(x, y,t) = -\frac{Q_0}{2 \pi} \frac{x - x_w}{r^2}\,e^{-u}$$
$$q_y(x, y,t) = -\frac{Q_0}{2 \pi} \frac{y - y_w}{r^2}\,e^{-u}$$
Let the river coincide with the y axis and x=0.
Then we generate points along the river where we'll comopute the exchange between river and aquifer.
End of explanation
ax = newfig(f"Total inflow from the river due to well at x={L:.0f} m, Qd={Qd:.0f} m3/d", "time [d]", "Q_total [m3/d]")
dy = np.diff(Y)
times = np.arange(0, 4001, 10.)
Qrin = np.zeros_like(times)
for it, t in enumerate(times):
qx = np.zeros_like(Y)
qy = np.zeros_like(Y)
for k in wells:
well = wells[k]
R2 = (X - well['x']) ** 2 + (Y - well['y']) ** 2
u = R2 * S / (4 * kD * t)
qx += well['Q'] / (2 * np.pi) * (X - well['x']) / R2 * np.exp(-u)
qy += well['Q'] / (2 * np.pi) * (Y - well['y']) / R2 * np.exp(-u)
Qrin[it] = np.sum(0.5 * (qx[:-1] + qx[1:]) * dy)
ax.plot(times, Qrin, label=f'well extraction = {Qd:.0f} m3/d')
ax.hlines([Qd], xmin=0., xmax=times[-1], colors=['r'], label=f'Qd={Qd:.0f} m3/d')
ax.legend()
Explanation: The next step is to integrate the flow along the river. We do that numerically. One can use scipy's quad() or simply apply the trapezium rule by hand.
Here the trapezium rule is used to compute:
$$ Q_{0,y} = \intop_{-\infty}^{+\infty} q_x dy $$
End of explanation
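As a cross-check on the numerical integral (added here; not part of the original computation): for a single well at distance $L$ from a straight, fully penetrating river, the induced infiltration is also known in closed form (Glover-Balmer), $Q_r(t) = Q_w\,\mathrm{erfc}\left(\sqrt{\frac{L^2 S}{4\,kD\,t}}\right)$. A minimal sketch, assuming kD, S, L and Qd keep the values used above:
from scipy.special import erfc
t_chk = np.array([100., 1000., 4000.])
print(Qd * erfc(np.sqrt(L ** 2 * S / (4 * kD * t_chk))))  # should track the numerically integrated Qrin curve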
Ny = 5 # Number of simulation years
regime = dict() # Hold the regime (startdates and corresponding flow of the well)
yyyy = np.ones((12, 1), dtype=int) * np.arange(2018, 2018 + Ny, dtype=int)[np.newaxis, :]  # one column per simulation year
mm = np.arange(1, 13, dtype=int)[:, np.newaxis] * np.ones((1, Ny), dtype=int)
regime['dates'] = np.array([np.datetime64(f'{y:04d}-{m:02d}-01') for y, m in zip(yyyy.T.ravel(), mm.T.ravel())])
regime['Qw'] = wells[1]['Q'] * np.array(Ny * [-0.5, -0.5, -0.5, 0, 0, 1, 1, 1, 0, -0.5, -0.5, -0.5]) # m3/d in each month
# Let's set dQ separtely for well and mirror
regime['dQw'] = np.diff(np.hstack((0., regime['Qw']))) # The change of flow at every month, needed for superposition.
regime
# We may filter out only the months when the regime change differs from zero
for k in regime.keys():
regime[k] = regime[k][regime['dQw'] != 0]
regime
# Simulation times as true dates over the simulation period (days)
tsim = np.arange(regime['dates'][0], np.datetime64(f'{yyyy[0, -1] + 1:04d}-01-01'))
tsim
Explanation: Simulating the actual flow regime, with winter injection and summer extraction.
The infiltration into the aquifer is -0.5Q from Oct to Mar, i.e during 6 month every year.
The groundwater extraction equals $Q$ from Jun to Aug, i.e. during 3 months every year.
We may set up a monthly regime, telling the fraction of the total drinking water capacity that is extracted (negative values) or injected (just positive values) every month.
The regime is repeated 5 times to cover the required simulation period of 5 years.
Set up the regime
End of explanation
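A small illustrative check of the regime construction: superposition in time works because the cumulative sum of the step changes dQw reproduces the monthly flows Qw at the change dates.
print(np.allclose(np.cumsum(regime['dQw']), regime['Qw']))  # True: each dQw is the step that becomes active at its change date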
# Carry out superposition in time
# Coordinates
L = 1500.
wells = {1: {'x':+L, 'y':0., 'Q':-Qd},
2: {'x':-L, 'y':0., 'Q':+Qd}
}
a = 10
yy = np.logspace(2, np.log10(L * a), 50)
Yr = np.hstack((-yy[::-1], 0, yy))[:, np.newaxis]
Xr = np.zeros_like(Yr)
qx = np.zeros((len(Yr), len(tsim)))
qy = np.zeros((len(Yr), len(tsim)))
for dQw, change_date in zip(regime['dQw'], regime['dates']):
t = (tsim[tsim > change_date] - change_date) / np.timedelta64(1, 'D')
for k in wells:
well = wells[k]
dx_, dy_ = Xr - well['x'], Yr - well['y']
r2 = dx_ ** 2 + dy_ ** 2
u = r2 * S / (4 * kD * t) # (ny x nt)
sign = well['Q'] / abs(well['Q']) # Make sure the sign for the mirror well is reversed: it gets -dQw
qx[:, tsim > change_date] += sign * dQw / (2 * np.pi) * np.exp(-u) * dx_ / r2
qy[:, tsim > change_date] += sign * dQw / (2 * np.pi) * np.exp(-u) * dy_ / r2
Qx = np.sum(0.5 * (qx[:-1, :] + qx[1:, :]) * np.diff(Yr, axis=0), axis=0)
Qy = np.sum(0.5 * (qy[:-1, :] + qy[1:, :]) * np.diff(Yr, axis=0), axis=0)
ax = newfig("Infiltration/exfiltration rates at change_dates [m/d]", "infiltration/exfiltration rate [m/d", "y [m]")
for change_date in regime['dates']:
it = np.sum(tsim <= change_date)
ax.plot(qx[:, it], Yr, label=str(change_date))
ax.legend()
plt.show()
ax = newfig("Pumping regime and inflow into aquifer", "time", "Q[m3/d")
ax.plot(tsim, Qx, label='Qx [m3/d]')
ax.plot(tsim, Qy, label='Qy [m3/d]')
ax.step(regime['dates'], regime['Qw'], where='post', label='regime')
ax.set_ylim((-200, 200))
ax.legend()
plt.show()
Explanation: Now simulate the pumping regime by superposition
Flow changes need to be superimposed:
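In formula form (added for clarity, matching the superposition loop in the code of this step): every change $\Delta Q_i$ that becomes active at $t_i$ contributes its own transient, and the contributions are summed,
$$q_x(x, y, t) = \sum_{t_i < t} \frac{\Delta Q_i}{2 \pi}\, e^{-u_i}\, \frac{x - x_w}{r^2}, \qquad u_i = \frac{r^2 S}{4\, kD\, (t - t_i)}$$
with the analogous expression for $q_y$; the mirror well carries $-\Delta Q_i$ so that the river stays at constant head.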
End of explanation |
331 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Loading data into SciDB
1) Verify prerequisites
Python
SciDB-Py requires Python 2.6-2.7 or 3.3
Step1: NumPy
tested with version 1.9 (1.13.1)
Step2: Requests
tested with version 2.7 (2.18.1) Required for using the Shim interface to SciDB.
Step3: Pandas (optional)
tested with version 0.15. (0.20.3) Required only for importing/exporting SciDB arrays as Pandas Dataframe objects.
Step4: SciPy (optional)
tested with versions 0.10-0.12. (0.19.0) Required only for importing/exporting SciDB arrays as SciPy sparse matrices.
Step5: 2) Import scidb-py
pip install git+http://github.com/paradigm4/scidb-py.git
Step6: Connect to the database server
Step7: 3) Read the file listing each of the waveform records
Step8: Remove special characters
Step9: 4) Import WFDB to connect to PhysioNet
Step10: Find the position of the type II signal
Step11: Normalize the signal and remove the null values
Step12: Replace the hyphens "-" with underscores "_" because for some reason SciDB has problems with these characters
If the array without null values is not empty, upload it to SciDB
Step13: Check the list of arrays in SciDB | Python Code:
import sys
sys.version_info
Explanation: Loading data into SciDB
1) Verify prerequisites
Python
SciDB-Py requires Python 2.6-2.7 or 3.3
End of explanation
import numpy as np
np.__version__
Explanation: NumPy
tested with version 1.9 (1.13.1)
End of explanation
import requests
requests.__version__
Explanation: Requests
tested with version 2.7 (2.18.1) Required for using the Shim interface to SciDB.
End of explanation
import pandas as pd
pd.__version__
Explanation: Pandas (optional)
tested with version 0.15. (0.20.3) Required only for importing/exporting SciDB arrays as Pandas Dataframe objects.
End of explanation
import scipy
scipy.__version__
Explanation: SciPy (optional)
tested with versions 0.10-0.12. (0.19.0) Required only for importing/exporting SciDB arrays as SciPy sparse matrices.
End of explanation
import scidbpy
scidbpy.__version__
from scidbpy import connect
Explanation: 2) Import scidb-py
pip install git+http://github.com/paradigm4/scidb-py.git
End of explanation
sdb = connect('http://localhost:8080')
Explanation: Connect to the database server
End of explanation
import urllib.request # urllib2 in python2 the lib that handles the url stuff
target_url = "https://physionet.org/physiobank/database/mimic3wdb/matched/RECORDS-waveforms"
data = urllib.request.urlopen(target_url) # it's a file like object and works just like a file
lines = data.readlines();
line = str(lines[2])
line
Explanation: 3) Read the file listing each of the waveform records
End of explanation
line = line.replace('b\'','').replace('\'','').replace('\\n','')
splited = line.split("/")
splited
carpeta,subCarpeta,onda = line.split("/")
carpeta = carpeta+"/"+subCarpeta
onda
Explanation: Remove special characters
End of explanation
import wfdb
carpeta = "p05/p050140"
onda = "p050140-2188-07-26-05-51"
sig, fields = wfdb.srdsamp(onda,pbdir='mimic3wdb/matched/'+carpeta, sampfrom=10000)
print(sig)
print("signame: " + str(fields['signame']))
print("units: " + str(fields['units']))
print("fs: " + str(fields['fs']))
print("comments: " + str(fields['comments']))
print("fields: " + str(fields))
Explanation: 4) Import WFDB to connect to PhysioNet
End of explanation
signalII = None
try:
signalII = fields['signame'].index("II")
except ValueError:
print("List does not contain value")
if signalII is not None:
    print("List contains value")
Explanation: Find the position of the type II signal
End of explanation
#array = wfdb.processing.normalize(x=sig[:, signalII], lb=-2, ub=2)
array = sig[:, signalII]
array = array[~np.isnan(sig[:, signalII])]
arrayNun = np.trim_zeros(array)
array
Explanation: Normalize the signal and remove the null values
End of explanation
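A tiny illustrative example (not from the original record) of what the NaN filter and np.trim_zeros do to a signal:
demo = np.array([np.nan, 0., 0.8, 0.9, np.nan, 0.7, 0.])
demo = demo[~np.isnan(demo)]   # drops the NaN samples -> [0. , 0.8, 0.9, 0.7, 0. ]
print(np.trim_zeros(demo))     # strips leading/trailing zeros -> [0.8 0.9 0.7]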
ondaName = onda.replace("-", "_")
if arrayNun.size>0 :
sdb.input(upload_data=array).store(ondaName,gc=False)
# sdb.iquery("store(input(<x:int64>[i], '{fn}', 0, '{fmt}'), "+ondaName+")", upload_data=array)
Explanation: Replace the hyphens "-" with underscores "_" because for some reason SciDB has problems with these characters
If the array without null values is not empty, upload it to SciDB
End of explanation
dir(sdb.arrays)
Explanation: Check the list of arrays in SciDB
End of explanation |
332 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
LAB 5a
Step1: If the above command resulted in an installation, please restart the notebook kernel and re-run the notebook.
Import necessary libraries.
Step2: Set environment variables.
Set environment variables so that we can use them throughout the entire lab. We will be using our project name for our bucket.
Step3: Create the bucket if does not exist, and confirm below that the bucket is regional and its region equals to the specified region
Step4: Check data exists
Verify that you previously created CSV files we'll be using for training and evaluation. If not, go back to lab 1b_prepare_data_babyweight to create them.
Step5: Now that we have the Keras wide-and-deep code working on a subset of the data, we can package the TensorFlow code up as a Python module and train it on Vertex AI.
Train on Vertex AI
Training on Vertex AI requires
Step8: We then use the %%writefile magic to write the contents of the cell below to a file called task.py in the babyweight/trainer folder.
Create trainer module's task.py to hold hyperparameter argparsing code.
The cell below writes the file babyweight/trainer/task.py which sets up our training job. Here is where we determine which parameters of our model to pass as flags during training using the parser module. Look at how batch_size is passed to the model in the code below. Use this as an example to parse arguments for the following variables:
- nnsize which represents the hidden layer sizes to use for DNN feature columns
- nembeds which represents the embedding size of a cross of n key real-valued parameters
- train_examples which represents the number of examples (in thousands) to run the training job
- eval_steps which represents the positive number of steps for which to evaluate model
Be sure to include a default value for the parsed arguments above and specify the type if necessary.
Step17: In the same way we can write to the file model.py the model that we developed in the previous notebooks.
Create trainer module's model.py to hold Keras model code.
To create our model.py, we'll use the code we wrote for the Wide & Deep model. Look back at your 9_keras_wide_and_deep_babyweight notebook and copy/paste the necessary code from that notebook into its place in the cell below.
Step18: Train locally
After moving the code to a package, make sure it works as a standalone. Note, we incorporated the --train_examples flag so that we don't try to train on the entire dataset while we are developing our pipeline. Once we are sure that everything is working on a subset, we can change it so that we can train on all the data. Even for this subset, this takes about 3 minutes in which you won't see any output ...
Run trainer module package locally.
We can run a very small training job over a single file with a small batch size, 1 epoch, 1 train example, and 1 eval step.
Step19: Training on Vertex AI
Now that we see everything is working locally, it's time to train on the cloud! First, we need to package our code as a source distribution. For this, we can use setuptools.
Step20: We will store our package in the Cloud Storage bucket.
Step21: To submit to the Cloud we use gcloud custom-jobs create and simply specify some additional parameters for the Vertex AI Training Service
Step22: The training job should complete within 10 to 15 minutes. You will need a trained model to complete our next lab.
Hyperparameter tuning
To do hyperparameter tuning, create a YAML file and pass its name with --config.
This step could take <b>hours</b> -- you can increase --parallel-trial-count or reduce --max-trial-count to get it done faster. Since --parallel-trial-count is the number of initial seeds to start searching from, you don't want it to be too large; otherwise, all you have is a random search.
Step23: Repeat training
This time with tuned parameters for batch_size and nembeds. Note that your best results may differ from below. So be sure to fill yours in! | Python Code:
try:
import hypertune
except ImportError:
!pip3 install -U cloudml-hypertune --user
print("Please restart the kernel and re-run the notebook.")
Explanation: LAB 5a: Training Keras model on Vertex AI
Learning Objectives
Set up the environment
Create trainer module's task.py to hold hyperparameter argparsing code
Create trainer module's model.py to hold Keras model code
Run trainer module package locally
Submit training job to Vertex AI
Submit hyperparameter tuning job to Vertex AI
Introduction
After having testing our training pipeline both locally and in the cloud on a susbset of the data, we can submit another (much larger) training job to the cloud. It is also a good idea to run a hyperparameter tuning job to make sure we have optimized the hyperparameters of our model.
In this notebook, we'll be training our Keras model at scale using Vertex AI.
In this lab, we will set up the environment, create the trainer module's task.py to hold hyperparameter argparsing code, create the trainer module's model.py to hold Keras model code, run the trainer module package locally, submit a training job to Vertex AI, and submit a hyperparameter tuning job to Vertex AI.
Set up environment variables and load necessary libraries
First we will install the cloudml-hypertune package on our local machine. This is the package which we will use to report hyperparameter tuning metrics to Vertex AI. Installing the package will allow us to test our trainer package locally.
End of explanation
import os
Explanation: If the above command resulted in an installation, please restart the notebook kernel and re-run the notebook.
Import necessary libraries.
End of explanation
PROJECT = !gcloud config list --format 'value(core.project)'
PROJECT = PROJECT[0]
BUCKET = PROJECT
REGION = "us-central1"
os.environ["PROJECT"] = PROJECT
os.environ["BUCKET"] = BUCKET
os.environ["REGION"] = REGION
Explanation: Set environment variables.
Set environment variables so that we can use them throughout the entire lab. We will be using our project name for our bucket.
End of explanation
%%bash
if ! gsutil ls | grep -q gs://${BUCKET}/; then
gsutil mb -l ${REGION} gs://${BUCKET}
fi
gsutil ls -Lb gs://$BUCKET | grep "gs://\|Location"
echo $REGION
%%bash
gcloud config set project ${PROJECT}
gcloud config set ai/region ${REGION}
Explanation: Create the bucket if does not exist, and confirm below that the bucket is regional and its region equals to the specified region:
End of explanation
%%bash
gsutil ls gs://${BUCKET}/babyweight/data/*000000000000.csv
Explanation: Check data exists
Verify that you previously created CSV files we'll be using for training and evaluation. If not, go back to lab 1b_prepare_data_babyweight to create them.
End of explanation
%%bash
mkdir -p babyweight/trainer
touch babyweight/trainer/__init__.py
Explanation: Now that we have the Keras wide-and-deep code working on a subset of the data, we can package the TensorFlow code up as a Python module and train it on Vertex AI.
Train on Vertex AI
Training on Vertex AI requires:
* Making the code a Python source distribution
* Using gcloud to submit the training code to Vertex AI
Ensure that the Vertex AI API is enabled by going to this link.
Move code into a Python package
A Python package is simply a collection of one or more .py files along with an __init__.py file to identify the containing directory as a package. The __init__.py sometimes contains initialization code but for our purposes an empty file suffices.
The bash command touch creates an empty file in the specified location, the directory babyweight should already exist.
End of explanation
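For reference, the package layout that this lab builds up over the next cells looks as follows (only the files created in this notebook):
babyweight/
    setup.py
    trainer/
        __init__.py
        task.py
        model.py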
%%writefile babyweight/trainer/task.py
import argparse
import json
import os
from trainer import model
import tensorflow as tf
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument(
"--train_data_path",
help="GCS location of training data",
required=True
)
parser.add_argument(
"--eval_data_path",
help="GCS location of evaluation data",
required=True
)
parser.add_argument(
"--output_dir",
help="GCS location to write checkpoints and export models",
default = os.getenv("AIP_MODEL_DIR")
)
parser.add_argument(
"--batch_size",
help="Number of examples to compute gradient over.",
type=int,
default=512
)
parser.add_argument(
"--nnsize",
help="Hidden layer sizes for DNN -- provide space-separated layers",
default="128 32 4"
)
parser.add_argument(
"--nembeds",
help="Embedding size of a cross of n key real-valued parameters",
type=int,
default=3
)
parser.add_argument(
"--num_epochs",
help="Number of epochs to train the model.",
type=int,
default=10
)
parser.add_argument(
"--train_examples",
help="""Number of examples (in thousands) to run the training job over.
If this is more than actual # of examples available, it cycles through
them. So specifying 1000 here when you have only 100k examples makes
this 10 epochs.""",
type=int,
default=5000
)
parser.add_argument(
"--eval_steps",
help="""Positive number of steps for which to evaluate model. Default
to None, which means to evaluate until input_fn raises an end-of-input
exception""",
type=int,
default=None
)
# Parse all arguments
args = parser.parse_args()
arguments = args.__dict__
# Modify some arguments
arguments["train_examples"] *= 1000
# Run the training job
model.train_and_evaluate(arguments)
Explanation: We then use the %%writefile magic to write the contents of the cell below to a file called task.py in the babyweight/trainer folder.
Create trainer module's task.py to hold hyperparameter argparsing code.
The cell below writes the file babyweight/trainer/task.py which sets up our training job. Here is where we determine which parameters of our model to pass as flags during training using the parser module. Look at how batch_size is passed to the model in the code below. Use this as an example to parse arguments for the following variables:
- nnsize which represents the hidden layer sizes to use for DNN feature columns
- nembeds which represents the embedding size of a cross of n key real-valued parameters
- train_examples which represents the number of examples (in thousands) to run the training job
- eval_steps which represents the positive number of steps for which to evaluate model
Be sure to include a default value for the parsed arguments above and specify the type if necessary.
End of explanation
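As a quick illustration of the argparse pattern used above (a standalone toy example, not part of task.py): parse_args() turns the flags into attributes, and args.__dict__ is the plain dictionary that gets handed to model.train_and_evaluate.
import argparse
toy = argparse.ArgumentParser()
toy.add_argument("--batch_size", type=int, default=512)
toy.add_argument("--train_examples", type=int, default=5000)
args = toy.parse_args(["--batch_size", "32"])
print(args.__dict__)  # {'batch_size': 32, 'train_examples': 5000}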
%%writefile babyweight/trainer/model.py
import datetime
import os
import shutil
import numpy as np
import tensorflow as tf
import hypertune
# Determine CSV, label, and key columns
CSV_COLUMNS = ["weight_pounds",
"is_male",
"mother_age",
"plurality",
"gestation_weeks"]
LABEL_COLUMN = "weight_pounds"
# Set default values for each CSV column.
# Treat is_male and plurality as strings.
DEFAULTS = [[0.0], ["null"], [0.0], ["null"], [0.0]]
def features_and_labels(row_data):
Splits features and labels from feature dictionary.
Args:
row_data: Dictionary of CSV column names and tensor values.
Returns:
Dictionary of feature tensors and label tensor.
label = row_data.pop(LABEL_COLUMN)
return row_data, label # features, label
def load_dataset(pattern, batch_size=1, mode=tf.estimator.ModeKeys.EVAL):
Loads dataset using the tf.data API from CSV files.
Args:
pattern: str, file pattern to glob into list of files.
batch_size: int, the number of examples per batch.
mode: tf.estimator.ModeKeys to determine if training or evaluating.
Returns:
`Dataset` object.
print("mode = {}".format(mode))
# Make a CSV dataset
dataset = tf.data.experimental.make_csv_dataset(
file_pattern=pattern,
batch_size=batch_size,
column_names=CSV_COLUMNS,
column_defaults=DEFAULTS)
# Map dataset to features and label
dataset = dataset.map(map_func=features_and_labels) # features, label
# Shuffle and repeat for training
if mode == tf.estimator.ModeKeys.TRAIN:
dataset = dataset.shuffle(buffer_size=1000).repeat()
# Take advantage of multi-threading; 1=AUTOTUNE
dataset = dataset.prefetch(buffer_size=1)
return dataset
def create_input_layers():
Creates dictionary of input layers for each feature.
Returns:
Dictionary of `tf.Keras.layers.Input` layers for each feature.
deep_inputs = {
colname: tf.keras.layers.Input(
name=colname, shape=(), dtype="float32")
for colname in ["mother_age", "gestation_weeks"]
}
wide_inputs = {
colname: tf.keras.layers.Input(
name=colname, shape=(), dtype="string")
for colname in ["is_male", "plurality"]
}
inputs = {**wide_inputs, **deep_inputs}
return inputs
def categorical_fc(name, values):
Helper function to wrap categorical feature by indicator column.
Args:
name: str, name of feature.
values: list, list of strings of categorical values.
Returns:
Categorical and indicator column of categorical feature.
cat_column = tf.feature_column.categorical_column_with_vocabulary_list(
key=name, vocabulary_list=values)
ind_column = tf.feature_column.indicator_column(
categorical_column=cat_column)
return cat_column, ind_column
def create_feature_columns(nembeds):
Creates wide and deep dictionaries of feature columns from inputs.
Args:
nembeds: int, number of dimensions to embed categorical column down to.
Returns:
Wide and deep dictionaries of feature columns.
deep_fc = {
colname: tf.feature_column.numeric_column(key=colname)
for colname in ["mother_age", "gestation_weeks"]
}
wide_fc = {}
is_male, wide_fc["is_male"] = categorical_fc(
"is_male", ["True", "False", "Unknown"])
plurality, wide_fc["plurality"] = categorical_fc(
"plurality", ["Single(1)", "Twins(2)", "Triplets(3)",
"Quadruplets(4)", "Quintuplets(5)", "Multiple(2+)"])
# Bucketize the float fields. This makes them wide
age_buckets = tf.feature_column.bucketized_column(
source_column=deep_fc["mother_age"],
boundaries=np.arange(15, 45, 1).tolist())
wide_fc["age_buckets"] = tf.feature_column.indicator_column(
categorical_column=age_buckets)
gestation_buckets = tf.feature_column.bucketized_column(
source_column=deep_fc["gestation_weeks"],
boundaries=np.arange(17, 47, 1).tolist())
wide_fc["gestation_buckets"] = tf.feature_column.indicator_column(
categorical_column=gestation_buckets)
# Cross all the wide columns, have to do the crossing before we one-hot
crossed = tf.feature_column.crossed_column(
keys=[age_buckets, gestation_buckets],
hash_bucket_size=1000)
deep_fc["crossed_embeds"] = tf.feature_column.embedding_column(
categorical_column=crossed, dimension=nembeds)
return wide_fc, deep_fc
def get_model_outputs(wide_inputs, deep_inputs, dnn_hidden_units):
Creates model architecture and returns outputs.
Args:
wide_inputs: Dense tensor used as inputs to wide side of model.
deep_inputs: Dense tensor used as inputs to deep side of model.
dnn_hidden_units: List of integers where length is number of hidden
layers and ith element is the number of neurons at ith layer.
Returns:
Dense tensor output from the model.
# Hidden layers for the deep side
layers = [int(x) for x in dnn_hidden_units.split()]
deep = deep_inputs
for layerno, numnodes in enumerate(layers):
deep = tf.keras.layers.Dense(
units=numnodes,
activation="relu",
name="dnn_{}".format(layerno+1))(deep)
deep_out = deep
# Linear model for the wide side
wide_out = tf.keras.layers.Dense(
units=10, activation="relu", name="linear")(wide_inputs)
# Concatenate the two sides
both = tf.keras.layers.concatenate(
inputs=[deep_out, wide_out], name="both")
# Final output is a linear activation because this is regression
output = tf.keras.layers.Dense(
units=1, activation="linear", name="weight")(both)
return output
def rmse(y_true, y_pred):
Calculates RMSE evaluation metric.
Args:
y_true: tensor, true labels.
y_pred: tensor, predicted labels.
Returns:
Tensor with value of RMSE between true and predicted labels.
return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))
def build_wide_deep_model(dnn_hidden_units=[64, 32], nembeds=3):
Builds wide and deep model using Keras Functional API.
Returns:
`tf.keras.models.Model` object.
# Create input layers
inputs = create_input_layers()
# Create feature columns for both wide and deep
wide_fc, deep_fc = create_feature_columns(nembeds)
# The constructor for DenseFeatures takes a list of numeric columns
# The Functional API in Keras requires: LayerConstructor()(inputs)
wide_inputs = tf.keras.layers.DenseFeatures(
feature_columns=wide_fc.values(), name="wide_inputs")(inputs)
deep_inputs = tf.keras.layers.DenseFeatures(
feature_columns=deep_fc.values(), name="deep_inputs")(inputs)
# Get output of model given inputs
output = get_model_outputs(wide_inputs, deep_inputs, dnn_hidden_units)
# Build model and compile it all together
model = tf.keras.models.Model(inputs=inputs, outputs=output)
model.compile(optimizer="adam", loss="mse", metrics=[rmse, "mse"])
return model
# Instantiate the HyperTune reporting object
hpt = hypertune.HyperTune()
# Reporting callback
class HPTCallback(tf.keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs=None):
global hpt
hpt.report_hyperparameter_tuning_metric(
hyperparameter_metric_tag='val_rmse',
metric_value=logs['val_rmse'],
global_step=epoch)
def train_and_evaluate(args):
model = build_wide_deep_model(args["nnsize"], args["nembeds"])
print("Here is our Wide-and-Deep architecture so far:\n")
print(model.summary())
trainds = load_dataset(
args["train_data_path"],
args["batch_size"],
tf.estimator.ModeKeys.TRAIN)
evalds = load_dataset(
args["eval_data_path"], 1000, tf.estimator.ModeKeys.EVAL)
if args["eval_steps"]:
evalds = evalds.take(count=args["eval_steps"])
num_batches = args["batch_size"] * args["num_epochs"]
steps_per_epoch = args["train_examples"] // num_batches
checkpoint_path = os.path.join(args["output_dir"], "checkpoints/babyweight")
cp_callback = tf.keras.callbacks.ModelCheckpoint(
filepath=checkpoint_path, verbose=1, save_weights_only=True)
history = model.fit(
trainds,
validation_data=evalds,
epochs=args["num_epochs"],
steps_per_epoch=steps_per_epoch,
verbose=2, # 0=silent, 1=progress bar, 2=one line per epoch
callbacks=[cp_callback, HPTCallback()])
EXPORT_PATH = os.path.join(
args["output_dir"], datetime.datetime.now().strftime("%Y%m%d%H%M%S"))
tf.saved_model.save(
obj=model, export_dir=EXPORT_PATH) # with default serving function
print("Exported trained model to {}".format(EXPORT_PATH))
Explanation: In the same way we can write to the file model.py the model that we developed in the previous notebooks.
Create trainer module's model.py to hold Keras model code.
To create our model.py, we'll use the code we wrote for the Wide & Deep model. Look back at your 9_keras_wide_and_deep_babyweight notebook and copy/paste the necessary code from that notebook into its place in the cell below.
End of explanation
%%bash
OUTDIR=babyweight_trained
rm -rf ${OUTDIR}
export PYTHONPATH=${PYTHONPATH}:${PWD}/babyweight
python3 -m trainer.task \
--train_data_path=gs://${BUCKET}/babyweight/data/train*.csv \
--eval_data_path=gs://${BUCKET}/babyweight/data/eval*.csv \
--output_dir=${OUTDIR} \
--batch_size=10 \
--num_epochs=1 \
--train_examples=1 \
--eval_steps=1
Explanation: Train locally
After moving the code to a package, make sure it works as a standalone. Note, we incorporated the --train_examples flag so that we don't try to train on the entire dataset while we are developing our pipeline. Once we are sure that everything is working on a subset, we can change it so that we can train on all the data. Even for this subset, this takes about 3 minutes in which you won't see any output ...
Run trainer module package locally.
We can run a very small training job over a single file with a small batch size, 1 epoch, 1 train example, and 1 eval step.
End of explanation
%%writefile babyweight/setup.py
from setuptools import find_packages
from setuptools import setup
setup(
name='babyweight_trainer',
version='0.1',
packages=find_packages(),
include_package_data=True,
description='Babyweight model training application.'
)
%%bash
cd babyweight
python ./setup.py sdist --formats=gztar
cd ..
Explanation: Training on Vertex AI
Now that we see everything is working locally, it's time to train on the cloud! First, we need to package our code as a source distribution. For this, we can use setuptools.
End of explanation
%%bash
gsutil cp babyweight/dist/babyweight_trainer-0.1.tar.gz gs://${BUCKET}/babyweight/
Explanation: We will store our package in the Cloud Storage bucket.
End of explanation
%%bash
TIMESTAMP=$(date -u +%Y%m%d_%H%M%S)
OUTDIR=gs://${BUCKET}/babyweight/trained_model_$TIMESTAMP
JOB_NAME=babyweight_$TIMESTAMP
PYTHON_PACKAGE_URI=gs://${BUCKET}/babyweight/babyweight_trainer-0.1.tar.gz
PYTHON_PACKAGE_EXECUTOR_IMAGE_URI="us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-3:latest"
PYTHON_MODULE=trainer.task
echo > ./config.yaml "workerPoolSpecs:
machineSpec:
machineType: n1-standard-4
replicaCount: 1
pythonPackageSpec:
executorImageUri: $PYTHON_PACKAGE_EXECUTOR_IMAGE_URI
packageUris: $PYTHON_PACKAGE_URI
pythonModule: $PYTHON_MODULE
args:
- --train_data_path=gs://${BUCKET}/babyweight/data/train*.csv
- --eval_data_path=gs://${BUCKET}/babyweight/data/eval*.csv
- --output_dir=$OUTDIR
- --num_epochs=10
- --train_examples=10000
- --eval_steps=100
- --batch_size=32
- --nembeds=8"
gcloud ai custom-jobs create \
--region=${REGION} \
--display-name=$JOB_NAME \
--config=config.yaml
Explanation: To submit to the Cloud we use gcloud custom-jobs create and simply specify some additional parameters for the Vertex AI Training Service:
- display-name: A unique identifier for the Cloud job. We usually append system time to ensure uniqueness
- region: Cloud region to train in. See here for supported Vertex AI Training Service regions
You might have earlier seen gcloud ai custom-jobs create executed with the worker pool spec and pass-through Python arguments specified directly in the command call, here we will use a YAML file, this will make it easier to transition to hyperparameter tuning.
Through the args: argument we add in the passed-through arguments for our task.py file.
End of explanation
%%bash
TIMESTAMP=$(date -u +%Y%m%d_%H%M%S)
BASE_OUTPUT_DIR=gs://${BUCKET}/babyweight/hp_tuning_$TIMESTAMP
JOB_NAME=babyweight_hpt_$TIMESTAMP
PYTHON_PACKAGE_URI=gs://${BUCKET}/babyweight/babyweight_trainer-0.1.tar.gz
PYTHON_PACKAGE_EXECUTOR_IMAGE_URI="us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-3:latest"
PYTHON_MODULE=trainer.task
echo > ./hyperparam.yaml "displayName: $JOB_NAME
studySpec:
metrics:
- metricId: val_rmse
goal: MINIMIZE
parameters:
- parameterId: batch_size
integerValueSpec:
minValue: 8
maxValue: 512
scaleType: UNIT_LOG_SCALE
- parameterId: nembeds
integerValueSpec:
minValue: 3
maxValue: 30
scaleType: UNIT_LINEAR_SCALE
algorithm: ALGORITHM_UNSPECIFIED # results in Bayesian optimization
trialJobSpec:
baseOutputDirectory:
outputUriPrefix: $BASE_OUTPUT_DIR
workerPoolSpecs:
- machineSpec:
machineType: n1-standard-8
pythonPackageSpec:
executorImageUri: $PYTHON_PACKAGE_EXECUTOR_IMAGE_URI
packageUris:
- $PYTHON_PACKAGE_URI
pythonModule: $PYTHON_MODULE
args:
- --train_data_path=gs://${BUCKET}/babyweight/data/train*.csv
- --eval_data_path=gs://${BUCKET}/babyweight/data/eval*.csv
- --num_epochs=10
- --train_examples=5000
- --eval_steps=100
- --batch_size=32
- --nembeds=8
replicaCount: 1"
gcloud beta ai hp-tuning-jobs create \
--region=$REGION \
--display-name=$JOB_NAME \
--config=hyperparam.yaml \
--max-trial-count=20 \
--parallel-trial-count=5
Explanation: The training job should complete within 10 to 15 minutes. You will need a trained model to complete our next lab.
Hyperparameter tuning
To do hyperparameter tuning, create a YAML file and pass its name with --config.
This step could take <b>hours</b> -- you can increase --parallel-trial-count or reduce --max-trial-count to get it done faster. Since --parallel-trial-count is the number of initial seeds to start searching from, you don't want it to be too large; otherwise, all you have is a random search.
End of explanation
%%bash
TIMESTAMP=$(date -u +%Y%m%d_%H%M%S)
OUTDIR=gs://${BUCKET}/babyweight/tuned_$TIMESTAMP
JOB_NAME=babyweight_tuned_$TIMESTAMP
PYTHON_PACKAGE_URI=gs://${BUCKET}/babyweight/babyweight_trainer-0.1.tar.gz
PYTHON_PACKAGE_EXECUTOR_IMAGE_URI="us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-3:latest"
PYTHON_MODULE=trainer.task
echo > ./tuned_config.yaml "workerPoolSpecs:
machineSpec:
machineType: n1-standard-8
replicaCount: 1
pythonPackageSpec:
executorImageUri: $PYTHON_PACKAGE_EXECUTOR_IMAGE_URI
packageUris: $PYTHON_PACKAGE_URI
pythonModule: $PYTHON_MODULE
args:
- --train_data_path=gs://${BUCKET}/babyweight/data/train*.csv
- --eval_data_path=gs://${BUCKET}/babyweight/data/eval*.csv
- --output_dir=$OUTDIR
- --num_epochs=10
- --train_examples=20000
- --eval_steps=100
- --batch_size=32
- --nembeds=8"
gcloud ai custom-jobs create \
--region=${REGION} \
--display-name=$JOB_NAME \
--config=tuned_config.yaml
Explanation: Repeat training
This time with tuned parameters for batch_size and nembeds. Note that your best results may differ from below. So be sure to fill yours in!
End of explanation |
333 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Gradient-Boosting-Machine-(GBM)" data-toc-modified-id="Gradient-Boosting-Machine-(GBM)-1"><span class="toc-item-num">1 </span>Gradient Boosting Machine (GBM)</a></span><ul class="toc-item"><li><span><a href="#Implementation" data-toc-modified-id="Implementation-1.1"><span class="toc-item-num">1.1 </span>Implementation</a></span></li><li><span><a href="#Classification" data-toc-modified-id="Classification-1.2"><span class="toc-item-num">1.2 </span>Classification</a></span><ul class="toc-item"><li><span><a href="#Softmax" data-toc-modified-id="Softmax-1.2.1"><span class="toc-item-num">1.2.1 </span>Softmax</a></span></li></ul></li><li><span><a href="#Implementation" data-toc-modified-id="Implementation-1.3"><span class="toc-item-num">1.3 </span>Implementation</a></span></li><li><span><a href="#Understanding-Model-Complexity" data-toc-modified-id="Understanding-Model-Complexity-1.4"><span class="toc-item-num">1.4 </span>Understanding Model Complexity</a></span></li></ul></li><li><span><a href="#Reference" data-toc-modified-id="Reference-2"><span class="toc-item-num">2 </span>Reference</a></span></li></ul></div>
Step2: Gradient Boosting Machine (GBM)
Just like Random Forest and Extra Trees, Gradient Boosting Machine is also a type of Ensemble Tree method, the only difference is it is stemmed from the the boosting framework. The idea of boosting is to add a weak classifier to the ensemble at a time, and this newly added weak classifier is trained to improve upon the already trained ensemble. Meaning it will pay higher attention on examples which are misclassified or have higher errors and focus on mitigating those errors. Boosting is a general framework can be applied to any sort of weak learner, although Decision Tree models is by far the commonly used due to the fact that they have the flexibility to be weak learners by simply restricting their depth and they are quite fast to train.
Suppose we are given some dataset $(x_1, y_1), (x_2, y_2), ...,(x_n, y_n)$, and the task is to fit a model $F(x)$ to minimize square loss. After training the model, we discovered the model is good but not perfect.
There are some mistakes
Step4: Clearly, Gradient Boosting has some similarities to Random Forests and Extra Trees
Step6: But the way the ensembles are constructed differs substantially between each model. In Random Forests and Extra Trees, all trees are created independently and each tree contributes equally to the final model. The trees in Gradient Boosting, however, are dependent on past trees and contribute unequally to the final model. Despite these differences, Random Forests, Extra Trees and Gradient Boosting all offer competitive predictive performance (Gradient Boosting often wins when carefully tuned). As for computation time, Gradient Boosting is often greater than for Random Forests, Extra Trees, since the two former models' procedure can be easily parallel processed given that their individual trees are created independently.
Classification
Gradient Boosting Machine can also be extended to handle classification tasks, as we'll soon see, even in the classification context, the underlying algorithm is still a regression tree. To adapt the algorithm to a classification process, we start by defining a new loss function, cross entropy (also known as multinomial deviance), denoted as
Step10: Next, we wish to compute the derivative of this function with respect to the input $o_i$ so we can use it later when computing the derivative of the loss function. To be explicit we wish to find
Step14: Understanding Model Complexity
In the following section, we generate a sine function plus random Gaussian noise, with 80 training samples (blue points) and 20 test samples (red points).
Step15: Recall that in a single regression tree, we can use the max_depth parameter to control how deep to grow the tree and the deeper the tree the more variance can be explained.
Step16: The plot above shows that the decision boundaries made by decision trees are always perpendicular to the $x$ and $y$ axes (due to the fact that they consist of nested if-else statements). Let's see what happens when we use gradient boosting without tuning the parameters (by specifying a fixed max_depth).
Step17: Hopefully, it should be clear that compared with decision trees, gradient boosting machine is far more susceptible to overfitting the training data, hence it is common to tune parameters including max_depth, max_features, min_samples_leaf, subsample (explained below) to reduce the overfitting phenomenon from occurring.
The parameter subsample (technically called stochastic gradient boosting) borrows some idea from bagging techniques. What it does is: instead of fitting every tree on the full training data, each boosting iteration fits its tree on a random fraction (subsample) of the rows drawn without replacement, which speeds up training and tends to reduce overfitting. | Python Code:
# code for loading the format for the notebook
import os
# path : store the current path to convert back to it later
path = os.getcwd()
os.chdir(os.path.join('..', '..', 'notebook_format'))
from formats import load_style
load_style(css_style = 'custom2.css', plot_style = False)
os.chdir(path)
# 1. magic for inline plot
# 2. magic to print version
# 3. magic so that the notebook will reload external python modules
# 4. magic to enable retina (high resolution) plots
# https://gist.github.com/minrk/3301035
%matplotlib inline
%load_ext watermark
%load_ext autoreload
%autoreload 2
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.metrics import mean_squared_error
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
%watermark -d -t -v -p numpy,pandas,matplotlib,sklearn
Explanation: <h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Gradient-Boosting-Machine-(GBM)" data-toc-modified-id="Gradient-Boosting-Machine-(GBM)-1"><span class="toc-item-num">1 </span>Gradient Boosting Machine (GBM)</a></span><ul class="toc-item"><li><span><a href="#Implementation" data-toc-modified-id="Implementation-1.1"><span class="toc-item-num">1.1 </span>Implementation</a></span></li><li><span><a href="#Classification" data-toc-modified-id="Classification-1.2"><span class="toc-item-num">1.2 </span>Classification</a></span><ul class="toc-item"><li><span><a href="#Softmax" data-toc-modified-id="Softmax-1.2.1"><span class="toc-item-num">1.2.1 </span>Softmax</a></span></li></ul></li><li><span><a href="#Implementation" data-toc-modified-id="Implementation-1.3"><span class="toc-item-num">1.3 </span>Implementation</a></span></li><li><span><a href="#Understanding-Model-Complexity" data-toc-modified-id="Understanding-Model-Complexity-1.4"><span class="toc-item-num">1.4 </span>Understanding Model Complexity</a></span></li></ul></li><li><span><a href="#Reference" data-toc-modified-id="Reference-2"><span class="toc-item-num">2 </span>Reference</a></span></li></ul></div>
End of explanation
# read in the data and shuffle the row order for model stability
np.random.seed(4321)
wine_path = os.path.join('..', 'winequality-white.csv')
wine = pd.read_csv(wine_path, sep = ';')
wine = wine.sample(frac = 1)
# train/test split the features and response column
y = wine['quality'].values
X = wine.drop('quality', axis = 1).values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 1234)
print('dimension of the dataset: ', wine.shape)
wine.head()
class GBMReg:
Regression gradient boosting machine using scikit learn's
decision tree as the base tree
Parameters
----------
n_estimators: int
number of trees to train
learning_rate: float
learning rate, some calls it shrinkage,
shrinks the contribution of each tree
to prevent overfitting
max_depth: int
controls how deep to grow the tree;
this is more of a decision tree parameter,
it is tune here to make later comparison fair
all the other parameters for a decision tree like
max_features or min_sample_split also applies to GBM,
it is just not used here as that is more
related to a single decision tree
def __init__(self, n_estimators, learning_rate, max_depth):
self.max_depth = max_depth
self.n_estimators = n_estimators
self.learning_rate = learning_rate
def fit(self, X, y):
self.estimators = []
# simply use the response as the original residuals
# and covert it to float type to prevent error warning
# that it's converting from int to float
residual = y.astype(np.float)
for i in range(self.n_estimators):
tree = DecisionTreeRegressor(max_depth = self.max_depth)
tree.fit(X, residual)
y_pred = tree.predict(X)
self.estimators.append(tree)
residual -= self.learning_rate * y_pred
return self
def predict(self, X):
y_pred = np.zeros(X.shape[0])
for tree in self.estimators:
y_pred += self.learning_rate * tree.predict(X)
return y_pred
# compare the results between a single decision tree,
# gradient boosting, the lower the mean square
# error, the better
tree = DecisionTreeRegressor(max_depth = 6)
tree.fit(X_train, y_train)
tree_y_pred = tree.predict(X_test)
print('tree: ', mean_squared_error(y_test, tree_y_pred))
# library to confirm result
gbm_reg = GBMReg(n_estimators = 100, learning_rate = 0.1, max_depth = 6)
gbm_reg.fit(X_train, y_train)
gbm_reg_y_pred = gbm_reg.predict(X_test)
print('gbm: ', mean_squared_error(y_test, gbm_reg_y_pred))
# gradient boosting for 100 trees and learning rate of 0.1
gbm = GradientBoostingRegressor(n_estimators = 100, learning_rate = 0.1, max_depth = 6)
gbm.fit(X_train, y_train)
gbm_y_pred = gbm.predict(X_test)
print('gbm library: ', mean_squared_error(y_test, gbm_y_pred))
Explanation: Gradient Boosting Machine (GBM)
Just like Random Forest and Extra Trees, Gradient Boosting Machine is also a type of Ensemble Tree method, the only difference is it is stemmed from the the boosting framework. The idea of boosting is to add a weak classifier to the ensemble at a time, and this newly added weak classifier is trained to improve upon the already trained ensemble. Meaning it will pay higher attention on examples which are misclassified or have higher errors and focus on mitigating those errors. Boosting is a general framework can be applied to any sort of weak learner, although Decision Tree models is by far the commonly used due to the fact that they have the flexibility to be weak learners by simply restricting their depth and they are quite fast to train.
Suppose we are given some dataset $(x_1, y_1), (x_2, y_2), ...,(x_n, y_n)$, and the task is to fit a model $F(x)$ to minimize square loss. After training the model, we discovered the model is good but not perfect.
There are some mistakes: $F(x_1) = 0.8$, while $y_1 = 0.9$, and $F(x_2) = 1.4$ while $y_2 = 1.3$ .... Now the question is, how can we improve this model without changing anything from $F(x)$?
How about we simply add an additional model (e.g. regression tree) $h$ to the already existing $F$, so the new prediction becomes $F(x) + h(x)$. In other words, we wish to improve upon the existing model so that $F(x_1) + h(x_1) = y_1, F(x_2) + h(x_2) = y_2 ...$ or equivalent we wish to find a new model $h$ such that $h(x_1) = y_1 - F(x_1), h(x_2) = y_2 - F(x_2) ...$. The idea is all well and good, but the bad news is probably no model $h$ (e.g. regression tree) will be able to do this perfectly. Fortunately, the good news is, some $h$ might be able to do this approximately.
The idea is, we fit the model $h$ to the data using $y_1 - F(x_1), y_2 - F(x_2)$ as the response variable. And the intuition for this is: the $y_i - F(x_i)$s are the residuals. These are the areas that the existing
model $F$ cannot do well, so now the role of $h$ is to compensate the shortcoming of existing model $F$. And if the model after adding the new model $h$, $F + h$ is still unsatisfactory, we will just add another new one.
To make sure we're actually learning the residuals, we'll employ the idea of gradient descent. Say our goal is to minimize $J$, an overall loss function additively calculated from all observations with regard to $F$, a classifier with some parameters. More formally, we're given the formula:
$$J(y, F) = \sum_i^n L\big(y_i, F(x_i)\big)$$
Where:
$L$ is a cost/loss function comparing the response variable's value and the prediction of the model for each observation
Instead of trying to solve it directly, gradient descent is an iterative technique that allows us to approach the solution of an optimization problem. At each step of the algorithm, it will perform the following operations:
$$F_b(x_i) = F_{b-1}(x_i) - \eta \times \nabla L\big(y_i, F(x_i)\big)$$
Where:
$F_b$ is the version of classifier at step/iteration $b$
$\eta$ is the learning rate which controls the size of the learning process
$\nabla$ is the gradient i.e. the first order partial derivative of the cost function with respect to the classifier
The formula above actually refers to stochastic gradient descent as we are only computing the function for a single observation, $x_i$
For example, say we're given, sum of squares errors, a well-known quality indicator for regression model as our loss function. So now our loss function $L\big(y_i, F(x_i)\big)$ is defined as: $\frac{1}{2} \big( y_i - F(x_i) \big)^2$ (the 1/2 is simply to make the notation cleaner later). Taking the gradient of this loss function we get:
$$\frac{ \partial L\big(y_i, F(x_i)\big) }{ \partial F(x_i) } = \frac{ \partial \frac{1}{2} \big( y_i - F(x_i) \big)^2 }{ \partial F(x_i) } = F(x_i) - y_i$$
Tying this back to our original problem, we wish to update our function $F$ at iteration $b$ with a new model $h$:
\begin{align}
F_b(x_i) &= F_{b-1}(x_i) + h(x_i) \nonumber \
&= F_{b-1}(x_i) + y_i - F_{b-1}(x_i) \nonumber \
&= F_{b-1}(x_i) - 1 \times \frac{ \partial L\big(y_i, F_{b-1}(x_i)\big) }{ \partial F_{b-1}(x_i) }
\nonumber \
\end{align}
As we can see, the formula above is 99% the same as as the gradient descent formula, $F_b(x_i) = F_{b-1}(x_i) - \eta \times \nabla L\big(y_i, F(x_i)\big)$. The only difference is that the learning rate $\eta$ is 1. Thus, we now have an iterative process constructing the additive model that minimizes our loss function (residuals).
In practice though, Gradient Boosting Machine is more prone to overfitting, since the week learner is tasked with optimally fitting the gradient. This means that boosting will select the optimal learner at each stage of the algorithm, although this strategy generates an optimal solution at the current stage, it has the drawbacks of not finding the optimal global model as well as overfitting the training data. A remedy for greediness is to constrain the learning process by setting the learning rate $\eta$ (also known as shrinkage). In the above algorithm, instead of directly adding the predicted value for a sample to next iteration's predicted value, so that only a fraction of the current predicted value is added to the previous iteration's predicted value. This parameter can take values between 0 and 1 and becomes another tuning parameter for the model. Small values of the learning parameter such as 0.1 tends to work better, but the value of the parameter is inversely proportional to the computation time required to find an optimal model, because more iterations is required.
To sum it all up, the process of training a GBM for regression is:
Initialize a predicted value for each observation (e.g. the original response or the average response or a value that minimizes the loss function). This will be our initial "residuals", $r$. It can be called the residuals because we're dealing with a regression task, but this quantity is more often referred to as the negative gradient, this terminology makes the $- \nabla \times L\big(y_i, F(x_i) \big)$ part generalizes to any loss function we might wish to employ. In short, GBM is fitting to the gradient of the loss function
For step = 1 to $B$ (number of iterations that we specify) do:
Fit a regression tree $F_b$ to the training data $(X, r)$, where we use the residuals as the response variable
Update model $F$ by adding a shrunken version of the newly fitted regression tree. Translating it to code, this means we append the new tree to the array of trees we've already stored:
$F(X) = F(X) + \eta F_{b}(X)$
Update each observation's residual by adding the predicted value to it:
$r_{b + 1} = r_b - \eta F_b(X)$
In the end, our final output boosted model becomes $F(x) = \sum_{b = 1}^B \eta F_b(x)$, where we sum the values that each individual tree gives (times the learning rate)
To hit the notion home, let's conside an example using made up numbers. Suppose we have 5 observations, with responses 10, 20, 30, 40, 50. The first tree is built and gives predictions of 12, 18, 27, 39, 54 (these predictions are made up numbers). If our learning rate $\eta$ = 0.1, all trees will have their predictions scaled down by $\eta$, so the first tree will instead "predict" 1.2, 1.8, 2.7, 3.9, 5.4. The response variable passed to the next tree will then have values 8.8, 18.2, 27.3, 36.1, 44.6 (the difference between the prediction that was scaled down by the prediction and the true response). The second round then uses these response values to build another tree - and again the predictions are scaled down by the learning rate $\eta$. So tree 2 predicts say, 7, 18, 25, 40, 40, which, once scaled, become 0.7, 1.8, 2.5, 4.0, 4.0. As before, the third tree will be passed the difference between these values and the previous tree's response variable (so 8.1, 16.4, 24.8, 32.1. 40.6). And we keep iterating this process until we finished training all the trees (a parameter that we specify), in the end, the sum of the predictions from all trees will give the final prediction.
Implementation
Here, we will use the Wine Quality Data Set to test our implementation. This link should download the .csv file. The task is to predict the quality of the wine (a scale of 1 ~ 10) given some of its features.
End of explanation
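The made-up numbers in the example above can be traced in a few lines (added for illustration; all values come straight from the text):
y_demo = np.array([10., 20., 30., 40., 50.])      # responses
tree1_pred = np.array([12., 18., 27., 39., 54.])  # first tree's (made up) predictions
eta = 0.1
print(y_demo - eta * tree1_pred)                  # [ 8.8 18.2 27.3 36.1 44.6] -> the response passed to the second tree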
def viz_importance(model, feature_names, n_features):
Visualize the relative importance of predictors
# sort the importance in decreasing order
importances = model.feature_importances_
idx = np.argsort(importances)[-n_features:]
names = feature_names[idx]
scores = importances[idx]
y_pos = np.arange(1, n_features + 1)
plt.barh(y_pos, scores, color = 'lightskyblue', align = 'center')
plt.yticks(y_pos, names)
plt.xlabel('Importance')
plt.title('Feature Importance Plot')
# change default figure and font size
plt.rcParams['figure.figsize'] = 8, 6
plt.rcParams['font.size'] = 12
viz_importance(gbm, wine.columns[:-1], X.shape[1])
Explanation: Clearly, Gradient Boosting has some similarities to Random Forests and Extra Trees: the final prediction is based on an ensemble of models, and trees are used as the base learner, so all the tuning parameters for the tree model also controls the variability of Gradient Boosting. And for interpretability we can also access the feature importance attribute.
End of explanation
def compute_softmax(x):
compute the softmax of vector
exp_x = np.exp(x)
softmax = exp_x / np.sum(exp_x)
return softmax
# this can be interpreted as the probability
# of belonging to the three classes
compute_softmax([1, 2, 3])
Explanation: But the way the ensembles are constructed differs substantially between each model. In Random Forests and Extra Trees, all trees are created independently and each tree contributes equally to the final model. The trees in Gradient Boosting, however, are dependent on past trees and contribute unequally to the final model. Despite these differences, Random Forests, Extra Trees and Gradient Boosting all offer competitive predictive performance (Gradient Boosting often wins when carefully tuned). As for computation time, Gradient Boosting is often greater than for Random Forests, Extra Trees, since the two former models' procedure can be easily parallel processed given that their individual trees are created independently.
Classification
Gradient Boosting Machine can also be extended to handle classification tasks, as we'll soon see, even in the classification context, the underlying algorithm is still a regression tree. To adapt the algorithm to a classification process, we start by defining a new loss function, cross entropy (also known as multinomial deviance), denoted as:
$$L\big(y_i, F(x_i)\big) = -\sum_k ^ K y_k(x_i) \log p_k(x_i)$$
The notation above says:
We have a total of $K$ output class (categorical response variable) that ranges from $1, ..., K$
$y_k(x_i)$ is a dummy indicator of the response variable that takes the value of 1 if the $i_{th}$ observation belongs to class $k$ and 0 otherwise
$p_k(x_i)$ is the predicted probability of the $i_{th}$ observation belonging to class $k$
So the next question is how do we get $p_k(x_i)$?
Softmax
Softmax function takes an $N$-dimensional vector of arbitrary real values and produces another $N$-dimensional vector with real values in the range (0, 1) that add up to 1. The function's formula can be written as:
$$p_i = \frac{e^{o_i}}{\sum_k^K e^{o_k}}$$
For example, in the following code chunk, we see that how the softmax function transforms a 3-element vector 1.0, 2.0, 3.0 into probabilities that sums up to 1, while still preserving the relative size of the original elements.
End of explanation
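A tiny worked example of the loss above (added for illustration): if the true class of an observation is the third one and the raw scores are [1, 2, 3],
y_onehot = np.array([0., 0., 1.])
p = compute_softmax([1, 2, 3])           # ~[0.09, 0.245, 0.665]
print(-np.sum(y_onehot * np.log(p)))     # cross entropy ~ 0.408, i.e. -log of the probability assigned to the true class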
from sklearn.metrics import accuracy_score
from sklearn.preprocessing import LabelEncoder
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import GradientBoostingClassifier
class GBMClass:
Classification gradient boosting machine using scikit learn's
decision tree as the base tree
Parameters
----------
n_estimators: int
number of trees to train
learning_rate: float
learning rate, some calls it shrinkage,
shrinks the contribution of each tree
to prevent overfitting
max_depth: int
controls how deep to grow the tree;
this is more of a decision tree parameter,
it is tune here to make later comparison fair
all the other parameters for a decision tree like
max_features or min_sample_split also applies to GBM,
it is just not used here as that is more
related to a single decision tree
def __init__(self, n_estimators, learning_rate, max_depth):
self.max_depth = max_depth
self.n_estimators = n_estimators
self.learning_rate = learning_rate
def fit(self, X, y):
# encode labels with value between 0 and n_classes - 1,
# so we can easily one-hot encode them
self.le = LabelEncoder()
labels = self.le.fit_transform(y)
Y = self._to_categorical(labels)
del labels
# the predicted probability starts out with
# a value that's uniform over all classes;
# then we compute the residuals (negative gradient),
# which is the difference between the predicted
# probability and the class label
y_proba = np.full(Y.shape, 1 / Y.shape[1])
residuals = Y - y_proba
# train a base decision tree on the residuals
# for every single class, hence we end up with
# n_estimators * n_classes base tree models
self.estimators = []
for i in range(self.n_estimators):
for j in range(self.n_classes):
tree = DecisionTreeRegressor(max_depth = self.max_depth)
tree.fit(X, residuals[:, j])
y_pred = tree.predict(X)
self.estimators.append(tree)
residuals[:, j] -= self.learning_rate * y_pred
return self
def _to_categorical(self, y):
one hot encode class vector y
self.n_classes = np.amax(y) + 1
Y = np.zeros((y.shape[0], self.n_classes))
for i in range(y.shape[0]):
Y[i, y[i]] = 1.0
return Y
def predict(self, X):
# after predicting the class remember to
# transform it back to the actual class label
y_prob = self.predict_proba(X)
y_pred = np.argmax(y_prob, axis = 1)
y_pred = self.le.inverse_transform(y_pred)
return y_pred
def predict_proba(self, X):
# add up raw score for every class and convert
# it to probability using softmax
y_raw = np.zeros((X.shape[0], self.n_classes))
# obtain the tree for each class and add up the prediction
for c in range(self.n_classes):
class_tree = self.estimators[c::self.n_classes]
for tree in class_tree:
y_raw[:, c] += self.learning_rate * tree.predict(X)
y_proba = self._compute_softmax(y_raw)
return y_proba
def _compute_softmax(self, z):
        """
        compute the softmax of matrix z in a numerically stable way,
        by subtracting from each row the max of that row. For more
        information refer to the following link:
        https://nolanbconaway.github.io/blog/2017/softmax-numpy
        """
shift_z = z - np.amax(z, axis = 1, keepdims = 1)
exp_z = np.exp(shift_z)
softmax = exp_z / np.sum(exp_z, axis = 1, keepdims = 1)
return softmax
# compare the results between a single decision tree,
# gradient boosting, the higher the accuracy, the better
tree = DecisionTreeClassifier(max_depth = 6)
tree.fit(X_train, y_train)
tree_y_pred = tree.predict(X_test)
print('tree: ', accuracy_score(y_test, tree_y_pred))
# gradient boosting for 150 trees and learning rate of 0.2
# unlike random forest, gradient boosting's base tree can be shallower
# meaning that their depth can be smaller
gbm_class = GBMClass(n_estimators = 150, learning_rate = 0.2, max_depth = 3)
gbm_class.fit(X_train, y_train)
gbm_class_y_pred = gbm_class.predict(X_test)
print('gbm: ', accuracy_score(y_test, gbm_class_y_pred))
# library to confirm results are comparable
gbm = GradientBoostingClassifier(n_estimators = 150, learning_rate = 0.2, max_depth = 3)
gbm.fit(X_train, y_train)
gbm_y_pred = gbm.predict(X_test)
print('gbm library: ', accuracy_score(y_test, gbm_y_pred))
Explanation: Next, we wish to compute the derivative of this function with respect to its inputs so we can use it later when computing the derivative of the loss function. To be explicit, we wish to find:
$$\frac{\partial p_i}{\partial o_j} = \frac{\partial \frac{e^{o_i}}{\sum_{k=1}^{K}e^{o_k}}}{\partial o_j}$$
For any arbitrary output $i$ and input $j$. To do so, we'll be using the quotient rule of derivatives. The rule tells us that for a function $f(x) = \frac{g(x)}{h(x)}$:
$$f'(x) = \frac{g'(x)h(x) - h'(x)g(x)}{[h(x)]^2}$$
In our case, we have:
$$
\begin{align}
g &= e^{o_i} \nonumber \\
h &= \sum_{k=1}^{K}e^{o_k} \nonumber
\end{align}
$$
It's important to notice that no matter which $o_j$ we compute the derivative of $h$ for, the output will always be $e^{o_j}$. However, this is not the case for $g$. Its derivative will be $e^{o_j}$ only if $i = j$, because only then will it have the term $e^{o_j}$. Otherwise, the derivative is simply 0 (because we are simply taking the derivative of a constant).
So going back to using our quotient rule, we start with the $i = j$ case. In the following derivation we'll use the $\Sigma$ (Sigma) sign to represent $\sum_{k=1}^{K}e^{o_k}$ for simplicity and to prevent cluttering up the notation.
$$
\begin{align}
\frac{\partial \frac{e^{o_i}}{\sum_{k = 1}^{K} e^{o_k}}}{\partial o_j}
&= \frac{e^{o_i}\Sigma-e^{o_j}e^{o_i}}{\Sigma^2} \nonumber \\
&= \frac{e^{o_i}}{\Sigma}\frac{\Sigma - e^{o_j}}{\Sigma} \nonumber \\
&= p_i(1 - p_j) \nonumber \\
&= p_i(1 - p_i) \nonumber
\end{align}
$$
The reason we can perform the operation in the last line is because we're considering the scenario where $i = j$. Similarly we can do the case where $i \neq j$.
$$
\begin{align}
\frac{\partial \frac{e^{o_i}}{\sum_{k = 1}^{K} e^{o_k}}}{\partial o_j}
&= \frac{0-e^{o_j}e^{o_i}}{\Sigma^2} \nonumber \\
&= -\frac{e^{o_j}}{\Sigma}\frac{e^{o_i}}{\Sigma} \nonumber \\
&= -p_j p_i \nonumber \\
&= -p_i p_j \nonumber
\end{align}
$$
Just to sum it up, we now have:
$$\frac{\partial p_i}{\partial o_j} = p_i(1 - p_i),\quad i = j$$
$$\frac{\partial p_i}{\partial o_j} = -p_i p_j,\quad i \neq j$$
Now, we can tie this back to the original loss function $-\sum_k^K y_k \log p_k$ and compute its gradient.
$$
\begin{align}
\frac{\partial L}{\partial o_i}
&= -\sum_k y_k\frac{\partial \log p_k}{\partial o_i} \nonumber \\
&= -\sum_k y_k\frac{1}{p_k}\frac{\partial p_k}{\partial o_i} \nonumber \\
&= -y_i(1-p_i) - \sum_{k \neq i}y_k\frac{1}{p_k}(-p_kp_i) \nonumber \\
&= -y_i(1 - p_i) + \sum_{k \neq i}y_k(p_i) \nonumber \\
&= -y_i + y_i p_i + \sum_{k \neq i}y_k(p_i) \nonumber \\
&= p_i\left(\sum_ky_k\right) - y_i \nonumber \\
&= p_i - y_i \nonumber
\end{align}
$$
Remember that $\sum_k y_k = 1$, as $y$ is a one-hot vector with only one non-zero element, which is $1$ for the class the observation belongs to.
After a long journey, we now see that, for every class $k$, the negative gradient is simply the difference between the associated dummy variable and the predicted probability of belonging to that class. This is essentially the "residual" that classification gradient boosting fits each tree to. Given this, we can now implement the algorithm: the overall process of training a regression tree has not changed, only now we must deal with the dummy variables $y_k$ and fit a regression tree on the negative gradient for each dummy variable.
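As a quick numerical sanity check, we can compare the analytic gradient $p - y$ with a finite-difference estimate of the cross-entropy loss with respect to the raw scores:
import numpy as np

def softmax(z):
    exp_z = np.exp(z - np.max(z))
    return exp_z / exp_z.sum()

def cross_entropy(o, y):
    return -np.sum(y * np.log(softmax(o)))

o = np.array([0.5, -1.2, 2.0])      # raw scores for K = 3 classes
y = np.array([0.0, 1.0, 0.0])       # one-hot: the observation belongs to class 2

analytic = softmax(o) - y           # the p - y result derived above
numeric = np.zeros_like(o)
eps = 1e-6
for i in range(o.size):
    o_plus, o_minus = o.copy(), o.copy()
    o_plus[i] += eps
    o_minus[i] -= eps
    numeric[i] = (cross_entropy(o_plus, y) - cross_entropy(o_minus, y)) / (2 * eps)

print(np.allclose(analytic, numeric, atol=1e-5))  # True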
Implementation
For the dataset, we'll still use the Wine Quality Data Set that was used for the regression task, except we now treat the quality of the wine (a scale of 1 ~ 10) as categorical instead of numeric.
End of explanation
def ground_truth(x):
    """Ground truth -- function to approximate"""
return x * np.sin(x) + np.sin(2 * x)
def gen_data(low, high, n_samples):
    """generate training and testing data from the ground truth function"""
np.random.seed(15)
X = np.random.uniform(low, high, size = n_samples)
# generate the response from the ground truth function and add
# some random noise to it
y = ground_truth(X) + np.random.normal(scale = 2, size = n_samples)
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size = 0.2, random_state = 3)
return X_train, X_test, y_train, y_test
def plot_data(x_plot, X_train, X_test, y_train, y_test):
    """plot training and testing data"""
s = 20
alpha = 0.4
plt.plot(x_plot, ground_truth(x_plot), alpha = alpha, label = 'ground truth')
plt.scatter(X_train, y_train, s = s, alpha = alpha)
plt.scatter(X_test, y_test, s = s, alpha = alpha, color = 'red')
plt.xlim(( 0, 10 ))
plt.ylabel('y')
plt.xlabel('x')
plt.legend(loc = 'upper left')
plt.show()
low = 0
high = 10
x_plot = np.linspace(low, high, 500)
X_train, X_test, y_train, y_test = gen_data(low = low, high = high, n_samples = 100)
plot_data(x_plot, X_train, X_test, y_train, y_test)
Explanation: Understanding Model Complexity
In the following section, we generate a sinusoidal function plus random Gaussian noise, with 80 training samples (blue points) and 20 test samples (red points).
End of explanation
# when using scikit-learn, the training data has to be
# a 2d-array even if it only has 1 feature
tree1 = DecisionTreeRegressor(max_depth = 1)
tree1.fit(X_train[:, np.newaxis], y_train)
tree2 = DecisionTreeRegressor(max_depth = 3)
tree2.fit(X_train[:, np.newaxis], y_train)
plt.plot(x_plot, tree1.predict(x_plot[:, np.newaxis]),
label = 'RT max_depth=1', color = 'g', alpha = 0.9, linewidth = 2)
plt.plot(x_plot, tree2.predict(x_plot[:, np.newaxis]),
label = 'RT max_depth=3', color = 'g', alpha = 0.7, linewidth = 1)
plot_data(x_plot, X_train, X_test, y_train, y_test)
Explanation: Recall that in a single regression tree, we can use the max_depth parameter to control how deep to grow the tree; the deeper the tree, the more variance can be explained.
End of explanation
gbm = GradientBoostingRegressor(n_estimators = 300, max_depth = 6, learning_rate = 0.1)
gbm.fit(X_train[:, np.newaxis], y_train)
plt.plot(x_plot, gbm.predict(x_plot[:, np.newaxis]),
label = 'GBM max_depth=6', color = 'r', alpha = 0.9, linewidth = 2)
plot_data(x_plot, X_train, X_test, y_train, y_test)
Explanation: The plot above shows that the decision boundaries made by decision trees are always perpendicular to the $x$ and $y$ axes (due to the fact that they consist of nested if-else statements). Let's see what happens when we use gradient boosting without tuning the parameters (by specifying a fixed max_depth).
End of explanation
param_grid = {
'max_depth': [4, 6],
'min_samples_leaf': [3, 5, 8],
'subsample': [0.9, 1]
# 'max_features': [1.0, 0.3, 0.1] # not possible in this example (there's only 1)
}
gs_gbm = GridSearchCV(gbm, param_grid, scoring = 'neg_mean_squared_error', n_jobs = 4)
gs_gbm.fit(X_train[:, np.newaxis], y_train)
print('Best hyperparameters: %r' % gs_gbm.best_params_)
plt.plot(x_plot, gs_gbm.predict(x_plot[:, np.newaxis]),
label = 'GBM tuned', color = 'r', alpha = 0.9, linewidth = 2)
plot_data(x_plot, X_train, X_test, y_train, y_test)
Explanation: Hopefully, it should be clear that, compared with decision trees, a gradient boosting machine is far more susceptible to overfitting the training data; hence it is common to tune parameters including max_depth, max_features, min_samples_leaf, and subsample (explained below) to keep overfitting in check.
The subsample parameter (the technique is known as stochastic gradient boosting) borrows some ideas from bagging techniques. What it does is: while iterating through each individual tree-building process, it randomly selects a fraction of the training data. The residuals and models in the remaining steps of the current iteration are then based only on that sample of data. It turns out that this simple modification improves the predictive accuracy of boosting while also reducing the required computational resources (of course, this assumes you have enough observations to subsample).
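A rough sketch of how this idea could be bolted onto the hand-rolled GBMClass above (illustrative only; scikit-learn's own implementation differs): each boosting round draws a random subset of row indices and fits its per-class trees only on those rows.
import numpy as np

def subsample_indices(n_rows, subsample, rng):
    # stochastic gradient boosting: keep only a random fraction of the rows
    n_keep = max(1, int(subsample * n_rows))
    return rng.choice(n_rows, size=n_keep, replace=False)

rng = np.random.RandomState(1234)
idx = subsample_indices(n_rows=1000, subsample=0.8, rng=rng)
# inside GBMClass.fit one could then call tree.fit(X[idx], residuals[idx, j])
# while still updating the residuals and predicting on the full training set
print(idx.shape)  # (800,)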
The following section tunes the commonly tuned parameters, finds the best combination, and draws the resulting decision boundary. The resulting plot should be self-explanatory.
End of explanation |
334 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Don't forget to delete the hdmi_out and hdmi_in when finished
Mirror Filter Example
In this notebook, we will demonstrate how to use the mirror filter. We utilize Pynq’s ability to buffer HDMI signals in order to perform a filter. The mirror filter is relatively simple: the image is flipped horizontally, which mimics the reflection of a mirror.
<img src="data/mirror.jpeg"/>
In order to perform this function, we need to buffer a row of RGB values. During the first row the HDMI signals are stalled. During the rest of the frame, the previous row is displayed backwards while the current row is buffered. The delay caused by this buffering is very small and not noticeable to the human eye.
1. Download base overlay to the board
Ensure that the camera is not connected to the board. Run the following script to provide the PYNQ with its base overlay.
Step1: 2. Connect camera
Physically connect the camera to the HDMI-in port of the PYNQ. Run the following code to instruct the PYNQ to capture the video from the camera and to begin streaming video to your monitor (connected to the HDMI-out port).
Step2: 3. Program board
Run the following script to download the Mirror Filter to the PYNQ.
Step3: 4. User interface
Due to the simplicity of this filter, there is no need for a user interface.
5. Exploration
As you can see, the image has been flipped. This demonstrates that the PYNQ is able to do image processing in real time.
6. Clean up
When you are done with the mirror filter, run the following code to stop the video stream | Python Code:
from pynq.drivers.video import HDMI
from pynq import Bitstream_Part
from pynq.board import Register
from pynq import Overlay
Overlay("demo.bit").download()
Explanation: Don't forget to delete the hdmi_out and hdmi_in when finished
Mirror Filter Example
In this notebook, we will demonstrate how to use the mirror filter. We utilize Pynq’s ability to buffer HDMI signals in order to perform a filter. The mirror filter is relatively simple: the image is flipped horizontally, which mimics the reflection of a mirror.
<img src="data/mirror.jpeg"/>
In order to perform this function, we need to buffer a row of RGB values. During the first row the HDMI signals are stalled. During the rest of the frame, the previous row is displayed backwards while the current row is buffered. The delay caused by this buffering is very small and not noticeable to the human eye.
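For intuition, the same operation can be modelled in software with a single numpy slice (purely illustrative; the actual filter performs this on the live HDMI stream in programmable logic):
import numpy as np

def mirror_frame(frame):
    # reverse the pixel order within every row, i.e. flip the image horizontally;
    # the hardware filter gets the same result by writing each buffered row back
    # out in reverse order while the next row is being captured
    return frame[:, ::-1, :]

frame = np.random.randint(0, 256, size=(1080, 1920, 3), dtype=np.uint8)
mirrored = mirror_frame(frame)
assert (mirrored[:, 0, :] == frame[:, -1, :]).all()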
1. Download base overlay to the board
Ensure that the camera is not connected to the board. Run the following script to provide the PYNQ with its base overlay.
End of explanation
hdmi_in = HDMI('in')
hdmi_out = HDMI('out', frame_list=hdmi_in.frame_list)
hdmi_out.mode(3)
hdmi_out.start()
hdmi_in.start()
Explanation: 2. Connect camera
Physically connect the camera to the HDMI-in port of the PYNQ. Run the following code to instruct the PYNQ to capture the video from the camera and to begin streaming video to your monitor (connected to the HDMI-out port).
End of explanation
Bitstream_Part("mirror_p.bit").download()
import ipywidgets as widgets
from ipywidgets import Button, HBox, VBox, Label
words = ['HDMI Reset']
items = [Button(description=w) for w in words]
def on_hdmi_clicked(b):
hdmi_out.stop()
hdmi_in.stop()
hdmi_out.start()
hdmi_in.start()
items[0].on_click(on_hdmi_clicked)
widgets.VBox([items[0]])
Explanation: 3. Program board
Run the following script to download the Mirror Filter to the PYNQ.
End of explanation
hdmi_out.stop()
hdmi_in.stop()
del hdmi_out
del hdmi_in
Explanation: 4. User interface
Due to the simplicity of this filter, there is no need for a user interface.
5. Exploration
As you can see, the image has been flipped. This demonstrates that the PYNQ is able to do image processing in real time.
6. Clean up
When you are done with the mirror filter, run the following code to stop the video stream
End of explanation |
335 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Station Plot with Layout
Make a station plot, complete with sky cover and weather symbols, using a
station plot layout built into MetPy.
The station plot itself is straightforward, but there is a bit of code to perform the
data-wrangling (hopefully that situation will improve in the future). Certainly, if you have
existing point data in a format you can work with trivially, the station plot will be simple.
The StationPlotLayout class is used to standardize the plotting of various parameters
(i.e. temperature), keeping track of the location, formatting, and even the units for use in
the station plot. This makes it easy (if using standardized names) to re-use a given layout
of a station plot.
Step1: The setup
First read in the data. We use numpy.loadtxt to read in the data and use a structured
numpy.dtype to allow different types for the various columns. This allows us to handle
the columns with string data.
Step2: This sample data has way too many stations to plot all of them. Instead, we just select
a few from around the U.S. and pull those out of the data file.
Step3: Next grab the simple variables out of the data we have (attaching correct units), and
put them into a dictionary that we will hand the plotting function later
Step4: Notice that the names (the keys) in the dictionary are the same as those that the
layout is expecting.
Now perform a few conversions
Step5: All the data wrangling is finished, just need to set up plotting and go
Step6: The payoff
Step7: or instead, a custom layout can be used | Python Code:
import cartopy.crs as ccrs
import cartopy.feature as feat
import matplotlib.pyplot as plt
import numpy as np
from metpy.calc import get_wind_components
from metpy.cbook import get_test_data
from metpy.plots import simple_layout, StationPlot, StationPlotLayout
from metpy.units import units
Explanation: Station Plot with Layout
Make a station plot, complete with sky cover and weather symbols, using a
station plot layout built into MetPy.
The station plot itself is straightforward, but there is a bit of code to perform the
data-wrangling (hopefully that situation will improve in the future). Certainly, if you have
existing point data in a format you can work with trivially, the station plot will be simple.
The StationPlotLayout class is used to standardize the plotting of various parameters
(i.e. temperature), keeping track of the location, formatting, and even the units for use in
the station plot. This makes it easy (if using standardized names) to re-use a given layout
of a station plot.
End of explanation
f = get_test_data('station_data.txt')
all_data = np.loadtxt(f, skiprows=1, delimiter=',',
usecols=(1, 2, 3, 4, 5, 6, 7, 17, 18, 19),
dtype=np.dtype([('stid', '3S'), ('lat', 'f'), ('lon', 'f'),
('slp', 'f'), ('air_temperature', 'f'),
('cloud_fraction', 'f'), ('dew_point_temperature', 'f'),
('weather', '16S'),
('wind_dir', 'f'), ('wind_speed', 'f')]))
Explanation: The setup
First read in the data. We use numpy.loadtxt to read in the data and use a structured
numpy.dtype to allow different types for the various columns. This allows us to handle
the columns with string data.
End of explanation
# Get the full list of stations in the data
all_stids = [s.decode('ascii') for s in all_data['stid']]
# Pull out these specific stations
whitelist = ['OKC', 'ICT', 'GLD', 'MEM', 'BOS', 'MIA', 'MOB', 'ABQ', 'PHX', 'TTF',
'ORD', 'BIL', 'BIS', 'CPR', 'LAX', 'ATL', 'MSP', 'SLC', 'DFW', 'NYC', 'PHL',
'PIT', 'IND', 'OLY', 'SYR', 'LEX', 'CHS', 'TLH', 'HOU', 'GJT', 'LBB', 'LSV',
'GRB', 'CLT', 'LNK', 'DSM', 'BOI', 'FSD', 'RAP', 'RIC', 'JAN', 'HSV', 'CRW',
'SAT', 'BUY', '0CO', 'ZPC', 'VIH']
# Loop over all the whitelisted sites, grab the first data, and concatenate them
data_arr = np.concatenate([all_data[all_stids.index(site)].reshape(1,) for site in whitelist])
# First, look at the names of variables that the layout is expecting:
simple_layout.names()
Explanation: This sample data has way too many stations to plot all of them. Instead, we just select
a few from around the U.S. and pull those out of the data file.
End of explanation
# This is our container for the data
data = dict()
# Copy out to stage everything together. In an ideal world, this would happen on
# the data reading side of things, but we're not there yet.
data['longitude'] = data_arr['lon']
data['latitude'] = data_arr['lat']
data['air_temperature'] = data_arr['air_temperature'] * units.degC
data['dew_point_temperature'] = data_arr['dew_point_temperature'] * units.degC
data['air_pressure_at_sea_level'] = data_arr['slp'] * units('mbar')
Explanation: Next grab the simple variables out of the data we have (attaching correct units), and
put them into a dictionary that we will hand the plotting function later:
End of explanation
# Get the wind components, converting from m/s to knots as will be appropriate
# for the station plot
u, v = get_wind_components(data_arr['wind_speed'] * units('m/s'),
data_arr['wind_dir'] * units.degree)
data['eastward_wind'], data['northward_wind'] = u, v
# Convert the fraction value into a code of 0-8, which can be used to pull out
# the appropriate symbol
data['cloud_coverage'] = (8 * data_arr['cloud_fraction']).astype(int)
# Map weather strings to WMO codes, which we can use to convert to symbols
# Only use the first symbol if there are multiple
wx_text = [s.decode('ascii') for s in data_arr['weather']]
wx_codes = {'': 0, 'HZ': 5, 'BR': 10, '-DZ': 51, 'DZ': 53, '+DZ': 55,
'-RA': 61, 'RA': 63, '+RA': 65, '-SN': 71, 'SN': 73, '+SN': 75}
data['present_weather'] = [wx_codes[s.split()[0] if ' ' in s else s] for s in wx_text]
Explanation: Notice that the names (the keys) in the dictionary are the same as those that the
layout is expecting.
Now perform a few conversions:
Get wind components from speed and direction
Convert cloud fraction values to integer codes [0 - 8]
Map METAR weather codes to WMO codes for weather symbols
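For reference, the wind-component conversion follows the usual meteorological convention (direction is the bearing the wind blows from), so a hand-rolled equivalent of what get_wind_components computes would look roughly like this:
import numpy as np

def wind_components(speed, direction_deg):
    # direction is where the wind comes FROM, hence the minus signs on the
    # eastward (u) and northward (v) components
    wdir = np.deg2rad(direction_deg)
    return -speed * np.sin(wdir), -speed * np.cos(wdir)

print(wind_components(10.0, 270.0))  # (~10, ~0): a westerly wind blows toward the east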
End of explanation
proj = ccrs.LambertConformal(central_longitude=-95, central_latitude=35,
standard_parallels=[35])
state_boundaries = feat.NaturalEarthFeature(category='cultural',
name='admin_1_states_provinces_lines',
scale='110m', facecolor='none')
Explanation: All the data wrangling is finished, just need to set up plotting and go:
Set up the map projection and set up a cartopy feature for state borders
End of explanation
# Change the DPI of the resulting figure. Higher DPI drastically improves the
# look of the text rendering
plt.rcParams['savefig.dpi'] = 255
# Create the figure and an axes set to the projection
fig = plt.figure(figsize=(20, 10))
ax = fig.add_subplot(1, 1, 1, projection=proj)
# Add some various map elements to the plot to make it recognizable
ax.add_feature(feat.LAND, zorder=-1)
ax.add_feature(feat.OCEAN, zorder=-1)
ax.add_feature(feat.LAKES, zorder=-1)
ax.coastlines(resolution='110m', zorder=2, color='black')
ax.add_feature(state_boundaries, edgecolor='black')
ax.add_feature(feat.BORDERS, linewidth='2', edgecolor='black')
# Set plot bounds
ax.set_extent((-118, -73, 23, 50))
#
# Here's the actual station plot
#
# Start the station plot by specifying the axes to draw on, as well as the
# lon/lat of the stations (with transform). We also set the fontsize to 12 pt.
stationplot = StationPlot(ax, data['longitude'], data['latitude'],
transform=ccrs.PlateCarree(), fontsize=12)
# The layout knows where everything should go, and things are standardized using
# the names of variables. So the layout pulls arrays out of `data` and plots them
# using `stationplot`.
simple_layout.plot(stationplot, data)
plt.show()
Explanation: The payoff
End of explanation
# Just winds, temps, and dewpoint, with colors. Dewpoint and temp will be plotted
# out to Farenheit tenths. Extra data will be ignored
custom_layout = StationPlotLayout()
custom_layout.add_barb('eastward_wind', 'northward_wind', units='knots')
custom_layout.add_value('NW', 'air_temperature', fmt='.1f', units='degF', color='darkred')
custom_layout.add_value('SW', 'dew_point_temperature', fmt='.1f', units='degF',
color='darkgreen')
# Also, we'll add a field that we don't have in our dataset. This will be ignored
custom_layout.add_value('E', 'precipitation', fmt='0.2f', units='inch', color='blue')
# Create the figure and an axes set to the projection
fig = plt.figure(figsize=(20, 10))
ax = fig.add_subplot(1, 1, 1, projection=proj)
# Add some various map elements to the plot to make it recognizable
ax.add_feature(feat.LAND, zorder=-1)
ax.add_feature(feat.OCEAN, zorder=-1)
ax.add_feature(feat.LAKES, zorder=-1)
ax.coastlines(resolution='110m', zorder=2, color='black')
ax.add_feature(state_boundaries, edgecolor='black')
ax.add_feature(feat.BORDERS, linewidth='2', edgecolor='black')
# Set plot bounds
ax.set_extent((-118, -73, 23, 50))
#
# Here's the actual station plot
#
# Start the station plot by specifying the axes to draw on, as well as the
# lon/lat of the stations (with transform). We also set the fontsize to 12 pt.
stationplot = StationPlot(ax, data['longitude'], data['latitude'],
transform=ccrs.PlateCarree(), fontsize=12)
# The layout knows where everything should go, and things are standardized using
# the names of variables. So the layout pulls arrays out of `data` and plots them
# using `stationplot`.
custom_layout.plot(stationplot, data)
plt.show()
Explanation: or instead, a custom layout can be used:
End of explanation |
336 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Training a better model
Step1: Are we underfitting?
Our validation accuracy so far has generally been higher than our training accuracy. That leads to two obvious questions
Step2: ...and load our fine-tuned weights.
Step3: We're going to be training a number of iterations without dropout, so it would be best for us to pre-calculate the input to the fully connected layers - i.e. the Flatten() layer. We'll start by finding this layer in our model, and creating a new model that contains just the layers up to and including this layer
Step4: Now we can use the exact same approach to creating features as we used when we created the linear model from the imagenet predictions in the last lesson - it's only the model that has changed. As you're seeing, there's a fairly small number of "recipes" that can get us a long way!
Step5: For our new fully connected model, we'll create it using the exact same architecture as the last layers of VGG 16, so that we can conveniently copy pre-trained weights over from that model. However, we'll set the dropout layer's p values to zero, so as to effectively remove dropout.
Step6: And fit the model in the usual way
Step7: Reducing overfitting
Now that we've gotten the model to overfit, we can take a number of steps to reduce this.
Approaches to reducing overfitting
We do not necessarily need to rely on dropout or other regularization approaches to reduce overfitting. There are other techniques we should try first, since regularlization, by definition, biases our model towards simplicity - which we only want to do if we know that's necessary. This is the order that we recommend using for reducing overfitting (more details about each in a moment)
Step8: Let's take a look at how this generator changes a single image (the details of this code don't matter much, but feel free to read the comments and keras docs to understand the details if you're interested).
Step9: As you can see below, there's no magic to data augmentation - it's a very intuitive approach to generating richer input data. Generally speaking, your intuition should be a good guide to appropriate data augmentation, although it's a good idea to test your intuition by checking the results of different augmentation approaches.
Step10: Adding data augmentation
Let's try adding a small amount of data augmentation, and see if we reduce overfitting as a result. The approach will be identical to the method we used to finetune the dense layers in lesson 2, except that we will use a generator with augmentation configured. Here's how we set up the generator, and create batches from it
Step11: When using data augmentation, we can't pre-compute our convolutional layer features, since randomized changes are being made to every input image. That is, even if the training process sees the same image multiple times, each time it will have undergone different data augmentation, so the results of the convolutional layers will be different.
Therefore, in order to allow data to flow through all the conv layers and our new dense layers, we attach our fully connected model to the convolutional model--after ensuring that the convolutional layers are not trainable
Step12: Now we can compile, train, and save our model as usual - note that we use fit_generator() since we want to pull random images from the directories on every batch.
Step13: Batch normalization
About batch normalization
Batch normalization (batchnorm) is a way to ensure that activations don't become too high or too low at any point in the model. Adjusting activations so they are of similar scales is called normalization. Normalization is very helpful for fast training - if some activations are very high, they will saturate the model and create very large gradients, causing training to fail; if very low, they will cause training to proceed very slowly. Furthermore, large or small activations in one layer will tend to result in even larger or smaller activations in later layers, since the activations get multiplied repeatedly across the layers.
Prior to the development of batchnorm in 2015, only the inputs to a model could be effectively normalized - by simply subtracting their mean and dividing by their standard deviation. However, weights in intermediate layers could easily become poorly scaled, due to problems in weight initialization, or a high learning rate combined with random fluctuations in weights.
Batchnorm resolves this problem by normalizing each intermediate layer as well. The details of how it works are not terribly important (although I will outline them in a moment) - the important takeaway is that all modern networks should use batchnorm, or something equivalent. There are two reasons for this | Python Code:
#from theano.sandbox import cuda
%matplotlib inline
import utils
import importlib
importlib.reload(utils)
from utils import *
from __future__ import division, print_function
#path = "data/dogscats/sample/"
path = "data/dogscats/"
model_path = path + 'models/'
if not os.path.exists(model_path): os.mkdir(model_path)
batch_size=64
Explanation: Training a better model
End of explanation
model = vgg_ft(2)
Explanation: Are we underfitting?
Our validation accuracy so far has generally been higher than our training accuracy. That leads to two obvious questions:
How is this possible?
Is this desirable?
The answer to (1) is that this is happening because of dropout. Dropout refers to a layer that randomly deletes (i.e. sets to zero) each activation in the previous layer with probability p (generally 0.5). This only happens during training, not when calculating the accuracy on the validation set, which is why the validation set can show higher accuracy than the training set.
The purpose of dropout is to avoid overfitting. By deleting parts of the neural network at random during training, it ensures that no one part of the network can overfit to one part of the training set. The creation of dropout was one of the key developments in deep learning, and has allowed us to create rich models without overfitting. However, it can also result in underfitting if overused, and this is something we should be careful of with our model.
So the answer to (2) is: this is probably not desirable. It is likely that we can get better validation set results with less (or no) dropout, if we're seeing that validation accuracy is higher than training accuracy - a strong sign of underfitting. So let's try removing dropout entirely, and see what happens!
(We had dropout in this model already because the VGG authors found it necessary for the imagenet competition. But that doesn't mean it's necessary for dogs v cats, so we will do our own analysis of regularization approaches from scratch.)
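As a rough illustration of the mechanism (a simplified numpy sketch, not Keras internals), dropout at training time simply zeroes activations at random, which is also why questions about rescaling weights come up when dropout layers are removed, as the proc_wgts helper below does:
import numpy as np

def dropout_train(activations, p, rng):
    # each activation is kept with probability (1 - p) and zeroed otherwise
    mask = rng.binomial(1, 1.0 - p, size=activations.shape)
    return activations * mask

rng = np.random.RandomState(0)
acts = np.ones(10000)
print(dropout_train(acts, p=0.5, rng=rng).mean())  # ~0.5: half the signal survives
# at test time dropout is a no-op, so downstream layers see roughly twice the
# training-time signal unless the weights feeding them are scaled accordingly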
Removing dropout
Our high level approach here will be to start with our fine-tuned cats vs dogs model (with dropout), then fine-tune all the dense layers, after removing dropout from them. The steps we will take are:
- Re-create and load our modified VGG model with binary dependent (i.e. dogs v cats)
- Split the model between the convolutional (conv) layers and the dense layers
- Pre-calculate the output of the conv layers, so that we don't have to redundently re-calculate them on every epoch
- Create a new model with just the dense layers, and dropout p set to zero
- Train this new model using the output of the conv layers as training data.
As before we need to start with a working model, so let's bring in our working VGG 16 model and change it to predict our binary dependent...
End of explanation
model.load_weights(model_path+'finetune3.h5')
Explanation: ...and load our fine-tuned weights.
End of explanation
layers = model.layers
last_conv_idx = [index for index,layer in enumerate(layers)
if type(layer) is Convolution2D][-1]
last_conv_idx
layers[last_conv_idx]
conv_layers = layers[:last_conv_idx+1]
conv_model = Sequential(conv_layers)
# Dense layers - also known as fully connected or 'FC' layers
fc_layers = layers[last_conv_idx+1:]
Explanation: We're going to be training a number of iterations without dropout, so it would be best for us to pre-calculate the input to the fully connected layers - i.e. the Flatten() layer. We'll start by finding this layer in our model, and creating a new model that contains just the layers up to and including this layer:
End of explanation
batches = get_batches(path+'train', shuffle=False, batch_size=batch_size)
val_batches = get_batches(path+'valid', shuffle=False, batch_size=batch_size)
val_classes = val_batches.classes
trn_classes = batches.classes
val_labels = onehot(val_classes)
trn_labels = onehot(trn_classes)
val_features = conv_model.predict_generator(val_batches, val_batches.nb_sample)
trn_features = conv_model.predict_generator(batches, batches.nb_sample)
save_array(model_path + 'train_convlayer_features.bc', trn_features)
save_array(model_path + 'valid_convlayer_features.bc', val_features)
trn_features = load_array(model_path+'train_convlayer_features.bc')
val_features = load_array(model_path+'valid_convlayer_features.bc')
trn_features.shape
Explanation: Now we can use the exact same approach to creating features as we used when we created the linear model from the imagenet predictions in the last lesson - it's only the model that has changed. As you're seeing, there's a fairly small number of "recipes" that can get us a long way!
End of explanation
# Copy the weights from the pre-trained model.
# NB: Since we're removing dropout, we want to half the weights
def proc_wgts(layer): return [o/2 for o in layer.get_weights()]
# Such a finely tuned model needs to be updated very slowly!
opt = RMSprop(lr=0.00001, rho=0.7)
def get_fc_model():
model = Sequential([
MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]),
Flatten(),
Dense(4096, activation='relu'),
Dropout(0.),
Dense(4096, activation='relu'),
Dropout(0.),
Dense(2, activation='softmax')
])
for l1,l2 in zip(model.layers, fc_layers): l1.set_weights(proc_wgts(l2))
model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])
return model
fc_model = get_fc_model()
Explanation: For our new fully connected model, we'll create it using the exact same architecture as the last layers of VGG 16, so that we can conveniently copy pre-trained weights over from that model. However, we'll set the dropout layer's p values to zero, so as to effectively remove dropout.
End of explanation
fc_model.fit(trn_features, trn_labels, nb_epoch=8,
batch_size=batch_size, validation_data=(val_features, val_labels))
fc_model.save_weights(model_path+'no_dropout.h5')
fc_model.load_weights(model_path+'no_dropout.h5')
Explanation: And fit the model in the usual way:
End of explanation
# dim_ordering='tf' uses tensorflow dimension ordering,
# which is the same order as matplotlib uses for display.
# Therefore when just using for display purposes, this is more convenient
gen = image.ImageDataGenerator(rotation_range=10, width_shift_range=0.1,
       height_shift_range=0.1, shear_range=0.15, zoom_range=0.1,
channel_shift_range=10., horizontal_flip=True, dim_ordering='tf')
Explanation: Reducing overfitting
Now that we've gotten the model to overfit, we can take a number of steps to reduce this.
Approaches to reducing overfitting
We do not necessarily need to rely on dropout or other regularization approaches to reduce overfitting. There are other techniques we should try first, since regularization, by definition, biases our model towards simplicity - which we only want to do if we know that's necessary. This is the order that we recommend using for reducing overfitting (more details about each in a moment):
Add more data
Use data augmentation
Use architectures that generalize well
Add regularization
Reduce architecture complexity.
We'll assume that you've already collected as much data as you can, so step (1) isn't relevant (this is true for most Kaggle competitions, for instance). So the next step (2) is data augmentation. This refers to creating additional synthetic data, based on reasonable modifications of your input data. For images, this is likely to involve one or more of: flipping, rotation, zooming, cropping, panning, minor color changes.
Which types of augmentation are appropriate depends on your data. For regular photos, for instance, you'll want to use horizontal flipping, but not vertical flipping (since an upside down car is much less common than a car the right way up, for instance!)
We recommend always using at least some light data augmentation, unless you have so much data that your model will never see the same input twice.
About data augmentation
Keras comes with very convenient features for automating data augmentation. You simply define what types and maximum amounts of augmentation you want, and keras ensures that every item of every batch randomly is changed according to these settings. Here's how to define a generator that includes data augmentation:
End of explanation
# Create a 'batch' of a single image
img = np.expand_dims(ndimage.imread('cat.jpg'),0)
# Request the generator to create batches from this image
aug_iter = gen.flow(img)
# Get eight examples of these augmented images
aug_imgs = [next(aug_iter)[0].astype(np.uint8) for i in range(8)]
# The original
plt.imshow(img[0])
Explanation: Let's take a look at how this generator changes a single image (the details of this code don't matter much, but feel free to read the comments and keras docs to understand the details if you're interested).
End of explanation
# Augmented data
plots(aug_imgs, (20,7), 2)
# Ensure that we return to theano dimension ordering
K.set_image_dim_ordering('th')
Explanation: As you can see below, there's no magic to data augmentation - it's a very intuitive approach to generating richer input data. Generally speaking, your intuition should be a good guide to appropriate data augmentation, although it's a good idea to test your intuition by checking the results of different augmentation approaches.
End of explanation
gen = image.ImageDataGenerator(rotation_range=15, width_shift_range=0.1,
height_shift_range=0.1, zoom_range=0.1, horizontal_flip=True)
batches = get_batches(path+'train', gen, batch_size=batch_size)
# NB: We don't want to augment or shuffle the validation set
val_batches = get_batches(path+'valid', shuffle=False, batch_size=batch_size)
Explanation: Adding data augmentation
Let's try adding a small amount of data augmentation, and see if we reduce overfitting as a result. The approach will be identical to the method we used to finetune the dense layers in lesson 2, except that we will use a generator with augmentation configured. Here's how we set up the generator, and create batches from it:
End of explanation
fc_model = get_fc_model()
for layer in conv_model.layers: layer.trainable = False
# Look how easy it is to connect two models together!
conv_model.add(fc_model)
Explanation: When using data augmentation, we can't pre-compute our convolutional layer features, since randomized changes are being made to every input image. That is, even if the training process sees the same image multiple times, each time it will have undergone different data augmentation, so the results of the convolutional layers will be different.
Therefore, in order to allow data to flow through all the conv layers and our new dense layers, we attach our fully connected model to the convolutional model--after ensuring that the convolutional layers are not trainable:
End of explanation
conv_model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])
conv_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=8,
validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
conv_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=3,
validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
conv_model.save_weights(model_path + 'aug1.h5')
conv_model.load_weights(model_path + 'aug1.h5')
Explanation: Now we can compile, train, and save our model as usual - note that we use fit_generator() since we want to pull random images from the directories on every batch.
End of explanation
conv_layers[-1].output_shape[1:]
def get_bn_layers(p):
return [
MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]),
Flatten(),
Dense(4096, activation='relu'),
Dropout(p),
BatchNormalization(),
Dense(4096, activation='relu'),
Dropout(p),
BatchNormalization(),
Dense(1000, activation='softmax')
]
p=0.6
bn_model = Sequential(get_bn_layers(0.6))
bn_model.load_weights('/data/jhoward/ILSVRC2012_img/bn_do3_1.h5')
def proc_wgts(layer, prev_p, new_p):
scal = (1-prev_p)/(1-new_p)
return [o*scal for o in layer.get_weights()]
for l in bn_model.layers:
if type(l)==Dense: l.set_weights(proc_wgts(l, 0.3, 0.6))
bn_model.pop()
for layer in bn_model.layers: layer.trainable=False
bn_model.add(Dense(2,activation='softmax'))
bn_model.compile(Adam(), 'categorical_crossentropy', metrics=['accuracy'])
bn_model.fit(trn_features, trn_labels, nb_epoch=8, validation_data=(val_features, val_labels))
bn_model.save_weights(model_path+'bn.h5')
bn_model.load_weights(model_path+'bn.h5')
bn_layers = get_bn_layers(0.6)
bn_layers.pop()
bn_layers.append(Dense(2,activation='softmax'))
final_model = Sequential(conv_layers)
for layer in final_model.layers: layer.trainable = False
for layer in bn_layers: final_model.add(layer)
for l1,l2 in zip(bn_model.layers, bn_layers):
l2.set_weights(l1.get_weights())
final_model.compile(optimizer=Adam(),
loss='categorical_crossentropy', metrics=['accuracy'])
final_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=1,
validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
final_model.save_weights(model_path + 'final1.h5')
final_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=4,
validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
final_model.save_weights(model_path + 'final2.h5')
final_model.optimizer.lr=0.001
final_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=4,
validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
final_model.save_weights(model_path + 'final3.h5')
Explanation: Batch normalization
About batch normalization
Batch normalization (batchnorm) is a way to ensure that activations don't become too high or too low at any point in the model. Adjusting activations so they are of similar scales is called normalization. Normalization is very helpful for fast training - if some activations are very high, they will saturate the model and create very large gradients, causing training to fail; if very low, they will cause training to proceed very slowly. Furthermore, large or small activations in one layer will tend to result in even larger or smaller activations in later layers, since the activations get multiplied repeatedly across the layers.
Prior to the development of batchnorm in 2015, only the inputs to a model could be effectively normalized - by simply subtracting their mean and dividing by their standard deviation. However, weights in intermediate layers could easily become poorly scaled, due to problems in weight initialization, or a high learning rate combined with random fluctuations in weights.
Batchnorm resolves this problem by normalizing each intermediate layer as well. The details of how it works are not terribly important (although I will outline them in a moment) - the important takeaway is that all modern networks should use batchnorm, or something equivalent. There are two reasons for this:
1. Adding batchnorm to a model can result in 10x or more improvements in training speed
2. Because normalization greatly reduces the ability of a small number of outlying inputs to over-influence the training, it also tends to reduce overfitting.
As promised, here's a brief outline of how batchnorm works. As a first step, it normalizes intermediate layers in the same way as input layers can be normalized. But this on its own would not be enough, since the model would then just push the weights up or down indefinitely to try to undo this normalization. Therefore, batchnorm takes two additional steps:
1. Add two more trainable parameters to each layer - one to multiply all activations to set an arbitrary standard deviation, and one to add to all activations to set an arbitary mean
2. Incorporate both the normalization, and the learnt multiply/add parameters, into the gradient calculations during backprop.
This ensures that the weights don't tend to push very high or very low (since the normalization is included in the gradient calculations, so the updates are aware of the normalization). But it also ensures that if a layer does need to change the overall mean or standard deviation in order to match the output scale, it can do so.
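A compact numpy sketch of that forward pass (illustrative only; a real batchnorm layer also tracks running statistics for use at test time):
import numpy as np

def batchnorm_forward(x, gamma, beta, eps=1e-5):
    # 1. normalize each feature over the batch
    x_hat = (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + eps)
    # 2. let the layer learn its own scale and shift via gamma and beta
    return gamma * x_hat + beta

x = np.random.randn(64, 8) * 50 + 3   # badly scaled activations
out = batchnorm_forward(x, gamma=np.ones(8), beta=np.zeros(8))
print(out.mean(axis=0).round(3), out.std(axis=0).round(3))  # ~0 and ~1 per feature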
Adding batchnorm to the model
We can use nearly the same approach as before - but this time we'll add batchnorm layers (and dropout layers):
End of explanation |
337 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Attention Basics
In this notebook, we look at how attention is implemented. We will focus on implementing attention in isolation from a larger model. That's because when implementing attention in a real-world model, a lot of the focus goes into piping the data and juggling the various vectors rather than the concepts of attention themselves.
We will implement attention scoring as well as calculating an attention context vector.
Attention Scoring
Inputs to the scoring function
Let's start by looking at the inputs we'll give to the scoring function. We will assume we're in the first step in the decoding phase. The first input to the scoring function is the hidden state of decoder (assuming a toy RNN with three hidden nodes -- not usable in real life, but easier to illustrate)
Step1: Let's visualize this vector
Step2: Our first scoring function will score a single annotation (encoder hidden state), which looks like this
Step3: IMPLEMENT
Step4: Annotations Matrix
Let's now look at scoring all the annotations at once. To do that, here's our annotation matrix
Step5: And it can be visualized like this (each column is a hidden state of an encoder time step)
Step6: IMPLEMENT
Step7: Looking at these scores, can you guess which of the four vectors will get the most attention from the decoder at this time step?
Softmax
Now that we have our scores, let's apply softmax
Step8: Even when knowing which annotation will get the most focus, it's interesting to see how drastic softmax makes the end score become. The first and last annotation had the respective scores of 927 and 929. But after softmax, the attention they'll get is 0.12 and 0.88 respectively.
Applying the scores back on the annotations
Now that we have our scores, let's multiply each annotation by its score to proceed closer to the attention context vector. This is the multiplication part of this formula (we'll tackle the summation part in the latter cells)
<img src="images/Context_vector.png" />
Step9: Let's visualize how the context vector looks now that we've applied the attention scores back on it
Step10: Contrast this with the raw annotations visualized earlier in the notebook, and we can see that the second and third annotations (columns) have been nearly wiped out. The first annotation maintains some of its value, and the fourth annotation is the most pronounced.
Calculating the Attention Context Vector
All that remains to produce our attention context vector now is to sum up the four columns to produce a single attention context vector | Python Code:
dec_hidden_state = [5,1,20]
Explanation: Attention Basics
In this notebook, we look at how attention is implemented. We will focus on implementing attention in isolation from a larger model. That's because when implementing attention in a real-world model, a lot of the focus goes into piping the data and juggling the various vectors rather than the concepts of attention themselves.
We will implement attention scoring as well as calculating an attention context vector.
Attention Scoring
Inputs to the scoring function
Let's start by looking at the inputs we'll give to the scoring function. We will assume we're in the first step in the decoding phase. The first input to the scoring function is the hidden state of decoder (assuming a toy RNN with three hidden nodes -- not usable in real life, but easier to illustrate):
End of explanation
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# Let's visualize our decoder hidden state
plt.figure(figsize=(1.5, 4.5))
sns.heatmap(np.transpose(np.matrix(dec_hidden_state)), annot=True, cmap=sns.light_palette("purple", as_cmap=True), linewidths=1)
Explanation: Let's visualize this vector:
End of explanation
annotation = [3,12,45] #e.g. Encoder hidden state
# Let's visualize the single annotation
plt.figure(figsize=(1.5, 4.5))
sns.heatmap(np.transpose(np.matrix(annotation)), annot=True, cmap=sns.light_palette("orange", as_cmap=True), linewidths=1)
Explanation: Our first scoring function will score a single annotation (encoder hidden state), which looks like this:
End of explanation
def single_dot_attention_score(dec_hidden_state, enc_hidden_state):
    # return the dot product of the two vectors
    return np.dot(dec_hidden_state, enc_hidden_state)
single_dot_attention_score(dec_hidden_state, annotation)
Explanation: IMPLEMENT: Scoring a Single Annotation
Let's calculate the dot product of a single annotation. NumPy's dot() is a good candidate for this operation
End of explanation
annotations = np.transpose([[3,12,45], [59,2,5], [1,43,5], [4,3,45.3]])
Explanation: Annotations Matrix
Let's now look at scoring all the annotations at once. To do that, here's our annotation matrix:
End of explanation
# Let's visualize our annotation (each column is an annotation)
ax = sns.heatmap(annotations, annot=True, cmap=sns.light_palette("orange", as_cmap=True), linewidths=1)
Explanation: And it can be visualized like this (each column is a hidden state of an encoder time step):
End of explanation
def dot_attention_score(dec_hidden_state, annotations):
    # return the product of dec_hidden_state transpose and the annotations matrix
    return np.matmul(np.transpose(dec_hidden_state), annotations)
attention_weights_raw = dot_attention_score(dec_hidden_state, annotations)
attention_weights_raw
Explanation: IMPLEMENT: Scoring All Annotations at Once
Let's calculate the scores of all the annotations in one step using matrix multiplication. Let's continue to us the dot scoring method
<img src="images/scoring_functions.png" />
To do that, we'll have to transpose dec_hidden_state and matrix multiply it with annotations.
End of explanation
def softmax(x):
x = np.array(x, dtype=np.float128)
e_x = np.exp(x)
return e_x / e_x.sum(axis=0)
attention_weights = softmax(attention_weights_raw)
attention_weights
Explanation: Looking at these scores, can you guess which of the four vectors will get the most attention from the decoder at this time step?
Softmax
Now that we have our scores, let's apply softmax:
<img src="images/softmax.png" />
End of explanation
def apply_attention_scores(attention_weights, annotations):
    # multiply the annotations by their weights
    return attention_weights * annotations
applied_attention = apply_attention_scores(attention_weights, annotations)
applied_attention
Explanation: Even when knowing which annotation will get the most focus, it's interesting to see how drastic softmax makes the end score become. The first and last annotation had the respective scores of 927 and 929. But after softmax, the attention they'll get is 0.12 and 0.88 respectively.
Applying the scores back on the annotations
Now that we have our scores, let's multiply each annotation by its score to proceed closer to the attention context vector. This is the multiplication part of this formula (we'll tackle the summation part in the latter cells)
<img src="images/Context_vector.png" />
End of explanation
# Let's visualize our annotations after applying attention to them
ax = sns.heatmap(applied_attention, annot=True, cmap=sns.light_palette("orange", as_cmap=True), linewidths=1)
Explanation: Let's visualize how the context vector looks now that we've applied the attention scores back on it:
End of explanation
def calculate_attention_vector(applied_attention):
return np.sum(applied_attention, axis=1)
attention_vector = calculate_attention_vector(applied_attention)
attention_vector
# Let's visualize the attention context vector
plt.figure(figsize=(1.5, 4.5))
sns.heatmap(np.transpose(np.matrix(attention_vector)), annot=True, cmap=sns.light_palette("Blue", as_cmap=True), linewidths=1)
Explanation: Contrast this with the raw annotations visualized earlier in the notebook, and we can see that the second and third annotations (columns) have been nearly wiped out. The first annotation maintains some of its value, and the fourth annotation is the most pronounced.
Calculating the Attention Context Vector
All that remains to produce our attention context vector now is to sum up the four columns to produce a single attention context vector
End of explanation |
338 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Probability Authors.
Licensed under the Apache License, Version 2.0 (the "License");
Step1: A Tour of Oryx
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Layer 0
Step3: Harvest
oryx.core.harvest enables tagging values in functions along with the ability to collect them, or "reap" them, and the ability to inject values in their place, or "planting" them. We tag values using the sow function.
Step4: Layer 1
Step5: Modules are registered as JAX pytrees and can be used as inputs to JAX transformed functions. Oryx provides a convenient call function that executes a Module.
Step6: The state API also enables writing stateful updates (like running averages) using the assign function. The resulting Module has an update function with an input signature that is the same as the Module's __call__ but creates a new copy of the Module with an updated state.
Step7: Probabilistic programming
In oryx.core.ppl, Oryx provides a set of tools built on top of harvest and inverse which aim to make writing and transforming probabilistic programs intuitive and easy.
In Oryx, a probabilistic program is a JAX function that takes a source of randomness as its first argument and returns a sample from a distribution, i.e, f
Step8: What can we do with probabilistic programs? The simplest thing would be to take a probabilistic program (i.e. a sampling function) and convert it into one that provides the log-density of a sample.
Step9: The new log-probability function is compatible with other JAX transformations like vmap and grad.
Step10: Using the ildj transformation, we can compute log_prob of programs that invertibly transform samples.
Step11: We can tag intermediate values in a probabilistic program with names and obtain joint sampling and joint log-prob functions.
Step12: Oryx also has a joint_log_prob function that composes log_prob with joint_sample.
Step13: To learn more, see the documentation.
Layer 2
Step14: A Layer has a call method that runs its forward pass.
Step15: Oryx also provides a Serial combinator.
Step16: We can interleave functions and combinators to create a flexible neural network "meta language".
Step17: Optimizers
In oryx.experimental.optimizers, Oryx provides a set of first-order optimizers, built using the state API. Their design is based off of JAX's optix library, where optimizers maintain state about a set of gradient updates. Oryx's version manages state using the state API.
Step18: Markov chain Monte Carlo
In oryx.experimental.mcmc, Oryx provides a set of Markov Chain Monte Carlo (MCMC) kernels. MCMC is an approach to approximate Bayesian inference where we draw samples from a Markov chain whose stationary distribution is the posterior distribution of interest.
Oryx's MCMC library builds on both the state and ppl API.
Step19: Random walk Metropolis
Step20: Hamiltonian Monte Carlo | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Probability Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
!pip install oryx 1>/dev/null
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(style='whitegrid')
import jax
import jax.numpy as jnp
from jax import random
from jax import vmap
from jax import jit
from jax import grad
import oryx
tfd = oryx.distributions
state = oryx.core.state
ppl = oryx.core.ppl
inverse = oryx.core.inverse
ildj = oryx.core.ildj
plant = oryx.core.plant
reap = oryx.core.reap
sow = oryx.core.sow
nn = oryx.experimental.nn
mcmc = oryx.experimental.mcmc
optimizers = oryx.experimental.optimizers
Explanation: A Tour of Oryx
What is Oryx?
Oryx is an experimental library that extends JAX to applications ranging from building and training complex neural networks to approximate Bayesian inference in deep generative models. Like JAX provides jit, vmap, and grad, Oryx provides a set of composable function transformations that enable writing simple code and transforming it to build complexity while staying completely interoperable with JAX.
JAX can only safely transform pure, functional code (i.e. code without side-effects). While pure code can be easier to write and reason about, "impure" code can often be more concise and more easily expressive.
At its core, Oryx is a library that enables "augmenting" pure functional code to accomplish tasks like defining state or pulling out intermediate values. Its goal is to be as thin of a layer on top of JAX as possible, leveraging JAX's minimalist approach to numerical computing. Oryx is conceptually divided into several "layers", each building on the one below it.
The source code for Oryx can be found on GitHub.
Setup
End of explanation
def f(x):
return jnp.exp(x) + 2.
print(inverse(f)(4.)) # ln(2)
print(ildj(f)(4.)) # -ln(2)
Explanation: Layer 0: Base function transformations
At its base, Oryx defines several new function transformations. These transformations are implemented using JAX's tracing machinery and are interoperable with existing JAX transformations like jit, grad, vmap, etc.
Automatic function inversion
oryx.core.inverse and oryx.core.ildj are function transformations that can programmatically invert a function and compute its inverse log-det Jacobian (ILDJ) respectively. These transformations are useful in probabilistic modeling for computing log-probabilities using the change-of-variable formula. There are limitations on the types of functions they are compatible with, however (see the documentation for more details).
End of explanation
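As a supplementary sketch (not part of the original notebook, and reusing the aliases from the setup cell), the change-of-variable formula can be spelled out by hand for the same f defined above: the log-density of y = f(x) is the base log-density at the inverse image plus the ILDJ.
# Supplementary sketch: if y = f(x) with x ~ Normal(0, 1), then
# log p_Y(y) = log p_X(f^{-1}(y)) + ildj(f)(y).
y = 4.
manual_log_prob = tfd.Normal(0., 1.).log_prob(inverse(f)(y)) + ildj(f)(y)
print(manual_log_prob)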
def f(x):
y = sow(x + 1., name='y', tag='intermediate')
return y ** 2
print('Reap:', reap(f, tag='intermediate')(1.)) # Pulls out 'y'
print('Plant:', plant(f, tag='intermediate')(dict(y=5.), 1.)) # Injects 5. for 'y'
Explanation: Harvest
oryx.core.harvest enables tagging values in functions along with the ability to collect them, or "reap" them, and the ability to inject values in their place, or "planting" them. We tag values using the sow function.
End of explanation
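A further sketch (an illustration added here, not from the original notebook): several intermediates can be sown under the same tag, and the reaped function composes with other transformations such as jit.
def tagged(x):
    a = sow(x + 1., name='a', tag='intermediate')
    b = sow(a * 2., name='b', tag='intermediate')
    return a + b
# Collect both tagged values in one pass; jit works because harvest is
# built on JAX's tracing machinery.
print(jit(reap(tagged, tag='intermediate'))(1.))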
def make_dense(dim_out):
def forward(x, init_key=None):
w_key, b_key = random.split(init_key)
dim_in = x.shape[0]
w = state.variable(random.normal(w_key, (dim_in, dim_out)), name='w')
b = state.variable(random.normal(w_key, (dim_out,)), name='b')
return jnp.dot(x, w) + b
return forward
layer = state.init(make_dense(5))(random.PRNGKey(0), jnp.zeros(2))
print('layer:', layer)
print('layer.w:', layer.w)
print('layer.b:', layer.b)
Explanation: Layer 1: Higher level transformations
Oryx builds off the low-level inverse, harvest, and unzip function transformations to offer several higher-level transformations for writing stateful computations and for probabilistic programming.
Stateful functions (core.state)
We're often interested in expressing stateful computations where we initialize a set of parameters and express a computation in terms of the parameters. In oryx.core.state, Oryx provides an init transformation that converts a function into one that initializes a Module, a container for state.
Modules resemble Pytorch and TensorFlow Modules except that they are immutable.
End of explanation
vmap(state.call, in_axes=(None, 0))(layer, jnp.ones((5, 2)))
Explanation: Modules are registered as JAX pytrees and can be used as inputs to JAX transformed functions. Oryx provides a convenient call function that executes a Module.
End of explanation
def counter(x, init_key=None):
count = state.variable(0., key=init_key, name='count')
count = state.assign(count + 1., name='count')
return x + count
layer = state.init(counter)(random.PRNGKey(0), 0.)
print(layer.count)
updated_layer = layer.update(0.)
print(updated_layer.count) # Count has advanced!
print(updated_layer.call(1.))
Explanation: The state API also enables writing stateful updates (like running averages) using the assign function. The resulting Module has an update function with an input signature that is the same as the Module's __call__ but creates a new copy of the Module with an updated state.
End of explanation
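As an extra illustration (a sketch added here, assuming update threads state exactly as in the counter example above), the same variable/assign pattern can keep a running total of the inputs it has seen.
def running_total(x, init_key=None):
    total = state.variable(0., key=init_key, name='total')
    total = state.assign(total + x, name='total')
    return total
acc = state.init(running_total)(random.PRNGKey(0), 0.)
acc = acc.update(2.)
acc = acc.update(3.)
print(acc.total)  # Accumulated state after two updates.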
def sample(key):
return ppl.random_variable(tfd.Normal(0., 1.))(key)
sample(random.PRNGKey(0))
Explanation: Probabilistic programming
In oryx.core.ppl, Oryx provides a set of tools built on top of harvest and inverse which aim to make writing and transforming probabilistic programs intuitive and easy.
In Oryx, a probabilistic program is a JAX function that takes a source of randomness as its first argument and returns a sample from a distribution, i.e, f :: Key -> Sample. In order to write these programs, Oryx wraps TensorFlow Probability distributions and provides a simple function random_variable that converts a distribution into a probabilistic program.
End of explanation
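Because a program is just a function of a PRNG key, a batch of independent draws can be taken by splitting the key and vmapping the program (a small added sketch reusing the setup aliases).
# Draw 5 independent samples by mapping the program over split keys.
keys = random.split(random.PRNGKey(0), 5)
print(vmap(sample)(keys))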
ppl.log_prob(sample)(1.)
Explanation: What can we do with probabilistic programs? The simplest thing would be to take a probabilistic program (i.e. a sampling function) and convert it into one that provides the log-density of a sample.
End of explanation
grad(lambda s: vmap(ppl.log_prob(sample))(s).sum())(jnp.arange(10.))
Explanation: The new log-probability function is compatible with other JAX transformations like vmap and grad.
End of explanation
def sample(key):
x = ppl.random_variable(tfd.Normal(0., 1.))(key)
return jnp.exp(x / 2.) + 2.
_, ax = plt.subplots(2)
ax[0].hist(jit(vmap(sample))(random.split(random.PRNGKey(0), 1000)),
bins='auto')
x = jnp.linspace(0, 8, 100)
ax[1].plot(x, jnp.exp(jit(vmap(ppl.log_prob(sample)))(x)))
plt.show()
Explanation: Using the ildj transformation, we can compute log_prob of programs that invertibly transform samples.
End of explanation
def sample(key):
z_key, x_key = random.split(key)
z = ppl.random_variable(tfd.Normal(0., 1.), name='z')(z_key)
x = ppl.random_variable(tfd.Normal(z, 1.), name='x')(x_key)
return x
ppl.joint_sample(sample)(random.PRNGKey(0))
Explanation: We can tag intermediate values in a probabilistic program with names and obtain joint sampling and joint log-prob functions.
End of explanation
ppl.joint_log_prob(sample)(dict(x=0., z=0.))
Explanation: Oryx also has a joint_log_prob function that composes log_prob with joint_sample.
End of explanation
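As a quick sanity check added here (not from the original notebook), the joint log-probability should equal log p(z) + log p(x | z), which can be computed directly from the TFP distributions.
# Both factors are standard-normal densities at 0 when x = z = 0.
manual = tfd.Normal(0., 1.).log_prob(0.) + tfd.Normal(0., 1.).log_prob(0.)
print(manual, ppl.joint_log_prob(sample)(dict(x=0., z=0.)))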
layer = state.init(nn.Dense(200))(random.PRNGKey(0), jnp.zeros(50))
print(layer, layer.params.kernel.shape, layer.params.bias.shape)
Explanation: To learn more, see the documentation.
Layer 2: Mini-libraries
Building further on top of the layers that handle state and probabilistic programming, Oryx provides experimental mini-libraries tailored for specific applications like deep learning and Bayesian inference.
Neural networks
In oryx.experimental.nn, Oryx provides a set of common neural network Layers that fit neatly into the state API. These layers are built for single examples (not batches) but override batch behaviors to handle patterns like running averages in batch normalization. They also enable passing keyword arguments like training=True/False into modules.
Layers are initialized from a Template like nn.Dense(200) using state.init.
End of explanation
layer.call(jnp.ones(50)).shape
Explanation: A Layer has a call method that runs its forward pass.
End of explanation
mlp_template = nn.Serial([
nn.Dense(200), nn.Relu(),
nn.Dense(200), nn.Relu(),
nn.Dense(10), nn.Softmax()
])
# OR
mlp_template = (
nn.Dense(200) >> nn.Relu()
>> nn.Dense(200) >> nn.Relu()
>> nn.Dense(10) >> nn.Softmax())
mlp = state.init(mlp_template)(random.PRNGKey(0), jnp.ones(784))
mlp(jnp.ones(784))
Explanation: Oryx also provides a Serial combinator.
End of explanation
def resnet(template):
def forward(x, init_key=None):
layer = state.init(template, name='layer')(init_key, x)
return x + layer(x)
return forward
big_resnet_template = nn.Serial([
nn.Dense(50)
>> resnet(nn.Dense(50) >> nn.Relu())
>> resnet(nn.Dense(50) >> nn.Relu())
>> nn.Dense(10)
])
network = state.init(big_resnet_template)(random.PRNGKey(0), jnp.ones(784))
network(jnp.ones(784))
Explanation: We can interleave functions and combinators to create a flexible neural network "meta language".
End of explanation
network_key, opt_key = random.split(random.PRNGKey(0))
def autoencoder_loss(network, x):
return jnp.square(network.call(x) - x).mean()
network = state.init(nn.Dense(200) >> nn.Relu() >> nn.Dense(2))(network_key, jnp.zeros(2))
opt = state.init(optimizers.adam(1e-4))(opt_key, network, network)
g = grad(autoencoder_loss)(network, jnp.zeros(2))
g, opt = opt.call_and_update(network, g)
network = optimizers.optix.apply_updates(network, g)
Explanation: Optimizers
In oryx.experimental.optimizers, Oryx provides a set of first-order optimizers, built using the state API. Their design is based off of JAX's optix library, where optimizers maintain state about a set of gradient updates. Oryx's version manages state using the state API.
End of explanation
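To show how the pieces fit together, here is a hedged sketch of a small training loop that simply repeats the gradient/update/apply pattern from the cell above (the step count and input are arbitrary choices made for illustration).
x = jnp.ones(2)
for step in range(5):
    g = grad(autoencoder_loss)(network, x)
    g, opt = opt.call_and_update(network, g)
    network = optimizers.optix.apply_updates(network, g)
    print(step, autoencoder_loss(network, x))  # loss should tend to decrease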
def model(key):
return jnp.exp(ppl.random_variable(tfd.MultivariateNormalDiag(
jnp.zeros(2), jnp.ones(2)))(key))
Explanation: Markov chain Monte Carlo
In oryx.experimental.mcmc, Oryx provides a set of Markov Chain Monte Carlo (MCMC) kernels. MCMC is an approach to approximate Bayesian inference where we draw samples from a Markov chain whose stationary distribution is the posterior distribution of interest.
Oryx's MCMC library builds on both the state and ppl API.
End of explanation
samples = jit(mcmc.sample_chain(mcmc.metropolis(
ppl.log_prob(model),
mcmc.random_walk()), 1000))(random.PRNGKey(0), jnp.ones(2))
plt.scatter(samples[:, 0], samples[:, 1], alpha=0.5)
plt.show()
Explanation: Random walk Metropolis
End of explanation
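A small follow-up sketch (added here; the burn-in length is an arbitrary assumption): discard early draws and summarize the rest with plain reductions.
burn_in = 200  # assumed burn-in length
kept = samples[burn_in:]
print('posterior mean estimate:', kept.mean(axis=0))
print('posterior scale estimate:', kept.std(axis=0))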
samples = jit(mcmc.sample_chain(mcmc.hmc(
ppl.log_prob(model)), 1000))(random.PRNGKey(0), jnp.ones(2))
plt.scatter(samples[:, 0], samples[:, 1], alpha=0.5)
plt.show()
Explanation: Hamiltonian Monte Carlo
End of explanation |
339 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Predicting Seizure — Kaggle competition 2016
Introduction
Work in progress
An interesting article to start working with. It doesn't have many details on implementation, but it gives some ideas of what to do.
First steps
Load the data scientist weapons
Step2: Load a balanced set of Interictal (Non-Seizure, 0) and Preictal (Pre-Seizure, 1) examples.
Step4: Loading data from .mat files
Load data for each file and get the definitive training data.
Then create the y based on the classes of the files.
Note that because each file contains 16 channels, it is necessary to
repeat each class in data_files['class'] 16 times, respecting the order of appearance.
Step5: Pre-processing
Resampling and normalization
Step6: Model selection and predictions | Python Code:
import scipy.io
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
Explanation: Predicting Seizure — Kaggle competition 2016
Introduction
Work in progress
An interesting article to start working with. It doesn't have many details on implementation, but it gives some ideas of what to do.
First steps
Load the data scientist weapons
End of explanation
import os
from collections import Counter
base_dir_train = u'/train_1/'
base_dir_tests = u'/test_1/'
INTERICTAL = 0
PREICTAL = 1
def get_class_from_name(name):
    """Gets the class from the file name.
    The class is defined by the last number written in the file name.
    For example:
    Input: ".../1_1_1.mat"
    Output: 1.0
    Input: ".../1_1_0.mat"
    Output: 0.0
    """
try:
return float(name[-5])
except:
return 0.0
assert get_class_from_name('/train_1/1_1_0.mat') == 0.0
assert get_class_from_name('/train_1/1_1_1.mat') == 1.0
def get_file_names_and_classes(base_dir, train_samples=600):
ignored_files = ['.DS_Store', '1_45_1.mat']
return np.array(
[
(file, get_class_from_name(file))
for file in os.listdir(base_dir) if file not in ignored_files
],
dtype=[('file', '|S16'), ('class', 'float32')]
)
data_files_all = get_file_names_and_classes(base_dir_train)
# Count the occurrences of Interictal and Preictal classes
unique, counts = np.unique(data_files_all['class'], return_counts=True)
occurrences = dict(zip(unique, counts))
print('Interictal samples:', occurrences.get(INTERICTAL))
print('Preictal samples:', occurrences.get(PREICTAL))
set_size = 149
# Randomly select an equal-size set of Interictal and Preictal samples
data_random_interictal = np.random.choice(data_files_all[data_files_all['class'] == 0], size=set_size)
data_random_preictal = np.random.choice(data_files_all[data_files_all['class'] == 1], size=set_size)
# Merge the data sets and shuffle the collection
data_files = np.concatenate([data_random_interictal, data_random_preictal])
data_files.dtype = data_files_all.dtype # Sets the same dtype as the original collection
np.random.shuffle(data_files)
print(data_files.shape, data_files.size)
Explanation: Load a balanced set of Interictal (Non-Seizure, 0) and Preictal (Pre-Seizure, 1) examples.
End of explanation
import itertools
from scipy.signal import correlate, resample
def get_X_from_files(base_dir, files, show_progress=True):
    """Given a list of filenames, returns the final data we want to train the models."""
X = None
total_files = len(files)
for i, filename in enumerate(files):
if show_progress and i % int(total_files / 10) == 0:
print(u'%{}: Loading file {}'.format(int(i * 100 / total_files), filename))
try:
mat_data = scipy.io.loadmat(''.join([base_dir, filename.decode('UTF-8')]))
except ValueError as ex:
print(u'Error loading MAT file {}: {}'.format(filename, str(ex)))
continue
# Gets a 16x240000 matrix => 16 channels reading data for 10 minutes at 400Hz
channels_data = mat_data['dataStruct'][0][0][0].transpose()
        # Resample each channel to get only one measurement per second
# 10 minutes of measurements, grouping data on each second
channels_data = resample(channels_data, 600, axis=1, window=400)
        # It seems that adding bivariate measurements helps a lot on
# signals pattern recognition.
        # For each channel, add the correlation measurements with all the other
# channels.
# TODO: This should be done in a more efficient way ¯\_(ツ)_/¯
correlations = None
for i in range(16):
correlations_i = np.array([])
for j in range (16):
if i != j:
corr_i = correlate(channels_data[i], channels_data[j], mode='same')
correlations_i = np.concatenate([correlations_i, corr_i])
if correlations is None:
correlations = correlations_i
else:
correlations = np.vstack([correlations, correlations_i])
channels_data = np.column_stack([channels_data, correlations])
X = np.vstack([X, channels_data]) if X is not None else channels_data
return X
X = get_X_from_files(base_dir_train, data_files['file'])
y = np.repeat(data_files['class'], 16, axis=0)
print('X_shape:', X.shape, 'X_size:', X.size)
print('y_shape:', y.shape, 'y_size:', y.size)
Explanation: Loading data from .mat files
Load data for each file and get the definitive training data.
Then create the y based on the classes of the files.
Note that because each file contains 16 channels, it is necessary to
repeat each class in data_files['class'] 16 times, respecting the order of appearance.
End of explanation
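To make the 16-fold label repetition concrete, here is a tiny supplementary check with toy values rather than the real data_files array.
# Each file contributes 16 channel rows to X, so its class label must be
# repeated 16 times while preserving the file order.
toy_classes = np.array([1.0, 0.0])
toy_y = np.repeat(toy_classes, 16, axis=0)
print(toy_y.shape)  # (32,)
print(toy_y[:16])   # sixteen 1.0s for the first file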
from sklearn.preprocessing import normalize
# Normalizes the data
normalize(X, copy=False)
# Plots a user normalized sample
matplotlib.rcParams['figure.figsize'] = (20.0, 5.0)
print('Showing case of file:', data_files['file'][0])
for i in range(16):
plt.subplot(8, 2, i + 1)
plt.plot(X[i])
Explanation: Pre-processing
Resampling and normalization
End of explanation
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
from sklearn import linear_model
clf = linear_model.LogisticRegression(C=16, n_jobs=1, solver='liblinear', verbose=5)
%time clf.fit(X_train, y_train)
clf.score(X_test, y_test)
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score
%time y_pred = clf.predict(X_test)
print(u'Accuracy:', accuracy_score(y_test, y_pred))
print(u'Precision:', precision_score(y_test, y_pred))
print(u'Recall:', recall_score(y_test, y_pred))
print(u'F1 score:', f1_score(y_test, y_pred, average='binary'))
Explanation: Model selection and predictions
End of explanation |
340 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Analyzing the GIFGIF dataset
GIFGIF is a project from the MIT media lab that aims at understanding the emotional content of animated GIF images.
The project covers 17 emotions, including happiness, fear, amusement, shame, etc.
To collect feedback from users, the web site shows two images at a time, and asks feedback as follows.
Which of the left or right image better expresses [emotion] ?
where [emotion] is one of 17 different possibilities.
Therefore, the raw data that is collected consists of outcomes of pairwise comparison between pairs of images.
Just the kind of data that choix is built for!
In this notebook, we will use choix to try making sense of the raw pairwise-comparison data.
In particular, we would like to embed the images on a scale (for a given emotion).
Dataset
We will use a dump of the raw data available at http
Step1: We also define a short utility function to display an image based on its identifier.
Step2: Processing the raw data
First, we need to transform the raw dataset into a format that choix can process.
Remember that choix encodes pairwise-comparison outcomes as tuples (i, j) (meaning "$i$ won over $j$"), and that items are assumed to be numbered by consecutive integers.
We begin by mapping all distinct images that appear in the dataset to consecutive integers.
Step3: Next, we parse the comparisons in the data and convert the image IDs to the corresponding integers.
We collect all the comparisons and filter them by emotion.
Step4: Parameter inference
Now, we are ready to fit a Bradley-Terry model to the data, in order to be able to embed the images on a quantitative scale (for a given emotion).
In the following, we consider happiness.
Step5: The parameters induce a ranking over the images.
Images ranked at the bottom are consistently found to express less happiness, and vice-versa for images ranked at the top.
Step6: Visualizing the results
The top three images that best express happiness are the following
Step7: The top three images that *least* express happiness are the following
Step8: Predicting future comparison outcomes
Based on the model learnt from the data, it is also possible to predict what a user would select as "better expressing happiness" for any pair of images.
Below is an example. | Python Code:
import choix
import collections
import numpy as np
from IPython.display import Image, display
# Change this with the path to the data on your computer.
PATH_TO_DATA = "/tmp/gifgif/gifgif-dataset-20150121-v1.csv"
Explanation: Analyzing the GIFGIF dataset
GIFGIF is a project from the MIT media lab that aims at understanding the emotional content of animated GIF images.
The project covers 17 emotions, including happiness, fear, amusement, shame, etc.
To collect feedback from users, the web site shows two images at a time, and asks feedback as follows.
Which of the left or right image better expresses [emotion] ?
where [emotion] is one of 17 different possibilities.
Therefore, the raw data that is collected consists of outcomes of pairwise comparison between pairs of images.
Just the kind of data that choix is built for!
In this notebook, we will use choix to try making sense of the raw pairwise-comparison data.
In particular, we would like to embed the images on a scale (for a given emotion).
Dataset
We will use a dump of the raw data available at http://lucas.maystre.ch/gifgif-data.
Download and uncompress the dataset (you don't need to download the images).
End of explanation
def show_gif(idx):
template = "http://media.giphy.com/media/{idx}/giphy.gif"
display(Image(url=template.format(idx=idx)))
# A random image.
show_gif("k39w535jFPYrK")
Explanation: We also define a short utility function to display an image based on its identifier.
End of explanation
# First pass over the data to transform GIFGIF IDs to consecutive integers.
image_ids = set()
with open(PATH_TO_DATA) as f:
next(f) # First line is header.
for line in f:
emotion, left, right, choice = line.strip().split(",")
if len(left) > 0 and len(right) > 0:
# `if` condition eliminates corrupted data.
image_ids.add(left)
image_ids.add(right)
int_to_idx = dict(enumerate(image_ids))
idx_to_int = dict((v, k) for k, v in int_to_idx.items())
n_items = len(idx_to_int)
print("Number of distinct images: {:,}".format(n_items))
Explanation: Processing the raw data
First, we need to transform the raw dataset into a format that choix can process.
Remember that choix encodes pairwise-comparison outcomes as tuples (i, j) (meaning "$i$ won over $j$"), and that items are assumed to be numbered by consecutive integers.
We begin by mapping all distinct images that appear in the dataset to consecutive integers.
End of explanation
data = collections.defaultdict(list)
with open(PATH_TO_DATA) as f:
next(f) # First line is header.
for line in f:
emotion, left, right, choice = line.strip().split(",")
if len(left) == 0 or len(right) == 0:
# Datum is corrupted, continue.
continue
# Map ids to integers.
left = idx_to_int[left]
right = idx_to_int[right]
if choice == "left":
# Left image won the comparison.
data[emotion].append((left, right))
if choice == "right":
# Right image won the comparison.
data[emotion].append((right, left))
print("Number of comparisons for each emotion")
for emotion, comps in data.items():
print("{: <14} {: >7,}".format(emotion, len(comps)))
Explanation: Next, we parse the comparisons in the data and convert the image IDs to the corresponding integers.
We collect all the comparisons and filter them by emotion.
End of explanation
# What does the data look like?
data["happiness"][:3]
%%time
params = choix.opt_pairwise(n_items, data["happiness"])
Explanation: Parameter inference
Now, we are ready to fit a Bradley-Terry model to the data, in order to be able to embed the images on a quantitative scale (for a given emotion).
In the following, we consider happiness.
End of explanation
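For intuition, here is a supplementary sketch with made-up parameter values (not the output of the fit above): under the Bradley-Terry model, the probability that image i wins a comparison against image j depends only on their parameters.
# P(i beats j) = exp(theta_i) / (exp(theta_i) + exp(theta_j))
theta_i, theta_j = 0.8, -0.3  # illustrative values only
p_i_wins = np.exp(theta_i) / (np.exp(theta_i) + np.exp(theta_j))
print("P(i beats j) = {:.3f}".format(p_i_wins))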
ranking = np.argsort(params)
Explanation: The parameters induce a ranking over the images.
Images ranked at the bottom are consistently found to express less happiness, and vice-versa for images ranked at the top.
End of explanation
for i in ranking[::-1][:3]:
show_gif(int_to_idx[i])
Explanation: Visualizing the results
The top three images that best express happiness are the following:
End of explanation
for i in ranking[:3]:
show_gif(int_to_idx[i])
Explanation: The top three images that *least* express happiness are the following:
End of explanation
rank = 2500
top = ranking[::-1][rank]
show_gif(int_to_idx[top])
bottom = ranking[rank]
show_gif(int_to_idx[bottom])
prob_top_wins, _ = choix.probabilities((top, bottom), params)
print("Prob(user selects top image) = {:.2f}".format(prob_top_wins))
Explanation: Predicting future comparison outcomes
Based on the model learnt from the data, it is also possible to predict what a user would select as "better expressing happiness" for any pair of images.
Below is an example.
End of explanation |
341 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: Load Corpus
The corpus I am using is just one I found online. The corpus you choose is central to generating realistic text.
Step2: Build Markov Chain
Step3: Generate One Tweet | Python Code:
import markovify
Explanation: Title: Generate Tweets Using Markov Chains
Slug: generate_tweets_using_markov_chain
Summary: Generate Tweets Using Markov Chains
Date: 2016-11-01 12:00
Category: Python
Tags: Other
Authors: Chris Albon
Preliminaries
End of explanation
# Get raw text as string
with open("brown.txt") as f:
text = f.read()
Explanation: Load Corpus
The corpus I am using is just one I found online. The corpus you choose is central to generating realistic text.
End of explanation
# Build the model.
text_model = markovify.Text(text)
Explanation: Build Markov Chain
End of explanation
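As an optional variation (this assumes markovify's state_size constructor argument; it is not part of the original recipe), a larger state size usually yields more coherent but less varied sentences.
# Condition on the previous three words instead of the default two.
text_model_wider = markovify.Text(text, state_size=3)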
# Print three randomly-generated sentences of no more than 140 characters
for i in range(3):
print(text_model.make_short_sentence(140))
Explanation: Generate One Tweet
End of explanation |
342 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Dimensionality Reduction with the Shogun Machine Learning Toolbox
By Sergey Lisitsyn (lisitsyn) and Fernando J. Iglesias Garcia (iglesias).
This notebook illustrates <a href="http
Step1: The function above can be used to generate three-dimensional datasets with the shape of a Swiss roll, the letter S, or an helix. These are three examples of datasets which have been extensively used to compare different dimension reduction algorithms. As an illustrative exercise of what dimensionality reduction can do, we will use a few of the algorithms available in Shogun to embed this data into a two-dimensional space. This is essentially the dimension reduction process as we reduce the number of features from 3 to 2. The question that arises is
Step2: As can be seen from the figure above, Isomap has been able to "unroll" the data, reducing its dimension from three to two. At the same time, points with similar colours in the input space are close to points with similar colours in the output space. That is, a new representation of the data has been obtained; this new representation maintains the properties of the original data, while it reduces the amount of information required to represent it. Note that the fact that the embedding of the Swiss roll looks good in two dimensions stems from the intrinsic dimension of the input data. Although the original data is in a three-dimensional space, its intrinsic dimension is lower, since the only degrees of freedom are the polar angle and the distance from the centre, or height.
Finally, we use yet another method, Stochastic Proximity Embedding (SPE) to embed the helix | Python Code:
import numpy
import os
SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')
def generate_data(curve_type, num_points=1000):
if curve_type=='swissroll':
tt = numpy.array((3*numpy.pi/2)*(1+2*numpy.random.rand(num_points)))
height = numpy.array((numpy.random.rand(num_points)-0.5))
X = numpy.array([tt*numpy.cos(tt), 10*height, tt*numpy.sin(tt)])
return X,tt
if curve_type=='scurve':
tt = numpy.array((3*numpy.pi*(numpy.random.rand(num_points)-0.5)))
height = numpy.array((numpy.random.rand(num_points)-0.5))
X = numpy.array([numpy.sin(tt), 10*height, numpy.sign(tt)*(numpy.cos(tt)-1)])
return X,tt
if curve_type=='helix':
tt = numpy.linspace(1, num_points, num_points).T / num_points
tt = tt*2*numpy.pi
X = numpy.r_[[(2+numpy.cos(8*tt))*numpy.cos(tt)],
[(2+numpy.cos(8*tt))*numpy.sin(tt)],
[numpy.sin(8*tt)]]
return X,tt
Explanation: Dimensionality Reduction with the Shogun Machine Learning Toolbox
By Sergey Lisitsyn (lisitsyn) and Fernando J. Iglesias Garcia (iglesias).
This notebook illustrates <a href="http://en.wikipedia.org/wiki/Unsupervised_learning">unsupervised learning</a> using the suite of dimensionality reduction algorithms available in Shogun. Shogun provides access to all these algorithms using Tapkee, a C++ library especialized in <a href="http://en.wikipedia.org/wiki/Dimensionality_reduction">dimensionality reduction</a>.
Hands-on introduction to dimension reduction
First of all, let us start right away by showing what the purpose of dimensionality reduction actually is. To this end, we will begin by creating a function that provides us with some data:
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
%matplotlib inline
def plot(data, embedded_data, colors='m'):
fig = plt.figure()
fig.set_facecolor('white')
ax = fig.add_subplot(121,projection='3d')
ax.scatter(data[0],data[1],data[2],c=colors,cmap=plt.cm.Spectral)
plt.axis('tight'); plt.axis('off')
ax = fig.add_subplot(122)
ax.scatter(embedded_data[0],embedded_data[1],c=colors,cmap=plt.cm.Spectral)
plt.axis('tight'); plt.axis('off')
plt.show()
from shogun import Isomap, features, MultidimensionalScaling
# wrap data into Shogun features
data, colors = generate_data('swissroll')
feats = features(data)
# create instance of Isomap converter and configure it
isomap = Isomap()
isomap.put('target_dim', 2)
# set the number of neighbours used in kNN search
isomap.put('k', 20)
# create instance of Multidimensional Scaling converter and configure it
mds = MultidimensionalScaling()
mds.put('target_dim', 2)
# embed Swiss roll data
embedded_data_mds = mds.embed(feats).get_feature_matrix()
embedded_data_isomap = isomap.embed(feats).get_feature_matrix()
plot(data, embedded_data_mds, colors)
plot(data, embedded_data_isomap, colors)
Explanation: The function above can be used to generate three-dimensional datasets with the shape of a Swiss roll, the letter S, or an helix. These are three examples of datasets which have been extensively used to compare different dimension reduction algorithms. As an illustrative exercise of what dimensionality reduction can do, we will use a few of the algorithms available in Shogun to embed this data into a two-dimensional space. This is essentially the dimension reduction process as we reduce the number of features from 3 to 2. The question that arises is: what principle should we use to keep some important relations between datapoints? In fact, different algorithms imply different criteria to answer this question.
Just to start, lets pick some algorithm and one of the data sets, for example lets see what embedding of the Swissroll is produced by the Isomap algorithm. The Isomap algorithm is basically a slightly modified Multidimensional Scaling (MDS) algorithm which finds embedding as a solution of the following optimization problem:
$$
\min_{x'_1, x'_2, \dots} \sum_i \sum_j \| d'(x'_i, x'_j) - d(x_i, x_j)\|^2,
$$
with defined $x_1, x_2, \dots \in X~~$ and unknown variables $x'_1, x'_2, \dots \in X'~~$ while $\text{dim}(X') < \text{dim}(X)~~~$,
$d: X \times X \to \mathbb{R}~~$ and $d': X' \times X' \to \mathbb{R}~~$ are defined as arbitrary distance functions (for example Euclidean).
Speaking less mathematically, the MDS algorithm finds an embedding that preserves pairwise distances between points as much as possible. The Isomap algorithm changes one small detail: the distance - instead of using local pairwise relationships, it takes a global factor into account via shortest paths on the neighborhood graph (the so-called geodesic distance). The neighborhood graph is defined as a graph with datapoints as nodes and weighted edges (with weight equal to the distance between points). The edge between points $x_i~$ and $x_j~$ exists if and only if $x_j~$ is among the $k~$ nearest neighbors of $x_i$. Later we will see that this 'global factor' changes the game for the Swiss roll dataset.
However, first we prepare a small function to plot any of the original data sets together with its embedding.
End of explanation
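To make the objective above concrete, here is a small supplementary sketch that evaluates the (unweighted) stress of a toy two-dimensional embedding of three points with plain numpy; it is illustrative only and independent of Shogun.
def mds_stress(X_orig, X_emb):
    # Sum of squared differences between pairwise distances in the original
    # space and in the embedded space (points are stored as columns).
    n = X_orig.shape[1]
    stress = 0.0
    for i in range(n):
        for j in range(n):
            d = numpy.linalg.norm(X_orig[:, i] - X_orig[:, j])
            d_emb = numpy.linalg.norm(X_emb[:, i] - X_emb[:, j])
            stress += (d_emb - d) ** 2
    return stress
toy_orig = numpy.random.rand(3, 3)  # three 3D points as columns
toy_emb = numpy.random.rand(2, 3)   # a toy 2D embedding of the same points
print(mds_stress(toy_orig, toy_emb))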
from shogun import StochasticProximityEmbedding
# wrap data into Shogun features
data, colors = generate_data('helix')
feats = features(data)
# create SPE instance
converter = StochasticProximityEmbedding()
converter.put('target_dim', 2)
# embed helix data
embedded_features = converter.embed(feats)
embedded_data = embedded_features.get_feature_matrix()
plot(data, embedded_data, colors)
Explanation: As can be seen from the figure above, Isomap has been able to "unroll" the data, reducing its dimension from three to two. At the same time, points with similar colours in the input space are close to points with similar colours in the output space. That is, a new representation of the data has been obtained; this new representation maintains the properties of the original data, while it reduces the amount of information required to represent it. Note that the fact that the embedding of the Swiss roll looks good in two dimensions stems from the intrinsic dimension of the input data. Although the original data is in a three-dimensional space, its intrinsic dimension is lower, since the only degrees of freedom are the polar angle and the distance from the centre, or height.
Finally, we use yet another method, Stochastic Proximity Embedding (SPE) to embed the helix:
End of explanation |
343 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
PropBank in NLTK
(C) 2019 by Damir Cavar
The material in this notebook is based on
Step1: Each propbank instance defines the following member variables
Step2: The location of the predicate and of the arguments are encoded using PropbankTreePointer objects, as well as PropbankChainTreePointer objects and PropbankSplitTreePointer objects. A PropbankTreePointer consists of a wordnum and a height
Step3: This identifies the tree constituent that is headed by the word that is the wordnum'th token in the sentence, and whose span is found by going height nodes up in the tree. This type of pointer is only useful if we also have the corresponding tree structure, since it includes empty elements such as traces in the word number count. The trees for 10% of the standard PropBank Corpus are contained in the treebank corpus
Step4: Propbank tree pointers can be converted to standard tree locations, which are usually easier to work with, using the treepos() method
Step5: In some cases, argument locations will be encoded using PropbankChainTreePointers (for trace chains) or PropbankSplitTreePointers (for discontinuous constituents). Both of these objects contain a single member variable, pieces, containing a list of the constituent pieces. They also define the method select(), which will return a tree containing all the elements of the argument. (A new head node is created, labeled "CHAIN" or "SPLIT", since the argument is not a single constituent in the original tree). Sentence #6 contains an example of an argument that is both discontinuous and contains a chain
Step6: The PropBank Corpus also provides access to the frameset files, which define the argument labels used by the annotations, on a per-verb basis. Each frameset file contains one or more predicates, such as 'turn' or 'turn_on', each of which is divided into coarse-grained word senses called rolesets. For each roleset, the frameset file provides descriptions of the argument roles, along with examples.
Step7: Note that the standard corpus distribution only contains 10% of the treebank, so the parse trees are not available for instances starting at 9353
Step8: However, if you supply your own version of the treebank corpus (by putting it before the nltk-provided version on nltk.data.path, or by creating a ptb directory as described above and using the propbank_ptb module), then you can access the trees for all instances.
A list of the verb lemmas contained in PropBank is returned by the propbank.verbs() method | Python Code:
from nltk.corpus import propbank
pb_instances = propbank.instances()
print(pb_instances)
Explanation: PropBank in NLTK
(C) 2019 by Damir Cavar
The material in this notebook is based on:
- The NLTK Howto on Propbank
- The Proposition Bank Website
- The Propbank GitHub repo
- The Google Propbank Archive
The PropBank Corpus augments the Penn Treebank syntactic trees with predicate-argument annotation. PropBank provides a specific annotation about verbs and arguments for every single tree in the Penn Treebank.
End of explanation
inst = pb_instances[103]
print("File ID:", inst.fileid)
print("Sentence Number:", inst.sentnum)
print("Word Number:", inst.wordnum)
inst.tagger
inst.inflection
infl = inst.inflection
infl.form, infl.tense, infl.aspect, infl.person, infl.voice
inst.roleset
inst.predicate
inst.arguments
Explanation: Each propbank instance defines the following member variables:
- Location information: fileid, sentnum, wordnum
- Annotator information: tagger
- Inflection information: inflection
- Roleset identifier: roleset
- Verb (aka predicate) location: predicate
- Argument locations and types: arguments
End of explanation
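A small supplementary loop (not part of the original howto, and assuming the corpus view supports slicing) that skims the first few instances and prints the fields just described.
for some_inst in pb_instances[:5]:
    print(some_inst.fileid, some_inst.sentnum, some_inst.wordnum, some_inst.roleset)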
print(inst.predicate.wordnum, inst.predicate.height)
Explanation: The location of the predicate and of the arguments are encoded using PropbankTreePointer objects, as well as PropbankChainTreePointer objects and PropbankSplitTreePointer objects. A PropbankTreePointer consists of a wordnum and a height:
End of explanation
tree = inst.tree
from nltk.corpus import treebank
assert tree == treebank.parsed_sents(inst.fileid)[inst.sentnum]
inst.predicate.select(tree)
for (argloc, argid) in inst.arguments:
print('%-10s %s' % (argid, argloc.select(tree).pformat(500)[:50]))
Explanation: This identifies the tree constituent that is headed by the word that is the wordnum'th token in the sentence, and whose span is found by going height nodes up in the tree. This type of pointer is only useful if we also have the corresponding tree structure, since it includes empty elements such as traces in the word number count. The trees for 10% of the standard PropBank Corpus are contained in the treebank corpus:
End of explanation
treepos = inst.predicate.treepos(tree)
print (treepos, tree[treepos])
Explanation: Propbank tree pointers can be converted to standard tree locations, which are usually easier to work with, using the treepos() method:
End of explanation
inst = pb_instances[6]
inst.roleset
argloc, argid = inst.arguments[2]
argloc
argloc.pieces
argloc.pieces[0].pieces
print(argloc.select(inst.tree))
Explanation: In some cases, argument locations will be encoded using PropbankChainTreePointers (for trace chains) or PropbankSplitTreePointers (for discontinuous constituents). Both of these objects contain a single member variable, pieces, containing a list of the constituent pieces. They also define the method select(), which will return a tree containing all the elements of the argument. (A new head node is created, labeled "CHAIN" or "SPLIT", since the argument is not a single constituent in the original tree). Sentence #6 contains an example of an argument that is both discontinuous and contains a chain:
End of explanation
expose_01 = propbank.roleset('expose.01')
turn_01 = propbank.roleset('turn.01')
print(turn_01)
for role in turn_01.findall("roles/role"):
print(role.attrib['n'], role.attrib['descr'])
from xml.etree import ElementTree
print(ElementTree.tostring(turn_01.find('example')).decode('utf8').strip())
Explanation: The PropBank Corpus also provides access to the frameset files, which define the argument labels used by the annotations, on a per-verb basis. Each frameset file contains one or more predicates, such as 'turn' or 'turn_on', each of which is divided into coarse-grained word senses called rolesets. For each roleset, the frameset file provides descriptions of the argument roles, along with examples.
End of explanation
inst = pb_instances[9352]
inst.fileid
print(inst.tree)
print(inst.predicate.select(inst.tree))
inst = pb_instances[9353]
inst.fileid
print(inst.tree)
print(inst.predicate.select(inst.tree))
Explanation: Note that the standard corpus distribution only contains 10% of the treebank, so the parse trees are not available for instances starting at 9353:
End of explanation
propbank.verbs()
Explanation: However, if you supply your own version of the treebank corpus (by putting it before the nltk-provided version on nltk.data.path, or by creating a ptb directory as described above and using the propbank_ptb module), then you can access the trees for all instances.
A list of the verb lemmas contained in PropBank is returned by the propbank.verbs() method:
End of explanation |
344 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This notebook was copied from this location.
Precision and Recall
Useful links
* https
Step1: Confusion matrix
Step2: The table below shows an example confusion matrix for a hypothetical test for a rare disease where only 2 people out of 100 have the disease. This is an unbalanced data set, as a much larger number, 98 out of 100, do not have the disease. The first named row has cases of people who have the disease and the second named row has cases of people who do not have the disease. The first named column has people who test positive and the second named column has people who test negative.
This leads to four numeric cells, with the top left containing true positive counts, the bottom left having false positives, the top right having false negatives, and the bottom right having true negative counts.
A simple way to create a very accurate test for this unbalanced example is to just assume everyone tests negative for the disease. This misses out on all the people who do actually have the disease and results in two false negative cases. However it correctly predicts 98 true negative cases. This results in a 98% accurate test. But this test cannot distinguish between people who have a disease and people who don't. Accuracy may not be a useful measure of the goodness of the test.
Two useful measures are precision and recall
Step3: An alternative test for the same rare disease where 2 out of 100 have the disease is shown below. Now there are 1 true positive, 2 false positives, 1 false negative and 96 true negatives.
This test has a lower accuracy, as it has correctly predicted 97 out of 100 cases, fewer than the previous test. This test also has a defined precision of 0.333333 and a recall of 0.5.
This test correctly identifies 1 out of the 2 people who have the disease.
Step4: To demonstrate the use of accuracy, precision and recall when measuring the performance of a classifier, we use the "Wisconsin Breast Cancer" data set.
Step5: This data set has 569 samples of which 357 are benign and 212 are malignant
Step6: We predict whether the cancer is benign or malignant using ten factors
Step7: We compare four classifiers | Python Code:
import sklearn
import pandas as pd
import numpy as np
Explanation: This notebook was copied from this location.
Precision and Recall
Useful links
* https://en.wikipedia.org/wiki/Confusion_matrix
* http://scikit-learn.org/stable/whats_new.html#version-0-17-1
A popular way to evaluate the performance of a machine learning algorithm is to use a confusion matrix. This is a table with two rows and two columns that displays the number of true positives, false positives, false negatives and true negatives.
End of explanation
index_names = ['predicted condition positive', 'predicted condition negative']
column_names = ['true condition positive', 'true condition negative']
Explanation: Confusion matrix
End of explanation
pd.DataFrame.from_records(
np.array([[0, 2], [0, 98]]).T, columns=column_names, index=index_names)
Explanation: The table below shows an example confusion matrix for a hypothetical test for a rare disease where only 2 people out of 100 have the disease. This is an unbalanced data set, as a much larger number, 98 out of 100, do not have the disease. The first named row has cases of people who have the disease and the second named row has cases of people who do not have the disease. The first named column has people who test positive and the second named column has people who test negative.
This leads to four numeric cells, with the top left containing true positive counts, the bottom left having false positives, the top right having false negatives, and the bottom right having true negative counts.
A simple way to create a very accurate test for this unbalanced example is to just assume everyone tests negative for the disease. This misses out on all the people who do actually have the disease and results in two false negative cases. However it correctly predicts 98 true negative cases. This results in a 98% accurate test. But this test cannot distinguish between people who have a disease and people who don't. Accuracy may not be a useful measure of the goodness of the test.
Two useful measures are precision and recall: Precision is a measure of how many of the selected items are relevant and recall is a measure of how many relevant items are selected.
precision = (true positives)/(true positives + false positives)
recall = (true positives)/(true positives + false negatives)
In the example below the precision is undefined while the recall is zero.
End of explanation
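A quick supplementary arithmetic check of the definitions above, using the counts from the alternative test described next (1 true positive, 2 false positives, 1 false negative).
tp, fp, fn = 1, 2, 1
precision = tp / (tp + fp)  # 0.333...
recall = tp / (tp + fn)     # 0.5
print(precision, recall)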
pd.DataFrame.from_records(
np.array([[1, 1], [2, 96]]).T, columns=column_names, index=index_names)
Explanation: An alternative test for the same rare disease where 2 out of 100 have the disease is shown below. Now there are 1 true positive, 2 false positives, 1 false negative and 96 true negatives.
This test has a lower accuracy, as it has correctly predicted 97 out of 100 cases, fewer than the previous test. This test also has a defined precision of 0.333333 and a recall of 0.5.
This test correctly identifies 1 out of the 2 people who have the disease.
End of explanation
from sklearn.datasets import load_breast_cancer
dataset = load_breast_cancer()
Explanation: To demonstrate the use of accuracy, precision and recall when measuring the performance of a classifier, we use the "Wisconsin Breast Cancer" data set.
End of explanation
target = pd.Series(dataset.target, dtype='category')
target.cat.rename_categories(['malignant', 'benign'], inplace=True)
target.value_counts()
Explanation: This data set has 569 samples of which 357 are benign and 212 are malignant
End of explanation
column_names = [
'radius', 'texture', 'perimeter', 'area',
'smoothness', 'compactness', 'concavity', 'concave_points',
'symmetry', 'fractal_dimension']
df = pd.DataFrame(data=dataset.data[:, :10], columns=column_names)
from sklearn.metrics import accuracy_score
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score
def get_metrics(target, predict, name):
return {
'classifier': name,
'accuracy': accuracy_score(target, predict),
'precision': precision_score(target, predict),
'recall': recall_score(target, predict)
}
from sklearn import linear_model
# C is the inverse of regularization parameter (smaller values specify strong regularization)
logreg = linear_model.LogisticRegression(C=1e5)
logreg.fit(df.values, dataset.target)
predict = logreg.predict(df.values)
result1 = get_metrics(dataset.target, predict, 'logistic regression')
from sklearn.svm import SVC
clf = SVC(kernel='rbf')
clf.fit(df.values, dataset.target)
predict = clf.predict(df.values)
result2 = get_metrics(dataset.target, predict, 'support vector (radial basis)')
from sklearn import tree
clf = tree.DecisionTreeClassifier(max_depth=10)
clf.fit(df.values, dataset.target)
predict = clf.predict(df.values)
result3 = get_metrics(dataset.target, predict, 'decision tree')
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier(n_estimators=50)
clf.fit(df.values, dataset.target)
predict = clf.predict(df.values)
result4 = get_metrics(dataset.target, predict, 'random forest')
Explanation: We predict whether the cancer is benign or malignant using ten factors: radius, texture, perimeter, area, smoothness, compactness, concavity, concave points, symmetry and fractal dimension.
End of explanation
pd.DataFrame([result1, result2, result3, result4], columns=['classifier', 'accuracy', 'precision', 'recall'])
Explanation: We compare four classifiers: logistic regression, support vector, decision tree and random forests on three different measures, accuracy, precision and recall. The decision tree and random forest classifiers are so good that they correctly classify 100% of the samples in this data set.
End of explanation |
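One caveat worth adding as a supplementary sketch (it assumes a scikit-learn version that provides model_selection and is not part of the copied notebook): the scores above are computed on the training data itself, so the 100% figures for the tree-based models largely reflect memorization; cross-validation gives a more honest estimate.
from sklearn.model_selection import cross_val_score
cv_clf = RandomForestClassifier(n_estimators=50)
cv_scores = cross_val_score(cv_clf, df.values, dataset.target, cv=5, scoring='accuracy')
print(cv_scores.mean())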
345 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step 1
Step1: Step 2
Step2: Step 3
Step3: Step 4
Step4: Let's now proceed to tokenize these tweets in addition to lemmatizing them! This will help improve the performance of our LDA model!
I will utilise spacy for this process as it is a production grade NLP library that is exceptionally fast!
Step5: Lets now add these tokenized tweets to our dictionary!
Step6: I will now turn the dictionary back into a dataframe and run it through the filtration function before re-casting the dataframe into a dictionary.
This time, we are running the filtration process on the tokenized tweets column and not the content column.
NLP models are very sensitive - ensuring consistent cleaning is important!
Step7: Gensim LDA Process
Fantastic - at this point, we have everything we need to proceed with LDA from the Gensim Library.
LDA via the Gensim library requires that our data be in a very specific format.
Broadly, LDA requires a Dictionary object that is later used to create a matrix called a corpus.
The Gensim LDA Dictionary will require that we pass in a list of lists. Every sublist will be a tweet that has been split.
Let's look at my first tweet as an example.
Before
Step8: I will now filter out extreme words - that is, words that appear far too often and words that are rare.
Step9: We now need to vectorize all the tweets so that they can be fed to the LDA algorithm! To do this, we will create a bag of words model from our tweets.
After putting all our tweets through this bag of words model, we will end up with a 'corpus' that represents all the tweets for a particular user. In this case, that user is myself.
We will save this corpus to disk as we go along! We will use the MmCorpus object from Gensim to achieve this.
Step10: Now for the LDA part!
I will be using the LDAMulticore class from gensim!
I set the passes parameter to 100 and the chunksize to 2000.
The chunksize will ensure it uses all the documents at once, and the passes parameter will ensure it looks at all the documents 100 times before converging.
As I am using my ENTIRE tweet history, I will create 30 topics!
I will adjust this to 10 when running lda on 2nd degree connections, as I will only have 200 of their tweets!
Step11: I can then save this lda model!
Step12: I now wish to extract all of the words that appear in each of the 30 topics that the LDA model was able to create.
For each word in a topic, I will ensure that it has a frequency not equal to 0.
I will place all these words into a list and then wrap a Counter object around it!
I am doing this as I want to see the distribution of words that appear across all topics for a particular user. The LDA process will highlight key words that a particular user often uses in their Twitter feed, across all topics that a particular user discusses. As such, the words they use will be indicative of the topics a Twitter user talks about!
The counter object will simply keep a count of how many times, out of a maximum of 30 (topics) a word appears, given it has a frequency greater than 0. That is, the word appears in a topic.
Step13: I will then place this LDA Counter Object back into our dictionary!
We will then pickle this object - we will use it again for our TF-IDF analysis!
Be sure to look at the file called lda.py to see how I structured the code to run through the 2nd degree connections! | Python Code:
gabr_tweets = extract_users_tweets("gabr_ibrahim", 2000)
Explanation: Step 1: Obtain my tweets!
I will obtain my entire tweet history! Note: For 2nd degree potential followers, I only extract 200 of their most recent tweets!
End of explanation
gabr_dict = dict()
gabr_dict['gabr_ibrahim'] = {"content" : [], "hashtags" : [], "retweet_count": [], "favorite_count": []}
for tweet in gabr_tweets:
text = extract_text(tweet)
hashtags = extract_hashtags(tweet)
rts = tweet.retweet_count
fav = tweet.favorite_count
gabr_dict['gabr_ibrahim']['content'].append(text)
gabr_dict['gabr_ibrahim']['hashtags'].extend(hashtags)
gabr_dict['gabr_ibrahim']["retweet_count"].append(rts)
gabr_dict['gabr_ibrahim']["favorite_count"].append(fav)
Explanation: Step 2: Create a dictionary from my tweets
This dictionary will have the same structure as our already collected 2nd degree followers
End of explanation
gabr_tweets_df = pd.DataFrame.from_dict(gabr_dict, orient='index')
gabr_tweets_df.head()
clean_gabr_tweets = filtration(gabr_tweets_df, "content")
clean_gabr_tweets = dataframe_to_dict(clean_gabr_tweets)
clean_gabr_tweets #this is a list of 1 dictionary
Explanation: Step 3: Create a dataframe from my tweets
We will now turn this dictionary into a dataframe - I do this as it allows me to utilise pandas in cleaning the content of my tweets!
After the cleaning on the 'content' column, I will convert the dataframe back into a dictionary.
End of explanation
import spacy
import nltk
from gensim.models import Phrases
from gensim.models.word2vec import LineSentence
from gensim.corpora import Dictionary, MmCorpus
from gensim.models.ldamulticore import LdaMulticore
import pyLDAvis
import pyLDAvis.gensim
from collections import Counter
from gensim.corpora.dictionary import Dictionary
nlp = spacy.load('en')
gabr_tweets = clean_gabr_tweets[0]['gabr_ibrahim']['content']
gabr_tweets[:5]
Explanation: Step 4: LDA Analysis
Let's now move onto the LDA pre-processing stage and analysis!
End of explanation
tokenized_tweets = []
for tweet in gabr_tweets:
tokenized_tweet = nlp(tweet)
tweet = "" # we want to keep each tweet seperate
for token in tokenized_tweet:
if token.is_space:
continue
elif token.is_punct:
continue
elif token.is_stop:
continue
elif token.is_digit:
continue
elif len(token) == 1:
continue
elif len(token) == 2:
continue
else:
tweet += str(token.lemma_) + " " #creating lemmatized version of tweet
tokenized_tweets.append(tweet)
tokenized_tweets = list(map(str.strip, tokenized_tweets)) # strip whitespace
tokenized_tweets = [x for x in tokenized_tweets if x != ""] # remove empty entries
tokenized_tweets[:5] # you can see how this is different to the raw tweets!
Explanation: Let's now proceed to tokenize these tweets in addition to lemmatizing them! This will help improve the performance of our LDA model!
I will utilise spacy for this process as it is a production grade NLP library that is exceptionally fast!
End of explanation
clean_gabr_tweets[0]['gabr_ibrahim']['tokenized_tweets'] = tokenized_tweets
Explanation: Lets now add these tokenized tweets to our dictionary!
End of explanation
clean_gabr_tweets_df = pd.DataFrame.from_dict(clean_gabr_tweets[0], orient='index')
clean_gabr_tweets_df.head()
clean_gabr_tweets_df = filtration(clean_gabr_tweets_df, "tokenized_tweets")
clean_gabr_tweets = dataframe_to_dict(clean_gabr_tweets_df)
clean_gabr_tweets[0]['gabr_ibrahim']['tokenized_tweets'][:5]
Explanation: I will now turn the dictionary back into a dataframe and run it through the filtration function before re-casting the dataframe into a dictionary.
This time, we are running the filtration process on the tokenized tweets column and not the content column.
NLP models are very sensitive - ensuring consistent cleaning is important!
End of explanation
list_of_tweets_gabr = clean_gabr_tweets[0]['gabr_ibrahim']['tokenized_tweets']
gensim_format_tweets = []
for tweet in list_of_tweets_gabr:
list_form = tweet.split()
gensim_format_tweets.append(list_form)
gensim_format_tweets[:5]
gensim_dictionary = Dictionary(gensim_format_tweets)
Explanation: Gensim LDA Process
Fantastic - at this point, we have everything we need to proceed with LDA from the Gensim Library.
LDA via the Gensim library requires that our data be in a very specific format.
Broadly, LDA requires a Dictionary object that is later used to create a matrix called a corpus.
The Gensim LDA Dictionary will require that we pass in a list of lists. Every sublist will be a tweet that has been split.
Let's look at my first tweet as an example.
Before:
['great turnout today hope able join slide available link video webinar come soon', tweet 2, tweet 3, ...]
Correct Gensim Format:
[['great', 'turnout', 'today', 'hope', 'able', 'join', 'slide', 'available','link', 'video', 'webinar', 'come', 'soon'], [tweet 2 in split form], [...],...]
End of explanation
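To illustrate the format above, here is a quick supplementary toy example - two fake tweets rather than my real data - showing what Dictionary and doc2bow produce!
toy_tweets = [["great", "turnout", "today"], ["great", "webinar", "today"]]
toy_dictionary = Dictionary(toy_tweets)
print(toy_dictionary.token2id)               # word -> integer id mapping
print(toy_dictionary.doc2bow(toy_tweets[0])) # list of (id, count) pairs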
gensim_dictionary.filter_extremes(no_below=10, no_above=0.4)
gensim_dictionary.compactify() # remove gaps after words that were removed
Explanation: I will now filter out extreme words - that is, words that appear far too often and words that are rare.
End of explanation
!pwd
file_path_corpus = "/home/igabr/new-project-4"
def bag_of_words_generator(lst, dictionary):
assert type(dictionary) == Dictionary, "Please enter a Gensim Dictionary"
for i in lst:
yield dictionary.doc2bow(i)
MmCorpus.serialize(file_path_corpus+"{}.mm".format("gabr_ibrahim"), bag_of_words_generator(gensim_format_tweets, gensim_dictionary))
corpus = MmCorpus(file_path_corpus+"{}.mm".format("gabr_ibrahim"))
corpus.num_terms # the number of terms in our corpus!
corpus.num_docs # the number of documents. These are the number of tweets!
Explanation: We now need to vectorize all the tweets so that they can be fed to the LDA algorithm! To do this, we will create a bag of words model from our tweets.
After putting all our tweets through this bag of words model, we will end up with a 'corpus' that represents all the tweets for a particular user. In this case, that user is myself.
We will save this corpus to disk as we go along! We will use the MmCorpus object from Gensim to achieve this.
End of explanation
lda = LdaMulticore(corpus, num_topics=30, id2word=gensim_dictionary, chunksize=2000, workers=100, passes=100)
Explanation: Now for the LDA part!
I will be using the LdaMulticore class from gensim!
I set the passes parameter to 100 and the chunksize to 2000.
The chunksize will ensure it uses all the documents at once, and the passes parameter will ensure it looks at all the documents 100 times before converging.
As I am using my ENTIRE tweet history, I will create 30 topics!
I will adjust this to 10 when running lda on 2nd degree connections, as I will only have 200 of their tweets!
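If you want a quick sanity check on the choice of num_topics before committing to a long run, gensim's CoherenceModel can score the fitted model - this is an optional aside of mine, not something the original notebook does:

```python
from gensim.models import CoherenceModel

# Higher coherence is generally better; compare a few candidate num_topics values.
coherence = CoherenceModel(model=lda, texts=gensim_format_tweets,
                           dictionary=gensim_dictionary, coherence='c_v')
print(coherence.get_coherence())
```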
End of explanation
lda.save(file_path_corpus+"lda_model_{}".format("gabr_ibrahim"))
lda = LdaMulticore.load(file_path_corpus+"lda_model_{}".format("gabr_ibrahim"))
Explanation: I can then save this lda model!
End of explanation
from collections import Counter
word_list = []
for i in range(30):
for term, frequency in lda.show_topic(i, topn=100): #returns top 100 words for a topic
if frequency != 0:
word_list.append(term)
temp = Counter(word_list)
len(temp)
# This can be done later to help filter the important words.
important_words = []
for k, v in temp.items():
if v >= 10:
if k not in nltk_stopwords:
doc = nlp(k)
for token in doc:
if not token.is_stop:
if len(token) != 2:
important_words.append(k)
important_words
len(important_words)
Explanation: I now wish to extract all of the words that appear in each of the 30 topics that the LDA model was able to create.
For each word in a topic, I will ensure that it has a frequency not equal to 0.
I will place all these words into a list and then wrap a Counter object around it!
I am doing this as I want to see the distribution of words that appear across all topics for a particular user. The LDA process will highlight key words that a particular user often uses in their Twitter feed, across all topics that a particular user discusses. As such, the words they use will be indicative of the topics a Twitter user talks about!
The counter object will simply keep a count of how many times, out of a maximum of 30 (topics) a word appears, given it has a frequency greater than 0. That is, the word appears in a topic.
End of explanation
clean_gabr_tweets[0]['gabr_ibrahim'].keys()
clean_gabr_tweets[0]['gabr_ibrahim']['LDA'] = temp
pickle_object(clean_gabr_tweets, "gabr_ibrahim_tweets_LDA_Complete")
Explanation: I will then place this LDA Counter Object back into our dictionary!
We will then pickle this object - we will use it again for our TF-IDF analysis!
Be sure to look at the file called lda.py to see how I structured the code to run through the 2nd degree connections!
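As a rough, hypothetical outline only (the actual lda.py may be organised differently), the per-user pipeline it wraps would look something like this, reusing the same pieces shown above:

```python
# Sketch: assumes the same imports as above (Dictionary, LdaMulticore, Counter).
def lda_for_user(tweets, num_topics=10):
    docs = [t.split() for t in tweets]                    # tokenized tweets for one user
    dictionary = Dictionary(docs)
    dictionary.filter_extremes(no_below=10, no_above=0.4)  # thresholds as above; tune for small corpora
    corpus = [dictionary.doc2bow(doc) for doc in docs]
    lda = LdaMulticore(corpus, num_topics=num_topics, id2word=dictionary, passes=100)
    words = [term for i in range(num_topics)
             for term, freq in lda.show_topic(i, topn=100) if freq != 0]
    return Counter(words)
```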
End of explanation |
346 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Python for Bioinformatics
This Jupyter notebook is intended to be used alongside the book Python for Bioinformatics
Chapter 2
Step1: Mathematical Operations
Step2: BATCH MODE
Listing 2.1
Step3: Listing 2.2
Step4: Listing 2.3 | Python Code:
print('Hello World!')
print("Hello", "World!")
print("Hello","World!",sep=";")
print("Hello","World!",sep=";",end='\n\n')
name = input("Enter your name: ")
name
1+1
'1'+'1'
"A string of " + 'characters'
'The answer is ' + 42
'The answer is ' + str(42)
'The answer is {0}'.format(42)
number = 42
'The answer is {0}'.format(number)
Explanation: Python for Bioinformatics
This Jupyter notebook is intented to be used alongside the book Python for Bioinformatics
Chapter 2: First Steps with Python
End of explanation
12*2
30/3
2**8/2+100
10/4
10//4
Explanation: Mathematical Operations
End of explanation
print("Hello World!")
Explanation: BATCH MODE
Listing 2.1: hello.py: A “Hello World!” program
End of explanation
#!/usr/bin/python
print("Hello World!")
Explanation: Listing 2.2: hello2.py: Hello World! with shebang
End of explanation
#!/usr/bin/env python
# The next line prints the string "Hello World!"
print("Hello World!")
Explanation: Listing 2.3: Hello World! with comments
End of explanation |
347 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Numpy Gems, Part 3
Much of scientific computing revolves around the manipulation of indices. Most formulas involve sums of things and at the core of it the formulas differ by which things we're summing.
Being particularly clever about indexing helps with that. A complicated example is the FFT. A less complicated example is computing the inverse of a permutation
Step1: The focus of this post is to expand on an extremely useful, vectorizable isomorphism between indices, that comes up all the time
Step2: This brings us to our first numpy gem of this post, to check that our isomorphism is surjective, np.triu_indices.
Step3: The advantage over indexing into np.triu_indices is of course the scenario where you don't want to fully materialize all edges in memory, such as in frontier expansions for graph search.
You might be wondering how dangerous that np.sqrt is, especially for large numbers. Since we're concerned about the values of np.sqrt for inputs at least 1, and on this domain the mathematical function is sublinear, there's actually less rounding error in representing the square root of an integer with a double than the input itself. Details here.
Of course, we're in trouble if 8 * x + 1 cannot even up to ULP error be represented by a 64-bit double. It's imaginable to have graphs on 2**32 vertices, so it's not a completely artificial concern, and in principle we'd want to have support for edges up to index value less than $\binom{2^{32}}{2}=2^{63} - 2^{32}$. Numpy correctly refuses to perform the mapping in this case, throwing on totup(2**61).
In this case, some simple algebra and recalling that we don't need a lot of precision anyway will save the day. | Python Code:
import numpy as np
np.random.seed(1234)
x = np.random.choice(10, replace=False, size=10)
s = np.argsort(x)
inverse = np.empty_like(s)
inverse[s] = np.arange(len(s), dtype=int)
np.all(x == inverse)
Explanation: Numpy Gems, Part 3
Much of scientific computing revolves around the manipulation of indices. Most formulas involve sums of things and at the core of it the formulas differ by which things we're summing.
Being particularly clever about indexing helps with that. A complicated example is the FFT. A less complicated example is computing the inverse of a permutation:
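(An aside of mine, not in the original post: for a permutation s, the scatter above is equivalent to a single argsort.)

```python
inverse_alt = np.argsort(s)                  # argsort of a permutation is its inverse
assert np.array_equal(inverse, inverse_alt)  # same result as the scatter above
```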
End of explanation
# an edge index is determined by the isomorphism from
# ([n] choose 2) to [n choose 2]
# drop (i, j) to (i, j - i - 1) first. then:
# (0, 0) (0, 1) (0, 2)
# (1, 0) (1, 1)
# (2, 0)
# isomorphism goes in downward diagonals
# like valence electrons in chemistry
def c2(n):
return n * (n - 1) // 2
def fromtup(i, j):
j = j - i - 1
diagonal = i + j
return c2(diagonal + 1) + i
def totup(x):
# https://math.stackexchange.com/a/1417583 + some int/float rewriting
diagonal = (1 + np.sqrt(8 * x + 1).astype(np.uint64)) // 2 - 1
i = x - c2(diagonal + 1)
j = diagonal - i
j = j + i + 1
return i, j
nverts = 1343
edges = np.arange(c2(nverts), dtype=int)
np.all(fromtup(*totup(edges)) == edges)
Explanation: The focus of this post is to expand on an extremely useful, vectorizable isomorphism between indices, that comes up all the time: indexing pairs. In particular, it's often the case that we'd want to come up with an a priori indexing scheme into a weighted, complete undirected graph on $V$ vertices and $E$ edges.
In particular, our edge set is $\binom{[V]}{2}=\left\{(0, 1), (0, 2), \cdots, (V-3, V-2), (V-3, V-1), (V-2, V-1)\right\}$, the set of ordered pairs $(i, j)$ with $i < j$. Our index set is $\left[\binom{V}{2}\right]=\{0, 1, 2, \cdots, V(V-1)/2 - 1\}$ (note we're 0-indexing here).
Can we come up with an isomorphism between these two sets that vectorizes well?
A natural question is why not just use a larger index. Say we're training a GGNN, and we want to maintain embeddings for our edges. Our examples might be in a format where we have two vertices $(v_1, v_2)$ available. We'd like to index into an edge array maintaining the corresponding embedding. Here, you may very well get away with using an array of size $V^2$. That takes about twice as much memory as you need, though.
A deeper problem is simply that you can represent invalid indices, and if your program manipulates the indices themselves, this can cause bugs. This matters in settings like GraphBLAS where you're trying to vectorize classical graph algorithms.
The following presents a completely static isomorphism that doesn't need to know V in advance.
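For intuition (illustration only, using the totup defined in this post's code), the first few indices walk down the diagonals like this:

```python
for k in range(6):
    print(k, totup(k))
# 0 (0, 1)   1 (0, 2)   2 (1, 2)   3 (0, 3)   4 (1, 3)   5 (2, 3)
```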
End of explanation
left, right = totup(edges)
expected_left, expected_right = np.triu_indices(nverts, k=1)
from collections import Counter
Counter(zip(left, right)) == Counter(zip(expected_left, expected_right))
Explanation: This brings us to our first numpy gem of this post, to check that our isomorphism is surjective, np.triu_indices.
End of explanation
x = 2**53
float(8 * x + 1) == float(8 * x)
def totup_flexible(x):
x = np.asarray(x)
assert np.all(x <= 2 ** 63 - 2**32)
if x > 2 ** 53:
s = np.sqrt(2) * np.sqrt(x)
s = s.astype(np.uint64)
# in principle, the extra multiplication here could require correction
# by at most 1 ulp; luckily (s+1)**2 is representable in u64
# because (sqrt(2)*sqrt(2**63 - 2**32)*(1+3*eps) + 1) is (just square it to see)
s3 = np.stack([s - 1, s, s + 1]).reshape(-1, 3)
s = 2 * s3[np.arange(len(s3)), np.argmin(s3 ** 2 - 2 * x, axis=-1)]
else:
s = np.sqrt(8 * x + 1).astype(np.uint64)
add = 0 if x > 2 ** 53 else 1
diagonal = (1 + s) // 2 - 1
diagonal = diagonal.reshape(x.shape)
i = x - c2(diagonal + 1)
j = diagonal - i
j = j + i + 1
return i, j
x = 2 ** 63 - 2 ** 32
fromtup(*totup_flexible(x)) == x
Explanation: The advantage over indexing into np.triu_indices is of course the scenario where you don't want to fully materialize all edges in memory, such as in frontier expansions for graph search.
You might be wondering how dangerous that np.sqrt is, especially for large numbers. Since we're concerned about the values of np.sqrt for inputs at least 1, and on this domain the mathematical function is sublinear, there's actually less rounding error in representing the square root of an integer with a double than the input itself. Details here.
Of course, we're in trouble if 8 * x + 1 cannot even up to ULP error be represented by a 64-bit double. It's imaginable to have graphs on 2**32 vertices, so it's not a completely artificial concern, and in principle we'd want to have support for edges up to index value less than $\binom{2^{32}}{2}=2^{63} - 2^{32}$. Numpy correctly refuses to perform the mapping in this case, throwing on totup(2**61).
In this case, some simple algebra and recalling that we don't need a lot of precision anyway will save the day.
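Roughly (my summary of what totup_flexible does above): for large $x$ we have $\sqrt{8x+1}\approx\sqrt{8x}=2\sqrt{2}\sqrt{x}$, which never requires forming $8x+1$; computing $s=\sqrt{2}\sqrt{x}$, testing the candidates $s-1, s, s+1$ against $2x$, and doubling the best one recovers an accurate stand-in for $\sqrt{8x+1}$ with the limited precision we actually need.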
End of explanation |
348 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Notebook to work with Altimetry and Lake Surface Area
Step1: GRLM Altimetry data from July 22 2008 to September 3, 2016
Create new columns of year, month, day in a convenient format
Step2: Interpolate the missing data points
Step3: Add time information to the dataframe
Step4: Resample the data to get monthly and annual variation in lake height
Step5: MODIS data Lake Surface Area (Feb 18, 2000 to Aug 13, 2015)
Step6: Create subsets of both vectors (altimetry and surface area) for the overlapping period
Step7: Compute correlation coefficient | Python Code:
% matplotlib inline
import pandas as pd
import glob
import matplotlib.pyplot as plt
GRLM = "345_GRLM10.txt"; print GRLM
df_grlm = pd.read_csv(GRLM, skiprows=43, delim_whitespace=True, names="mission,cycle,date,hour,minute,lake_height,error,mean(decibels),IonoCorrection,TropCorrection".split(","), engine='python', index_col=False)
df_grlm.head(5)
Explanation: Notebook to work with Altimetry and Lake Surface Area
End of explanation
df_grlm = pd.read_csv(GRLM, skiprows=43, delim_whitespace=True, names="mission,cycle,date,hour,minute,lake_height,error,mean(decibels),IonoCorrection,TropCorrection".split(","), engine='python', index_col=False)
def get_year(date): return int(str(date)[0:4])
def get_month(date): return int(str(date)[4:6])
def get_day(date): return int(str(date)[6:])
df_grlm['year'] = df_grlm['date'].apply(get_year)
df_grlm['month'] = df_grlm['date'].apply(get_month)
df_grlm['day'] = df_grlm['date'].apply(get_day)
df_grlm = df_grlm.where(df_grlm.minute < 61 ) # remove lines that do not have time
df_grlm = df_grlm.where(df_grlm.lake_height < 900 ) # remove entries that do not have lake-height
df_grlm.lake_height.plot(); plt.title("Actual data without resampling"); plt.ylabel("Variation (m)")
Explanation: GRLM Altimetry data from July 22 2008 to September 3, 2016
Create new columns of year, month, day in a convenient format
End of explanation
df_grlm.lake_height.interpolate().plot(); plt.title("Interpolated Actual data without resampling"); plt.ylabel("Variation (m)")
Explanation: Interpolate the missing data points
End of explanation
df = df_grlm
df[["year", "month", "day", "hour", "minute"]] = df[["year", "month", "day", "hour", "minute"]].fillna(0).astype(int)
df['Time'] = df.year.astype(str).str.cat(df.month.astype(str).astype(str), sep='-').str.cat(df.day.astype(str), sep='-')\
.str.cat(df.hour.astype(str).astype(str), sep='-').str.cat(df.minute.astype(str).astype(str), sep='-')
df = df.where(df.year>10) # to get rid of all the nan values
df.index = pd.to_datetime(pd.Series(df["Time"]), format="%Y-%m-%d-%H-%M");
print df.index[0:3], df.index[-3:]
Explanation: Add time information to the dataframe
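As an aside (my addition, not in the original notebook): recent pandas versions can build the same DatetimeIndex directly from the component columns, avoiding the string concatenation. A sketch, assuming the lowercase year/month/day/hour/minute columns above hold valid values:

```python
df.index = pd.to_datetime(df[["year", "month", "day", "hour", "minute"]])
```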
End of explanation
df["lake_height"].resample("M").mean().plot(); plt.title("Mean Monthly Altimetry"); plt.ylabel("Variation (m)")
df["lake_height"].resample("A").mean().plot(); plt.title("Mean Annual Altimetry"); plt.ylabel("Variation (m)")
Explanation: Resample the data to get monthly and annual variation in lake height
End of explanation
df_modis = pd.read_csv('MODIS_t.txt', names=["Area"], engine='python', index_col=False)
df_time = pd.read_csv('DV.txt', sep = "\t", names=["Year", "Month", "Day", "", "", ""], engine='python', index_col=False)
df_time['Time'] = df_time.Year.astype(str).str.cat(df_time.Month.astype(str).astype(str), sep='-').str.cat(df_time.Day.astype(str), sep='-')
df_time = df_time.where(df_time.Year>10) # to get rid of all the nan values
df_modis.index = pd.to_datetime(pd.Series(df_time["Time"]), format="%Y-%m-%d")#df.index[0:3]
df_modis.plot(); plt.title("MODIS data - Surface Area"); plt.ylabel("Surface Area (sq.m.?)")
Explanation: MODIS data Lake Surface Area (Feb 18, 2000 to Aug 13, 2015)
End of explanation
df_glrm_subset = df["lake_height"].resample("D").mean().interpolate()
df_glrm_subset = df_glrm_subset[(df_glrm_subset.index > '2008-07-22') & (df_glrm_subset.index <= '2015-08-13')]
df_glrm_subset.plot(); plt.legend(); plt.title("Subset of Altimetry"); plt.ylabel("Variation (m)")
df_glrm_subset.index
df_modis_daily = df_modis["Area"].resample("D").mean().interpolate()
df_modis_subset = df_modis_daily[(df_modis_daily.index > '2008-07-22') & (df_modis_daily.index <= '2015-08-13')]
df_modis_subset.plot()
df_modis_subset.index
# QA: Create a time series of time alone, to check the number of data points that we should have for days.
# Note the length used for the periods argument below
print pd.date_range('22/07/2008', periods=len(df_modis_subset), freq='D')
# Check if the two vectors are of the same length
print len(df_glrm_subset.tolist()), len(df_modis_subset.tolist())
Explanation: Create subsets of both vectors (altimetry and surface area) for the overlapping period
End of explanation
import numpy
cor = numpy.corrcoef(df_glrm_subset.resample("W").mean().interpolate().tolist(),
df_modis_subset.resample("W").mean().interpolate().tolist())
print "correlation coefficient is: " , cor[0][1]
Explanation: Compute correlation coefficient
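For a quick cross-check (mine, not in the original notebook), pandas can compute the same Pearson correlation while aligning the two series on their DatetimeIndex:

```python
weekly_height = df_glrm_subset.resample("W").mean().interpolate()
weekly_area = df_modis_subset.resample("W").mean().interpolate()
print(weekly_height.corr(weekly_area))   # should agree with cor[0][1] above
```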
End of explanation |
349 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
[Py-OO] Aula 03
Modelo de dados do Python
O que você vai aprender nesta aula?
Após o término da aula você terá aprendido
Step1: Podemos acessar as cartas do baralho por índice
Step2: Também podemos realizar slicing no baralho
Step3: E iterá-lo
Step4: Iterá-lo de trás para frente
Step5: Enumerá-lo!!!111!!!onze!!11!
Step6: Sorteio de cartas usando o módulo random
Step7: Sorteando 5 cartas (pode haver repetição)
Step8: Também podemos verificar se uma carta específica está no baralho
Step9: E se saber quantas cartas há no baralho
Step10: Você deve estar se perguntando quanto custou para implementar tudo isso? Respota
Step12: Podemos somar vetores usando o operador +
Step13: Usar o operador de subtração
Step14: Multiplicação por escalar
Step15: Valor absoluto (distância do vetor até a origem)
Step16: Comparação de vetores por valor
Step17: Podemos fazer verificações booleanas com o vetor
Step18: Esse exemplo usou a classe Vetor demonstrada a seguir, que implementa as operações demonstradas por meio dos métodos especiais __repr__, __abs__, __add__, __bool__, __eq__, __sub__ e __mul__
Step20: No exemplo anterior tentamos multiplicar um int por um Vetor, porém foi levantada uma exceção já que o tipo int não sabe multiplicar por Vetor. Apenas Vetor sabe multiplicar por escalar.
Para resolver esse problema precisamos antes entender como funciona x * y
Step21: Esse resultado não faz sentido, não é assim que multiplicação de vetores funciona.
Não vamos implementar aqui a multiplicação de vetores, pois o foco da aula é ensinar programação e não matemática. Portanto, precisamos permitir a multiplicação de vetores apenas por escalares, vamos corrigir a função __mul__
Step22: Essa comparação deveria retornar False, não levantar uma exceção. Podemos corrigir esse problema da seguinte maneira
Step23: Nosso vetor ainda não suporta operações unárias como -v e +v
Step24: Para que esses operadores funcionem precisamos definir os métodos __neg__ e __pos__
Step25: Ao invés disso
Step26: O desempacotamento facilita ainda mais nossa vida se tivessemos uma lista de vetores
Step27: Pois poderiamos ter acesso facilitado a x e y durante uma iteração usando o desempacotamento de sequências
Step28: Ao invés de ter que acessar os atributos diretamente
Step29: Antes de implementar essa funcionalidade precisamos entender como funciona o desempacotamento de sequências
Step30: O que acontece por trás de tudo isso é
Step31: Ele é iterado uma vez e o resultado da iteração é atribuído a primeira variável
Step32: E é iterado até chegar ao último elemento
Step33: Agora que sabemos disso fica claro que, para nosso vetor ser desempacotado precisamos torná-lo iterável. Para isso podemos definir o método __iter__ que deve retornar um iterador
Step34: Se realizarmos uma soma acumulada com outro vetor
Step35: Um novo objeto é criado, pois o objeto referenciado pela variável v não é mais o mesmo (a identidade dos objetos são diferentes).
Para que essas operações de fato modifiquem um objeto, como acontecem com objetos mutáveis
Step36: O objeto permanece o mesmo, seu valor que é alterado.
Matemáticamente não faz muito sentido ter um vetor mutável, mas para entendermos melhor esses conceitos vamos fazer um VetorMutável, como subclasse de Vetor, que altere o valor do vetor quanto as operações += e *= forem usadas implementando os métodos __iadd__ e __imul__
Step37: Essa classe VetorMutavel pode ser implementada da seguinte maneira
Step38: Classes
Step39: Quando classes são chamadas retornam instâncias
Step40: Métodos de instâncias
Step41: Geradores
Step42: Chamar geradores retorna objetos geradores que executam o código definido
Step43: Para acessar o conteúdo do gerador precisamos iterá-lo, para isso podemos usar a função embutida next()
Step44: Porém se requisitamos mais valores de um gerador que ele pode gerar uma exceção é levantada
Step45: Veremos mais sobre geradores nas próximas aulas. Para saber mais sobre chamáveis consulte o python data model e como chamáveis são expressados
Objetos
Por fim, objetos que definem um método __call__ também são chamáveis.
Para demonstrar isso vamos implementar uma tombola (gaiola de bingo). A tombola pode
Step46: Vamos criar nossa tombola que armazena os números de 1 a 20
Step47: Verificaremos seus itens
Step48: Está vazia?
Step49: Misturando
Step50: Sorteando um item da maneira clássica
Step51: Aproveitando o método __call__ que definimos podemos sortear chamado o objeto tombola sem chamar o método tombola.sorteia()
Step52: Os operadores "aritméticos" (como +, * etc.) também podem ser aplicados a outros objetos para realizar operações que fazem sentido a esse objetos. Como por exemplo em listas e strings, em que o operador + realiza a concatenação
Step53: Para mostrar como isso funciona vamos implementar uma tombola expansível que torne possível juntar os itens dessa tombola com outra tombola ou um iterável.
Vamos definir o método __add__ para permitir a "soma" de tombolas
Step54: Na linha 3 verificamos se o objeto somado é uma instância de Tombola, isso permite que nossa TombolaExpansivel seja somada com Tombola e todas suas subclasses
Step55: Podemos somar a instância de TombolaExpansivel com Tombola e suas subclasses
Step56: Sobrescrevendo o método __add__ já é possível usar a soma atribuída, porém haverá um problema indesejado
Step57: Vemos que as identidades dos objetos atribuidos a tombola_exp são diferentes. Isso por que a atribuição acumulada, por padrão, na verdade faz
Step58: E nossa função o __add__ cria um novo objeto. Como queremos que nossa TombolaExpansivel seja mutável, precisamos definir o método __iadd__ para modificar a instância.
Aproveitando que vamos mexer no __iadd__ podemos melhorar nossa tombola para receber item de qualquer iterável e não somente de Tombola e suas subclasses
Step59: Linha 9 e 10 | Python Code:
from exemplos.baralho import Baralho
baralho = Baralho()
Explanation: [Py-OO] Aula 03
Modelo de dados do Python
O que você vai aprender nesta aula?
Após o término da aula você terá aprendido:
O que é o modelo de dados do Python
Para que servem e como funcionam métodos mágicos
Protocolos em Python
Sequência
Sobrecarga de operadores
Duck Typing
Este material usou o Capítulo 1 (Modelo de dados do Python) do livro Python Fluente do Luciano Ramalho
Nesta aula vamos falar sobre como funciona o modelo de dados do Python.
O Python é uma linguagem conhecida por sua consistência. Isso permite que, após trabalhar certo tempo com a linguagem, você consiga ter palpiters corretos sobre recursos do Python que você ainda não domina.
Um exemplo da consistência da linguagem se dá pela função len(), que apesar de parecer estranho de se usar - len(collection) ao invés de collection.len() como é feito em outras linguagens - sabemos, conforme visto no curso, que podemos usá-la para qualquer coleção, enquanto outras linguagens possuem métodos de nomes diferentes para realizar essa mesma operação.
O responsável por consistência (e estranheza) é o Python data model (modelo de dados do Python) que descreve a API que pode ser usada para fazer que seus próprios objetos interajam bem com os recursos mais idiomáticos da linguagem. Ele descreve os objetos e como estes interagem entre si.
O modelo de dados formaliza as interfaces dos blocos de construção da própria linguagem, por exemplo, as sequências, os iteradores, as funções, as classes, os gerenciadores de contexto e assim por diante.
O python faz isso usando os métodos especiais: o interpretador do Python chama esses métodos para realizar operações básicas em objetos, geralmente acionados por uma sintaxe especial.
Os métodos especiais são sempre escritos com underscores duplos no início e no fim (como __getitem__). Por exemplo a sintaxe especial obj[chave] é tratada pelo método especial __getitem__. Quando o interpretador avalia colecao[chave] ele chama colecao.__getitem__(chave).
Vamos mostrar um exemplo de como podemos usar o modelo de dados do python a nosso favor. Vamos criar um baralho pythônico:
End of explanation
baralho[0]
baralho[-1]
Explanation: Podemos acessar as cartas do baralho por índice:
End of explanation
baralho[:5]
baralho[15:20]
baralho[-5:]
Explanation: Também podemos realizar slicing no baralho:
End of explanation
for carta in baralho:
print(carta)
Explanation: E iterá-lo:
End of explanation
for carta in reversed(baralho):
print(carta)
Explanation: Iterá-lo de trás para frente:
End of explanation
for carta in enumerate(baralho):
print(carta)
Explanation: Enumerá-lo!!!111!!!onze!!11!
End of explanation
from random import choice
choice(baralho)
choice(baralho)
choice(baralho)
Explanation: Sorteio de cartas usando o módulo random:
End of explanation
mao = [choice(baralho) for _ in range(5)]
mao
Explanation: Sorteando 5 cartas (pode haver repetição):
End of explanation
from exemplos.baralho import Carta
Carta('10', 'espadas') in baralho
Carta('3', 'alabardas') in baralho
Explanation: Também podemos verificar se uma carta específica está no baralho:
End of explanation
len(baralho)
Explanation: E se saber quantas cartas há no baralho:
End of explanation
from exemplos.vetor1 import Vetor
v1 = Vetor(1, -2)
v2 = Vetor(3, 4)
Explanation: Você deve estar se perguntando quanto custou para implementar tudo isso? Respota: muito pouco.
```py
Arquivo: 02-python-oo/aula-03/exemplos/baralho.py
from collections import namedtuple
Carta = namedtuple('Carta', ['valor', 'naipe'])
class Baralho:
valores = [str(n) for n in range(2, 11)] + list('AJQK')
naipes = 'copas ouros paus espadas'.split()
def __init__(self):
self.cartas = [Carta(v, n) for v in self.valores for n in self.naipes]
def __len__(self):
return len(self.cartas)
def __getitem__(self, pos):
return self.cartas[pos]
```
Métodos especiais
Vimos duas vantagens de usar os métodos especiais para tirar proveito do modelo de dados do Python:
Os usuarios de suas classes não precisarão memorizar nomes arbitrários de métodos para realizar operações comuns (Como
obter a quantidade de itens? Uso .size(), .length(), ou o quê?)
Podemos se beneficiar da biblioteca-padrão do Python e não reinventar a roda, como visto no uso das funções random.choice e reversed.
Os métodos especiais foram criados para serem chamados pelo interpretador Python e não diretamente. Não usamos objeto.__len__, para obter a quantidade de elementos, mas sim len(objeto). Se objeto for a instância de uma classe definida pelo usuário (programador), o Python chamará o método __len__ da instância.
Na grande maioria das vezes a chamada aos métodos especiais será feita de forma implícita. Por exemplo, a construção de for i in x invoca iter(x), que poderá chamar x.__iter__() se existir.
Um exemplo comum de implementação e chamada de métodos especiais diretamente é o __init__ para sobrescrever o inicilizador da superclasse. Também é comum invocar o inicializador da superclasse diretamente com, por exemplo, super().__init__() ao implementar seu próprio inicializador.
Caso precise chamar um método especial, em geral é muito melhor chamar a função embutida relacionada ou a sintaxe especial (obj[chave], len, iter, str etc.). Essas funções embutidas invocam o método especiail correspondente, porém, com frequência, oferecem outros serviços e - para os tipos embutidos - são mais rápidas que chamadas de métodos.
#TODO
Monkey patching em __setitem__
Protocolo de Sequência
Nós podemos fazer todas essas operações no Baralho sem herdar de alguma classe espeicial, pois implementamos o protocolo de sequência como definido no modelo de dados do Python. Agora ficam duas dúvidas: o que é exatamente um protocolo e uma sequência?
No contexto de programação orientada a objetos um protocolo é uma interface informal definida somente na documentação e não no código. Por exemplo, o protocolo de sequência em Python implica somente os métodos __len__ e __getitem__. Qualquer classe que implemente esses métodos poderá ser usada em qualquer lugar em que se espera uma sequência.
Esse tipo de programação ficou conhecida como Duck Typing e é muito comum em linguagens dinâmicas como Python e Ruby.
<center>A duck typing (pode ser traduzido como: tipagem pato ou pato digitando)</center>
"Não verifique se é um pato: verifique se faz quack como um pato, anda como um pato etc., de acordo com o subconjunto exatao de comportamento de pato de que você precisa para usar a linguagem." (Alex Martelli, 2000)
Essa técnica consiste em não verificar se uma classe é, por exemplo, uma sequência e sim se ela se comporta como uma sequência.
É importante notar que, como os protocolos são informais e não impostos. Geralmente você pode implementar somente a parte de um protocolo que faz sentido a sua aplicação sem que haja problemas. Por exemplo, para dar suporte a iteração é necessário implementar somente o método __getitem__ e não é necessário o __len__
Agora que sabemos como funcionam os protocolos em Python, vamos falar sobre o protocolo de sequência.
O python data model define sequências como conjuntos finitos indexados por números não negativos. Sendo n o tamanho da sequência, os índices vão de 0 a n - 1 e são acessados por a[i].
Falaremos mais sobre o protocolo de sequência futuramente. Caso queira entender mais sobre o assunto consulte sua documentação.
Emulando tipos numéricos
Vamos ver como utilizar os métodos especiais para emular tipos numéricos.
O python data model diz que números são criados por números declará-los em sua forma literal (como por exemplo a = 3, 3.4 etc.) e resultados de operações aritméticos e funções aritméticas embutidas.
Implementaremos uma classe para representar vetores bidimensionais (vetores euclidianos) usados na matemática e na física.
<img src="img/vetor.jpg" width="500">
End of explanation
v1 + v2
Explanation: Podemos somar vetores usando o operador +:
End of explanation
v1 - v2
Explanation: Usar o operador de subtração:
End of explanation
v1 * 3
v2 * -4
Explanation: Multiplicação por escalar:
End of explanation
abs(v2)
Explanation: Valor absoluto (distância do vetor até a origem):
End of explanation
v1 == v2
v1 == Vetor(1, -2)
v2 == Vetor(3, 4)
Explanation: Comparação de vetores por valor:
End of explanation
if v1:
print('v1 existe e possui valor')
if not Vetor(0, 0):
print('vetor não possui valor')
else:
print('alguma coisa deu errado')
Explanation: Podemos fazer verificações booleanas com o vetor:
End of explanation
v1
v1 * 5
5 * v1
Explanation: Esse exemplo usou a classe Vetor demonstrada a seguir, que implementa as operações demonstradas por meio dos métodos especiais __repr__, __abs__, __add__, __bool__, __eq__, __sub__ e __mul__:
```py
Arquivo: 02-python-oo/aula-03/exemplos/vetor.py
Implementa um vetor bidimensional
import math
class Vetor:
def init(self, x=0, y=0):
self.x = x
self.y = y
def __repr__(self):
return 'Vetor({!r}, {!r})'.format(self.x, self.y)
def __abs__(self):
return math.hypot(self.x, self.y)
def __add__(self, v2):
return Vetor(self.x + v2.x, self.y + v2.y)
def __bool__(self):
return bool(self.x or self.y)
def __eq__(self, v2):
return self.x == v2.x and self.y == v2.y
def __sub__(self, v2):
return Vetor(self.x - v2.x, self.y - v2.y)
def __mul__(self, scalar):
return Vetor(self.x * scalar, self.y * scalar)
```
O método __repr__ é responsável por retornar a representação do objeto para inspeção. Esse valor é usado no modo interativo e em debugers. Caso esse método não seja sobrescrito será exibido algo como <Vetor object at 0x123e9230>.
A representação do objeto é obtido a partir da função embutida repr(). É uma boa prática usar !r para obter a representação dos atributos do objeto, pois mostra a diferença fundamental entre Vector(1, 2) e Vector('1', '2') - a última não funcionará, pois os argumentos do construtor devem ser número e não str.
Também há o método __str__ que é utilizado para exibir o valor do objeto para o usuário final. Para entender melhor a diferença consulte esta thread do stack overflow que foi muito bem respondida pelos pythonistas Alex Martelli e Martijn Peters.
Esse exemplo contém alguns problemas:
End of explanation
from exemplos.vetor2 import Vetor
Vetor(1, 2) * Vetor(2, 4)
Explanation: No exemplo anterior tentamos multiplicar um int por um Vetor, porém foi levantada uma exceção já que o tipo int não sabe multiplicar por Vetor. Apenas Vetor sabe multiplicar por escalar.
Para resolver esse problema precisamos antes entender como funciona x * y:
1. Se x tiver x.__mul__, chama x.__mul__(y) e devolve o resultado a menos que seja NotImplemented
2. Se x não tiver x.__mul__, ou sua chamada devolver NotImplemented, verifica se y possui __rmul__, chama y.__rmul__(x) e devolve o resultado, a menos que seja NotImplemented
3. Se y não tiver __rmul__, ou sua chamada devolver NotImplented, levanta TypeError com uma mensagem unsupported operand type(s)
O método __rmul__ é chamado de versão refletida, reversa ou direita (do inglês right) de __mul__.
Para corrigir precisamos adicionar o método __rmul__ à clase Vetor:
```py
import math
class Vetor:
...
def __mul__(self, escalar):
return Vetor(self.x * escalar, self.y * escalar)
def __rmul__(self, outro):
return self * outro
```
Porém, ao adicionar esse código acontece outro problema:
End of explanation
from exemplos.vetor1 import Vetor
Vetor(1, 3) == [1, 2]
Vetor(2, 4) == 'oi'
Explanation: Esse resultado não faz sentido, não é assim que multiplicação de vetores funciona.
Não vamos implementar aqui a multiplicação de vetores, pois o foco da aula é ensinar programação e não matemática. Portanto, precisamos permitir a multiplicação de vetores apenas por escalares, vamos corrigir a função __mul__:
```py
import math
from numbers import Number
class Vetor:
...
def __mul__(self, escalar):
if isinstance(escalar, Real):
return Vetor(self.x * escalar, self.y * escalar)
else:
return NotImplemented
```
Verificamos se o escalar recebido de fato é um número real, se for retornamos o resultado da multiplicação do vetor pelo escalar, caso contrário é retornado NotImplemented.
Retornamos NotImplemented ao invés de levantar uma exceção, para permitir que o Python tente executar __rmul__ no escalar, pois pode ser que seja algum tipo que implemente a operação reversa da multiplicação.
Também há um problema com a comparação de valores quando comparamos vetores com outros tipos:
End of explanation
from exemplos.vetor2 import Vetor
Vetor(1, 3) == 8
Vetor(-2, 3) == [1, 2, 3]
Explanation: Essa comparação deveria retornar False, não levantar uma exceção. Podemos corrigir esse problema da seguinte maneira:
```py
import math
from numbers import Number
class Vetor:
...
def __eq__(self, outro):
if isinstance(outro, Vetor):
return self.x == outro.x and self.y == outro.y
else:
return NotImplemented
```
Agora podemos comparar Vetor com outros tipos:
End of explanation
from exemplos.vetor1 import Vetor
-Vetor(1, 5)
+Vetor(2, 3)
Explanation: Nosso vetor ainda não suporta operações unárias como -v e +v:
End of explanation
v = Vetor(3, -1)
x, y = v
x, y
Explanation: Para que esses operadores funcionem precisamos definir os métodos __neg__ e __pos__:
```py
from numbers import Real
import math
class Vetor:
...
def __neg__(self):
return self * -1
def __pos__(self):
return self
```
A função __neg__ simplesmente retornou o vetor por -1. Já a função __pos__ retorna a própria instância, pois +Vetor(x, y) é sempre igual a ele mesmo Vetor(x, y).
Seria interessante se pudessemos desempacotar os valores de x e de y de um vetor para uma tupla. Isso facilitaria nossa vida, pois poderiamos fazer isso:
End of explanation
x = v.x
y = v.y
x, y
Explanation: Ao invés disso:
End of explanation
from random import randint
lista_vetores = [Vetor(x=randint(-10, 10), y=randint(-10, 10)) for _ in range(5)]
lista_vetores
Explanation: O desempacotamento facilita ainda mais nossa vida se tivessemos uma lista de vetores:
End of explanation
for x, y in lista_vetores:
print(x, y)
Explanation: Pois poderiamos ter acesso facilitado a x e y durante uma iteração usando o desempacotamento de sequências:
End of explanation
for vetor in lista_vetores:
print(vetor.x, vetor.y)
Explanation: Ao invés de ter que acessar os atributos diretamente:
End of explanation
(a, b) = (1, 0)
a, b
[a, b] = [3, 4]
a, b
Explanation: Antes de implementar essa funcionalidade precisamos entender como funciona o desempacotamento de sequências: o objeto a direita é iterado e cada variável a esquerda é atribuída ao item resultante dessa iteração.
End of explanation
iterador = iter([3, 4])
Explanation: O que acontece por trás de tudo isso é: Extraímos o iterador da sequência a direita:
End of explanation
a = next(iterador)
Explanation: Ele é iterado uma vez e o resultado da iteração é atribuído a primeira variável:
End of explanation
b = next(iterador)
a, b
Explanation: E é iterado até chegar ao último elemento:
End of explanation
from exemplos.vetor2 import Vetor
v = Vetor(1, 2)
v, id(v)
Explanation: Agora que sabemos disso fica claro que, para nosso vetor ser desempacotado precisamos torná-lo iterável. Para isso podemos definir o método __iter__ que deve retornar um iterador:
```py
...
class Vetor:
...
def __iter__(self):
return iter((self.x, self.y))
```
Nesse método definimos uma tupla composta pelos atributos x e y da instância do Vetor e retornamos o iterador da dessa tupla.
Essa implementação funciona, porém podemos usar geradores para deixar esse método mais simples e eficiente:
```py
...
class Vetor:
...
def __iter__(self):
yield self.x; yield self.y
```
Para entender essa implementação é necessário conhecer o funcionamento de geradores, que veremos numa aula futura.
As operações que implementamos para nosso vetor não o alteram, mesmo quando usamos operadores acumulados:
End of explanation
v += Vetor(2, 3)
v, id(v)
Explanation: Se realizarmos uma soma acumulada com outro vetor:
End of explanation
lista = [1, 2, 3, 4]
lista, id(lista)
lista += [5, 6, 7, 8]
lista, id(lista)
Explanation: Um novo objeto é criado, pois o objeto referenciado pela variável v não é mais o mesmo (a identidade dos objetos são diferentes).
Para que essas operações de fato modifiquem um objeto, como acontecem com objetos mutáveis:
End of explanation
from exemplos.vetor2 import VetorMutavel
vm = VetorMutavel(2, 3)
vm, id(vm)
vm += VetorMutavel(-1, 4)
vm, id(vm)
vm *= -2
vm, id(vm)
Explanation: O objeto permanece o mesmo, seu valor que é alterado.
Matemáticamente não faz muito sentido ter um vetor mutável, mas para entendermos melhor esses conceitos vamos fazer um VetorMutável, como subclasse de Vetor, que altere o valor do vetor quanto as operações += e *= forem usadas implementando os métodos __iadd__ e __imul__:
End of explanation
def chamavel():
print('posso ser chamado')
chamavel()
type(chamavel)
Explanation: Essa classe VetorMutavel pode ser implementada da seguinte maneira:
```py
class VetorMutavel(Vetor):
def __iadd__(self, outro):
if isinstance(outro, Vetor):
self.x += outro.x
self.y += outro.y
return self
return NotImplemented
def __imul__(self, outro):
if isinstance(outro, Real):
self.x *= outro
self.y *= outro
return self
return NotImplemented
```
Emulando tipos chamáveis
Chamáveis são os tipos que podem ser chamados ou invocados quando escrevemos seu nome seguido por parentesis e podem receber argumentos. São chamáveis:
Funções:
End of explanation
class Foo:
def bar(self):
print('também posso ser chamado!')
Explanation: Classes
End of explanation
foo = Foo()
Explanation: Quando classes são chamadas retornam instâncias:
End of explanation
foo.bar()
Explanation: Métodos de instâncias
End of explanation
def gen():
yield 1
Explanation: Geradores:
End of explanation
gen()
Explanation: Chamar geradores retorna objetos geradores que executam o código definido
End of explanation
g = gen()
next(g)
Explanation: Para acessar o conteúdo do gerador precisamos iterá-lo, para isso podemos usar a função embutida next() :
End of explanation
next(g)
Explanation: Porém se requisitamos mais valores de um gerador que ele pode gerar uma exceção é levantada:
End of explanation
import random
class Tombola:
def __init__(self, itens=None):
self._itens = []
self.carrega(itens)
def __call__(self):
return self.sorteia()
def carrega(self, itens):
self._itens.extend(itens)
def inspeciona(self):
return tuple(self._itens)
def mistura(self):
random.shuffle(self._itens)
def sorteia(self):
return self._itens.pop()
def vazia(self):
return len(self._itens) == 0
Explanation: Veremos mais sobre geradores nas próximas aulas. Para saber mais sobre chamáveis consulte o python data model e como chamáveis são expressados
Objetos
Por fim, objetos que definem um método __call__ também são chamáveis.
Para demonstrar isso vamos implementar uma tombola (gaiola de bingo). A tombola pode:
- Carregar itens
- Inspecionar itens da tombola
- Verificar se está vazia
- Misturar itens
- Sortear um item
- Sortear um item chamando a instância da tombola
Vamos ao código:
End of explanation
tombola = Tombola(range(1, 21))
Explanation: Vamos criar nossa tombola que armazena os números de 1 a 20:
End of explanation
tombola.inspeciona()
Explanation: Verificaremos seus itens:
End of explanation
tombola.vazia()
Explanation: Está vazia?
End of explanation
tombola.mistura()
tombola.inspeciona()
Explanation: Misturando:
End of explanation
tombola.sorteia()
Explanation: Sorteando um item da maneira clássica:
End of explanation
tombola()
Explanation: Aproveitando o método __call__ que definimos podemos sortear chamado o objeto tombola sem chamar o método tombola.sorteia():
End of explanation
lista = [1, 2, 3, 4]
lista
lista + [5, 6, 7, 8]
[-4, -3, -2, -1, 0] + lista
pal = "palavra"
pal
pal + '!!!1!1!11onze!!!1!'
Explanation: Os operadores "aritméticos" (como +, * etc.) também podem ser aplicados a outros objetos para realizar operações que fazem sentido a esse objetos. Como por exemplo em listas e strings, em que o operador + realiza a concatenação:
End of explanation
class TombolaExpansivel(Tombola):
def __add__(self, other):
if isinstance(other, Tombola):
return TombolaExpansivel(self.inspeciona() + other.inspeciona())
else:
return NotImplemented
Explanation: Para mostrar como isso funciona vamos implementar uma tombola expansível que torne possível juntar os itens dessa tombola com outra tombola ou um iterável.
Vamos definir o método __add__ para permitir a "soma" de tombolas:
End of explanation
tombola_exp = TombolaExpansivel(range(1, 11))
tombola_exp.inspeciona()
Explanation: Na linha 3 verificamos se o objeto somado é uma instância de Tombola, isso permite que nossa TombolaExpansivel seja somada com Tombola e todas suas subclasses:
End of explanation
outra_tombola = tombola_exp + Tombola(range(11, 21))
outra_tombola.inspeciona()
mais_tombolas = tombola_exp + TombolaExpansivel(range(11, 16))
mais_tombolas.inspeciona()
Explanation: Podemos somar a instância de TombolaExpansivel com Tombola e suas subclasses:
End of explanation
id(tombola_exp), tombola_exp.inspeciona()
tombola_exp += Tombola(range(11, 16))
id(tombola_exp), tombola_exp.inspeciona()
Explanation: Sobrescrevendo o método __add__ já é possível usar a soma atribuída, porém haverá um problema indesejado:
End of explanation
tombola_exp = tombola_exp + Tombola(range(16, 21))
tombola_exp.inspeciona()
Explanation: Vemos que as identidades dos objetos atribuidos a tombola_exp são diferentes. Isso por que a atribuição acumulada, por padrão, na verdade faz:
End of explanation
class TombolaExpansivel(Tombola):
def __add__(self, other):
if isinstance(other, Tombola):
return TombolaExpansivel(self.inspeciona() + other.inspeciona())
else:
return NotImplemented
def __iadd__(self, outro):
if isinstance(outro, Tombola):
outro_iteravel = outro.inspeciona()
else:
try:
outro_iteravel = iter(outro)
except TypeError:
msg = "operando da direita no += deve ser {!r} ou um iterável"
raise TypeError(msg.format(type(self).__name__))
self.carrega(outro_iteravel)
return self
Explanation: E nossa função o __add__ cria um novo objeto. Como queremos que nossa TombolaExpansivel seja mutável, precisamos definir o método __iadd__ para modificar a instância.
Aproveitando que vamos mexer no __iadd__ podemos melhorar nossa tombola para receber item de qualquer iterável e não somente de Tombola e suas subclasses:
End of explanation
tombola_exp = TombolaExpansivel(range(-10, 1, 1))
id(tombola_exp), tombola_exp.inspeciona()
tombola_exp += [1, 2, 3, 4]
id(tombola_exp), tombola_exp.inspeciona()
Explanation: Linha 9 e 10: se o objeto à direita for uma tombola inspecionamos e "pegamos" seus itens
Linha 12 e 13: tenta extrair um iterável do objeto a direita, isso funcionará se este objeto for iterável, se não uma exceção do tipo TypeError é levantada.
Linha 14, 15 e 16: Se for levantada uma exceção TypeError é criada uma outra exceção do tipo TypeError, porém com uma mensagem de erro mais clara.
Linha 17: carrega os próprios itens e da outra tupla.
Agora podemos, de fato, modificar nossa TombolaExpansível:
End of explanation |
350 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Verifying Non-Uniformity of Subvolumes
Here, I sample subvolumes of a predetermined size, count the synapse contents, and then plot that distribution in order to show that the synapses are not uniformly distributed.
If they are uniformly distributed, then the graph of x-center × y-center × count should be a perfectly straight line.
Step1: Import data
Step2: Randomly select SUBV_COUNT subvolumes (of size SUBV_SIZE) from the larger volume. Count their contents (sum), and plot x-origin, y-origin, and count (x,y,size).
Step3: From this alone, we can see that the data are nonuniformly distributed. Now let us plot this to characterize the distribution
Step4: As this 2D histogram above shows, the synapses are not distributed uniformly in XZ-space.
We know already that they are not distributed evenly across the y-axis, as XY and YZ graphs demonstrate below | Python Code:
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
Explanation: Verifying Non-Uniformity of Subvolumes
Here, I sample subvolumes of a predetermined size, count the synapse contents, and then plot that distribution in order to show that the synapses are not uniformly distributed.
If they are uniformly distributed, then the graph of x-center × y-center × count should be a perfectly straight line.
End of explanation
import csv
data = open('../data/data.csv', 'r').readlines()
fieldnames = ['x', 'y', 'z', 'unmasked', 'synapses']
reader = csv.reader(data)
reader.next()
rows = [[int(col) for col in row] for row in reader]
sorted_x = sorted(list(set([r[0] for r in rows])))
sorted_y = sorted(list(set([r[1] for r in rows])))
sorted_z = sorted(list(set([r[2] for r in rows])))
vol = np.zeros((len(sorted_x), len(sorted_y), len(sorted_z)))
for r in rows:
vol[sorted_x.index(r[0]), sorted_y.index(r[1]), sorted_z.index(r[2])] = r[-1]
SUBV_SIZE = (10, 10, 5)
SUBV_COUNT = 500
MGN = 15 # margin
Explanation: Import data:
End of explanation
import random
print vol.shape
sample_vol = vol[MGN : -MGN, MGN : -MGN, :]
subvs = []
for i in range(SUBV_COUNT):
x_origin = random.randint(0, sample_vol.shape[0] - SUBV_SIZE[0])
y_origin = random.randint(0, sample_vol.shape[1] - SUBV_SIZE[1])
z_origin = random.randint(0, sample_vol.shape[2] - SUBV_SIZE[2])
subv = sample_vol[
x_origin : x_origin + SUBV_SIZE[0],
y_origin : y_origin + SUBV_SIZE[1],
z_origin : z_origin + SUBV_SIZE[2]
]
subvs.append((x_origin, y_origin, z_origin, np.sum(subv)))
plt.scatter(x=[s[0] for s in subvs], y=[s[1] for s in subvs], c=[s[3]/400 for s in subvs])
plt.xlabel("Dataset x Axis")
plt.ylabel("Dataset y Axis")
plt.suptitle("Synapse count of subvolumes randomly selected across cortex", fontsize="14")
Explanation: Randomly select SUBV_COUNT subvolumes (of size SUBV_SIZE) from the larger volume. Count their contents (sum), and plot x-origin, y-origin, and count (x,y,size).
End of explanation
plt.hist([s[3]/10000 for s in subvs])#, y=[s[1] for s in subvs])
plt.xlabel("Synapse Count in (10x10x10) Supervoxel (x10,000)")
plt.ylabel("Number of supervoxels")
plt.suptitle("Synapse count in randomly selected supervoxels follows nonuniform distribution", fontsize="14")
plt.hist2d([s[0] for s in subvs], y=[s[2] for s in subvs])
plt.xlabel("Data x")
plt.ylabel("Data z")
plt.suptitle("Relative synapse sensities are not distributed uniformly over x/z", fontsize=16)
Explanation: From this alone, we can see that the data are nonuniformly distributed. Now let us plot this to characterize the distribution:
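As an optional quantitative complement (my addition; assumes scipy is available), a chi-square test on the sampled subvolume counts makes the same point numerically:

```python
from scipy.stats import chisquare

counts = [s[3] for s in subvs]
stat, p = chisquare(counts)   # H0: every subvolume has the same expected synapse count
print(stat, p)                # a tiny p-value is evidence against uniformity
```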
End of explanation
plt.hist2d([s[0] for s in subvs], y=[s[1] for s in subvs])
plt.xlabel("Data x")
plt.ylabel("Data y")
plt.suptitle("Relative synapse sensities are not distributed uniformly over x/y", fontsize=16)
plt.hist2d([s[1] for s in subvs], y=[s[2] for s in subvs])
plt.xlabel("Data y")
plt.ylabel("Data z")
plt.suptitle("Relative synapse sensities are not distributed uniformly over y/z", fontsize=16)
Explanation: As this 2D histogram above shows, the synapses are not distributed uniformly in XZ-space.
We know already that they are not distributed evenly across the y-axis, as XY and YZ graphs demonstrate below:
End of explanation |
351 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Matplotlib Exercise 1
Imports
Step1: Line plot of sunspot data
Download the .txt data for the "Yearly mean total sunspot number [1700 - now]" from the SILSO website. Upload the file to the same directory as this notebook.
Step2: Use np.loadtxt to read the data into a NumPy array called data. Then create two new 1d NumPy arrays named years and ssc that have the sequence of year and sunspot counts.
Step3: Make a line plot showing the sunspot count as a function of year.
Customize your plot to follow Tufte's principles of visualizations.
Adjust the aspect ratio/size so that the steepest slope in your plot is approximately 1.
Customize the box, grid, spines and ticks to match the requirements of this data.
Step4: Describe the choices you have made in building this visualization and how they make it effective.
YOUR ANSWER HERE
Now make 4 subplots, one for each century in the data set. This approach works well for this dataset as it allows you to maintain mild slopes while limiting the overall width of the visualization. Perform similar customizations as above | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
Explanation: Matplotlib Exercise 1
Imports
End of explanation
import os
assert os.path.isfile('yearssn.dat')
Explanation: Line plot of sunspot data
Download the .txt data for the "Yearly mean total sunspot number [1700 - now]" from the SILSO website. Upload the file to the same directory as this notebook.
End of explanation
data = np.loadtxt("yearssn.dat")
a= np.array(data)
a
years = a[:,0]
years
ssc = a[:,1]
ssc
assert len(years)==315
assert years.dtype==np.dtype(float)
assert len(ssc)==315
assert ssc.dtype==np.dtype(float)
Explanation: Use np.loadtxt to read the data into a NumPy array called data. Then create two new 1d NumPy arrays named years and ssc that have the sequence of year and sunspot counts.
End of explanation
plt.figure(figsize=(10, 8))  # figure size must be set when creating the figure; plt.figsize= has no effect
plt.plot(years, ssc)
plt.xlim(1700, 2015)  # plot is scaled from 1700 to 2015 so that the data fill the graph
assert True # leave for grading
Explanation: Make a line plot showing the sunspot count as a function of year.
Customize your plot to follow Tufte's principles of visualizations.
Adjust the aspect ratio/size so that the steepest slope in your plot is approximately 1.
Customize the box, grid, spines and ticks to match the requirements of this data.
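One possible way to handle the spine/tick cleanup the exercise asks for (a sketch, not the only acceptable answer):

```python
ax = plt.gca()
for side in ("top", "right"):
    ax.spines[side].set_visible(False)   # drop the box on the top and right
ax.xaxis.set_ticks_position("bottom")
ax.yaxis.set_ticks_position("left")
ax.grid(False)
```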
End of explanation
fig, axes = plt.subplots(2, 2, figsize=(12, 6))
for i, ax in enumerate(axes.flat):
    start, end = 1700 + 100 * i, 1800 + 100 * i
    century = (years >= start) & (years < end)
    ax.plot(years[century], ssc[century])
    ax.set_title("{}-{}".format(start, end - 1))
plt.tight_layout()
assert True # leave for grading
Explanation: Describe the choices you have made in building this visualization and how they make it effective.
YOUR ANSWER HERE
Now make 4 subplots, one for each century in the data set. This approach works well for this dataset as it allows you to maintain mild slopes while limiting the overall width of the visualization. Perform similar customizations as above:
Customize your plot to follow Tufte's principles of visualizations.
Adjust the aspect ratio/size so that the steepest slope in your plot is approximately 1.
Customize the box, grid, spines and ticks to match the requirements of this data.
End of explanation |
352 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
Preparation
Import packages
Step1: Block the output of all cores except for one
Step2: Define an md.export_cfg object
md.export_cfg has a call method that we can use to create quick snapshots of our simulation box
Step3: Asymptotic Displacement Field of Crack from Linear Elasticity
Step4: Configuration
Create a $[\bar{1}10]\times\frac{1}{2}[111]\times[11\bar{2}]$ cell
start with a $[100]\times[010]\times[001]$ cell
Step5: Create a $[\bar{1}10]\times[111]\times[11\bar{2}]$ cell
Step6: Remove half of the atoms and readjust the positions of the remaining atoms
Now one needs to cut the cell in half in the $[111]$ direction. We can achieve this in three steps
Step7: Readjust the positions
Step8: Replicating the unit cell
Step9: Add vacuum
Step10: Get the displacement field for this configuration
Step11: Impose the displacement field and other boundary conditions
Step12: assign initial velocities
Step13: add hydrogen to the system
Step14: define ensemble
muvt
Step15: run gcmc | Python Code:
import numpy as np
import matplotlib.pyplot as plt
import mapp4py
from mapp4py import md
from lib.elasticity import rot, cubic, resize, displace, crack
Explanation: Introduction
Preparation
Import packages
End of explanation
from mapp4py import mpi
if mpi().rank!=0:
with open(os.devnull, 'w') as f:
sys.stdout = f;
Explanation: Block the output of all cores except for one
End of explanation
xprt = md.export_cfg("");
Explanation: Define an md.export_cfg object
md.export_cfg has a call method that we can use to create quick snapshots of our simulation box
End of explanation
_ = np.array([[-1,1,0],[1,1,1],[1,1,-2]], dtype=np.float);
Q = np.linalg.inv(np.sqrt(_ @ _.T)) @ _;
C = rot(cubic(1.3967587463636366,0.787341583191591,0.609615090769241),Q)
B = np.linalg.inv(
np.array([
[C[0, 0, 0, 0], C[0, 0, 1, 1], C[0, 0, 0, 1]],
[C[0, 0, 1, 1], C[1, 1, 1, 1], C[1, 1, 0, 1]],
[C[0, 0, 0, 1], C[1, 1, 0, 1], C[0, 1, 0, 1]]
]
))
_ = np.roots([B[0, 0], -2.0*B[0, 2],2.0*B[0, 1]+B[2, 2], -2.0*B[1, 2], B[1, 1]])
mu = np.array([_[0],0.0]);
if np.absolute(np.conjugate(mu[0]) - _[1]) > 1.0e-12:
mu[1] = _[1];
else:
mu[1] = _[2]
alpha = np.real(mu);
beta = np.imag(mu);
p = B[0,0] * mu**2 - B[0,2] * mu + B[0, 1]
q = B[0,1] * mu - B[0, 2] + B[1, 1]/ mu
K = np.stack([p, q]) * np.array([mu[1], mu[0]]) / (mu[1] - mu[0])  # bracket the swapped roots so np.array builds a 2-vector instead of treating mu[0] as a dtype
K_r = np.real(K)
K_i = np.imag(K)
Tr = np.stack([
np.array(np.array([[1.0, alpha[0]], [0.0, beta[0]]])),
np.array([[1.0, alpha[1]], [0.0, beta[1]]])
], axis=1)
def u_f0(x): return np.sqrt(np.sqrt(x[0] * x[0] + x[1] * x[1]) + x[0])
def u_f1(x): return np.sqrt(np.sqrt(x[0] * x[0] + x[1] * x[1]) - x[0]) * np.sign(x[1])
def disp(x):
_ = Tr @ x
return K_r @ u_f0(_) + K_i @ u_f1(_)
n = 300;
r = 10;
disp_scale = 0.3;
n0 = int(np.round(n/ (1 +np.pi), ))
n1 = n - n0
xs = np.concatenate((
np.stack([np.linspace(0, -r , n0), np.full((n0,), -1.e-8)]),
r * np.stack([np.cos(np.linspace(-np.pi, np.pi , n1)),np.sin(np.linspace(-np.pi, np.pi , n1))]),
np.stack([np.linspace(-r, 0 , n0), np.full((n0,), 1.e-8)]),
), axis =1)
xs_def = xs + disp_scale * disp(xs)
fig, ax = plt.subplots(figsize=(10.5,5), ncols = 2)
ax[0].plot(xs[0], xs[1], "b-", label="non-deformed");
ax[1].plot(xs_def[0], xs_def[1], "r-.", label="deformed");
Explanation: Asymptotic Displacement Field of Crack from Linear Elasticity
End of explanation
sim = md.atoms.import_cfg("configs/Fe_300K.cfg");
a = sim.H[0][0]
Explanation: Configuration
Create a $[\bar{1}10]\times\frac{1}{2}[111]\times[11\bar{2}]$ cell
start with a $[100]\times[010]\times[001]$ cell
End of explanation
sim.cell_change([[-1,1,0],[1,1,1],[1,1,-2]])
Explanation: Create a $[\bar{1}10]\times[111]\times[11\bar{2}]$ cell
End of explanation
H = np.array(sim.H);
def _(x):
if x[1] > 0.5*H[1, 1] - 1.0e-8:
return False;
else:
x[1] *= 2.0;
sim.do(_);
_ = np.full((3,3), 0.0)
_[1, 1] = -0.5
sim.strain(_)
Explanation: Remove half of the atoms and readjust the position of remaining
Now one needs to cut the cell in half in $[111]$ direction. We can achieve this in three steps:
Remove the atoms that are located above $\frac{1}{2}[111]$
Double the position of the remaining atoms in the said direction
Shrink the box affinely to half in that direction
End of explanation
H = np.array(sim.H);
displace(sim,np.array([sim.H[0][0]/6.0, sim.H[1][1]/6.0, sim.H[2][2]/6.0]))
Explanation: Readjust the positions
End of explanation
max_natms=100000
H=np.array(sim.H);
n_per_area=sim.natms/(H[0,0] * H[1,1]);
_ =np.sqrt(max_natms/n_per_area);
N0 = np.array([
np.around(_ / sim.H[0][0]),
np.around(_ / sim.H[1][1]),
1], dtype=np.int32)
# make sure in 1 direction it is an even number
if N0[1] % 2 == 1:
N0[1] += 1
sim *= N0;
Explanation: Replicating the unit cell
End of explanation
vaccum = 100.0
H = np.array(sim.H);
H_new = np.array(sim.H);
H_new[0][0] += vaccum
H_new[1][1] += vaccum
resize(sim, H_new, H.sum(axis=0) * 0.5)
Explanation: Add vacuum
End of explanation
_ = np.array([[-1,1,0],[1,1,1],[1,1,-2]], dtype=np.float);
Q = np.linalg.inv(np.sqrt(_ @ _.T)) @ _;
C = rot(cubic(1.3967587463636366,0.787341583191591,0.609615090769241),Q)
disp = crack(C)
Explanation: Get the displacement field for this configuration
End of explanation
fixed_layer_thickness = 20.0
intensity = 0.5
rate = 0.001
H = np.array(sim.H);
ctr = H.sum(axis=0) * 0.5
lim = np.array([H[0, 0], H[1, 1]])
lim -= vaccum;
lim *= 0.5
lim -= fixed_layer_thickness
def _(x, x_d, x_dof):
x_rel = x[:2] - ctr[:2]
u = disp(x_rel)
x[:2] += intensity * u
if (np.abs(x_rel) < lim).sum() != 2:
x_d[:2] = rate * u
x_dof[0] = False;
x_dof[1] = False;
sim.do(_)
md.export_cfg("", extra_vecs=["x_dof"] )(sim, "dumps/crack.cfg")
Explanation: Impose the displacement field and other boundary conditions
End of explanation
sim.kB = 8.617330350e-5
sim.hP = 4.13566766225 * 0.1 * np.sqrt(1.60217656535/1.66053904020)
sim.create_temp(300.0, 846244)
Explanation: assign initial velocities
End of explanation
sim.add_elem('H',1.007940)
Explanation: add hydrogen to the system
End of explanation
# GPa and Kelvin
def mu(p,T):
return -2.37+0.0011237850013293155*T+0.00004308665175*T*np.log(p)-0.000193889932875*T*np.log(T);
muvt = md.muvt(mu(1.0e-3,300.0), 300.0, 0.1, 'H', 73108204);
muvt.nevery = 100;
muvt.nattempts=40000;
muvt.ntally=1000;
muvt.export=md.export_cfg('dumps/dump',10000)
Explanation: define ensemble
muvt
End of explanation
muvt.run(sim,100000);
Explanation: run gcmc
End of explanation |
353 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
This question may not be clear, so please ask for clarification in the comments and I will expand. | Problem:
import numpy as np
import pandas as pd
import torch
mask, clean_input_spectrogram, output= load_data()
for i in range(len(mask[0])):
if mask[0][i] == 1:
mask[0][i] = 0
else:
mask[0][i] = 1
output[:, mask[0].to(torch.bool), :] = clean_input_spectrogram[:, mask[0].to(torch.bool), :] |
354 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Benchmarking Thinc layers with a custom benchmark layer
This notebook shows how to write a benchmark layer that can wrap any layer(s) in your network and that logs the execution times of the initialization, forward pass and backward pass. The benchmark layer can also be mapped to an operator like @ to make it easy to add debugging to your network.
Step1: To log the results, we first set up a custom logger using Python's logging module. You could also just print the stats instead, but using logging is cleaner, since it lets other users modify the logger's behavior more easily, and separates the logs from other output and write it to a file (e.g. if you're benchmarking several layers during training). The following logging config will output the date and time, the name of the logger and the logged results.
Step2: Here's a minimalistic time logger that can be initialized with the name of a given layer, and can track several events (e.g. "forward" and "backward"). When the TimeLogger.end method is called, the output is formatted nicely and the elapsed time is logged with the logger name and colored label.
Step3: The benchmark layer now has to wrap the forward pass, backward pass and initialization of the layer it wraps and log the execution times. It then returns a Thinc model instance with the custom forward function and a custom init function. We'll also allow setting a custom name to make it easier to tell multiple wrapped benchmark layers apart.
Step4: Usage examples
Using the benchmark layer as a function
We can now wrap one or more layers (including nested layers) with the benchmark function. This is the original model
Step5: Using the benchmark layer as an operator
Alternatively, we can also use Model.define_operators to map benchmark to an operator like @. The left argument of the operator is the first argument passed into the function (the layer) and the right argument is the second argument (the name). The following example wraps the whole network (two chained Linear layers) in a benchmark layer named "outer", and the first Linear layer in a benchmark layer named "first".
Step6: Using the benchmark layer during training | Python Code:
!pip install "thinc>=8.0.0a0"
Explanation: Benchmarking Thinc layers with a custom benchmark layer
This notebook shows how to write a benchmark layer that can wrap any layer(s) in your network and that logs the execution times of the initialization, forward pass and backward pass. The benchmark layer can also be mapped to an operator like @ to make it easy to add debugging to your network.
End of explanation
import logging
logger = logging.getLogger("thinc:benchmark")
if not logger.hasHandlers(): # prevent Jupyter from adding multiple loggers
formatter = logging.Formatter('%(asctime)s %(name)s %(message)s', datefmt="%Y-%m-%d %H:%M:%S")
handler = logging.StreamHandler()
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)
Explanation: To log the results, we first set up a custom logger using Python's logging module. You could also just print the stats instead, but using logging is cleaner, since it lets other users modify the logger's behavior more easily, and separates the logs from other output and write it to a file (e.g. if you're benchmarking several layers during training). The following logging config will output the date and time, the name of the logger and the logged results.
End of explanation
from timeit import default_timer
from wasabi import color
class TimeLogger:
def __init__(self, name):
self.colors = {"forward": "green", "backward": "blue"}
self.name = name
self.timers = {}
def start(self, name):
self.timers[name] = default_timer()
def end(self, name):
result = default_timer() - self.timers[name]
label = f"{name.upper():<8}"
label = color(label, self.colors.get(name), bold=True)
logger.debug(f"{self.name:<12} | {label} | {result:.6f}")
Explanation: Here's a minimalistic time logger that can be initialized with the name of a given layer, and can track several events (e.g. "forward" and "backward"). When the TimeLogger.end method is called, the output is formatted nicely and the elapsed time is logged with the logger name and colored label.
End of explanation
from thinc.api import Model
def benchmark(layer, name=None):
name = name if name is not None else layer.name
t = TimeLogger(name)
def init(model, X, Y):
t.start("init")
result = layer.initialize(X, Y)
t.end("init")
return result
def forward(model, X, is_train):
t.start("forward")
layer_Y, layer_callback = layer(X, is_train=is_train)
t.end("forward")
def backprop(dY):
t.start("backward")
result = layer_callback(dY)
t.end("backward")
return result
return layer_Y, backprop
return Model(f"benchmark:{layer.name}", forward, init=init)
Explanation: The benchmark layer now has to wrap the forward pass, backward pass and initialization of the layer it wraps and log the execution times. It then returns a Thinc model instance with the custom forward function and a custom init function. We'll also allow setting a custom name to make it easier to tell multiple wrapped benchmark layers apart.
End of explanation
import numpy
from thinc.api import chain, Linear
X = numpy.zeros((1, 2), dtype="f")
model = benchmark(chain(benchmark(Linear(1)), Linear(1)), name="outer")
model.initialize(X=X)
Y, backprop = model(X, is_train=False)
dX = backprop(Y)
Explanation: Usage examples
Using the benchmark layer as a function
We can now wrap one or more layers (including nested layers) with the benchmark function. This is the original model:
python
model = chain(Linear(1), Linear(1))
End of explanation
from thinc.api import Model
with Model.define_operators({">>": chain, "@": benchmark}):
model = (Linear(1) @ "first" >> Linear(1)) @ "outer"
model.initialize(X=X)
Y, backprop = model(X, is_train=True)
dX = backprop(Y)
Explanation: Using the benchmark layer as an operator
Alternatively, we can also use Model.define_operators to map benchmark to an operator like @. The left argument of the operator is the first argument passed into the function (the layer) and the right argument is the second argument (the name). The following example wraps the whole network (two chained Linear layers) in a benchmark layer named "outer", and the first Linear layer in a benchmark layer named "first".
End of explanation
from thinc.api import Model, chain, Relu, Softmax, Adam
n_hidden = 32
dropout = 0.2
with Model.define_operators({">>": chain, "@": benchmark}):
model = (
Relu(nO=n_hidden, dropout=dropout) @ "relu1"
>> Relu(nO=n_hidden, dropout=dropout) @ "relu2"
>> Softmax()
)
train_X = numpy.zeros((5, 784), dtype="f")
train_Y = numpy.zeros((540, 10), dtype="f")
model.initialize(X=train_X[:5], Y=train_Y[:5])
optimizer = Adam(0.001)
for i in range(10):
for X, Y in model.ops.multibatch(8, train_X, train_Y, shuffle=True):
Yh, backprop = model.begin_update(X)
backprop(Yh - Y)
model.finish_update(optimizer)
Explanation: Using the benchmark layer during training
End of explanation |
355 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copula - Multivariate joint distribution
Step1: When modeling a system, there are often cases where multiple parameters are involved. Each of these parameters could be described with a given Probability Density Function (PDF). If we would like to be able to generate a new set of parameter values, we need to be able to sample from these distributions, also called marginals. There are mainly two cases
Step2: And we can sample the PDF.
Step3: Let's come back to our 2 variables for a second. In this case we consider them to be gamma and normally distributed. If they would be independent from each other, we could sample from each PDF individually. Here we use a convenient class to do the same operation.
Reproducibility
Generating reproducible random values from copulas requires explicitly setting the seed argument.
seed accepts either an initialized NumPy Generator or RandomState, or any argument acceptable
to np.random.default_rng, e.g., an integer or a sequence of integers. This example uses an
integer.
The singleton RandomState that is directly exposed in the np.random distributions is
not used, and setting np.random.seed has no effect on the values generated.
Step4: Now that we have expressed the dependency between our variables using a copula, we can use this copula to sample a new set of observations with the same convenient class.
Step5: There are two things to note here. (i) as in the independent case, the marginals are correctly showing a gamma and normal distribution; (ii) the dependence is visible between the two variables.
Estimating copula parameters
Now, imagine we already have experimental data and we know that there is a dependency that can be expressed using a Gumbel copula. But we don't know what is the hyperparameter value for our copula. In this case, we can estimate the value.
We are going to use the sample we just generated as we already know the value of the hyperparameter we should get | Python Code:
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from scipy import stats
sns.set_style("darkgrid")
sns.mpl.rc("figure", figsize=(8, 8))
%%javascript
IPython.OutputArea.prototype._should_scroll = function(lines) {
return false;
}
Explanation: Copula - Multivariate joint distribution
End of explanation
from statsmodels.distributions.copula.api import (
CopulaDistribution, GumbelCopula, IndependenceCopula)
copula = GumbelCopula(theta=2)
_ = copula.plot_pdf() # returns a matplotlib figure
Explanation: When modeling a system, there are often cases where multiple parameters are involved. Each of these parameters could be described with a given Probability Density Function (PDF). If we would like to be able to generate a new set of parameter values, we need to be able to sample from these distributions, also called marginals. There are mainly two cases: (i) PDFs are independent; (ii) there is a dependency. One way to model the dependency is to use a copula.
Sampling from a copula
Let's use a bi-variate example and assume first that we have a prior and know how to model the dependence between our 2 variables.
In this case, we are using the Gumbel copula and fix its hyperparameter theta=2. We can visualize its 2-dimensional PDF.
End of explanation
sample = copula.rvs(10000)
h = sns.jointplot(x=sample[:, 0], y=sample[:, 1], kind="hex")
_ = h.set_axis_labels("X1", "X2", fontsize=16)
Explanation: And we can sample the PDF.
End of explanation
marginals = [stats.gamma(2), stats.norm]
joint_dist = CopulaDistribution(copula=IndependenceCopula(), marginals=marginals)
sample = joint_dist.rvs(512, random_state=20210801)
h = sns.jointplot(x=sample[:, 0], y=sample[:, 1], kind="scatter")
_ = h.set_axis_labels("X1", "X2", fontsize=16)
Explanation: Let's come back to our 2 variables for a second. In this case we consider them to be gamma and normally distributed. If they would be independent from each other, we could sample from each PDF individually. Here we use a convenient class to do the same operation.
Reproducibility
Generating reproducible random values from copulas requires explicitly setting the seed argument.
seed accepts either an initialized NumPy Generator or RandomState, or any argument acceptable
to np.random.default_rng, e.g., an integer or a sequence of integers. This example uses an
integer.
The singleton RandomState that is directly exposed in the np.random distributions is
not used, and setting np.random.seed has no effect on the values generated.
End of explanation
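# For reference, two of the seed forms described above, assuming the same joint_dist
# and 512 draws: a plain integer (as used earlier) or an explicitly constructed Generator.
sample_int = joint_dist.rvs(512, random_state=20210801)
sample_gen = joint_dist.rvs(512, random_state=np.random.default_rng(20210801))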
joint_dist = CopulaDistribution(copula, marginals)
# Use an initialized Generator object
rng = np.random.default_rng([2, 0, 2, 1, 0, 8, 0, 1])
sample = joint_dist.rvs(512, random_state=rng)
h = sns.jointplot(x=sample[:, 0], y=sample[:, 1], kind="scatter")
_ = h.set_axis_labels("X1", "X2", fontsize=16)
Explanation: Now that we have expressed the dependency between our variables using a copula, we can use this copula to sample a new set of observations with the same convenient class.
End of explanation
copula = GumbelCopula()
theta = copula.fit_corr_param(sample)
print(theta)
Explanation: There are two things to note here. (i) as in the independent case, the marginals are correctly showing a gamma and normal distribution; (ii) the dependence is visible between the two variables.
Estimating copula parameters
Now, imagine we already have experimental data and we know that there is a dependency that can be expressed using a Gumbel copula. But we don't know what is the hyperparameter value for our copula. In this case, we can estimate the value.
We are going to use the sample we just generated as we already know the value of the hyperparameter we should get: theta=2.
End of explanation |
356 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Project 2
In this project, you will implement the exploratory analysis plan developed in Project 1. This will lay the groundwork for our first modeling exercise in Project 3.
Step 1
Step1: Step 2
Step2: Questions
Question 1. How many observations are in our dataset?
Step3: Answer
Step4: Question 3. Why would GRE have a larger STD than GPA?
Answer
Step5: Question 5. Confirm that you dropped the correct data. How can you tell?
Answer
Step6: Question 7. What do these plots show?
Answer
Step7: Question 9. If our model had an assumption of a normal distribution would we meet that requirement?
Answer
Step8: Question 12. What did you find?
Answer | Python Code:
#imports
from __future__ import division
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm
import pylab as pl
import numpy as np
%matplotlib inline
Explanation: Project 2
In this project, you will implement the exploratory analysis plan developed in Project 1. This will lay the groundwork for our first modeling exercise in Project 3.
Step 1: Load the python libraries you will need for this project
End of explanation
#Read in data from source
df_raw = pd.read_csv("../assets/admissions.csv")
print(df_raw.head())
Explanation: Step 2: Read in your data set
End of explanation
df_raw['admit'].count()
df_raw['gpa'].count()
df_raw.shape
rows,columns = df_raw.shape
print(rows)
print(columns)
Explanation: Questions
Question 1. How many observations are in our dataset?
End of explanation
#function
def summary_table():
    #creates and returns a summary table for df_raw using .describe()
    x = df_raw.describe()
    return x
print(summary_table())
df_raw.describe()
Explanation: Answer: 400 observations. These 400 observations are spread across 4 columns (admit, gre, gpa, prestige).
Question 2. Create a summary table
End of explanation
df_raw.dropna()
#drops any missing data rows from admissions.csv dataset
#returns 397 observations (complete observation rows) across 4 columns
#3 rows had missing, incomplete, NaN data present
Explanation: Question 3. Why would GRE have a larger STD than GPA?
Answer: The GRE variable has a larger 'std' value since GRE scores range from 220 to 800, while GPA only ranges from 2.26 to 4.00.
Question 4. Drop data points with missing data
End of explanation
#boxplot for GRE column data
df_raw.boxplot(column = 'gre', return_type = 'axes')
#boxplot for GPA column data
df_raw.boxplot(column = 'gpa', return_type = 'axes')
Explanation: Question 5. Confirm that you dropped the correct data. How can you tell?
Answer: Code in question one returned 400 observations across 4 columns. Culled data using the '.dropna()' method returns 397 observation rows, implying that three rows had been removed due to NaN data being present.
Question 6. Create box plots for GRE and GPA
End of explanation
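# A concrete way to confirm the drop described above: keep the cleaned frame under a new
# (illustrative) name and compare shapes and remaining missing-value counts.
df_clean = df_raw.dropna()
print(df_raw.shape, df_clean.shape)
print(df_raw.isnull().sum())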
# distribution plot of 'admit' variable with mean
df_raw.admit.plot(kind = 'density', sharex = False, sharey = False, figsize = (10,4));plt.legend(loc='best')
#
plt.vlines(df_raw.admit.mean(), # Plot black line at mean
ymin=0,
ymax=2.0,
linewidth=4.0)
# distribution plot of 'gre' variable with mean
df_raw.gre.plot(kind = 'density', sharex = False, sharey = False, figsize = (10,4));plt.legend(loc='best')
#
plt.vlines(df_raw.gre.mean(), # Plot black line at mean
ymin=0,
ymax=0.0035,
linewidth=4.0)
# distribution plot of 'gpa' variable with mean
df_raw.gpa.plot(kind = 'density', sharex = False, sharey = False, figsize = (10,4));plt.legend(loc='best')
#
plt.vlines(df_raw.gpa.mean(), # Plot black line at mean
ymin=0,
ymax=1.0,
linewidth=4.0)
# distribution plot of 'prestige' variable with mean
df_raw.prestige.plot(kind = 'density', sharex = False, sharey = False, figsize = (10,4));plt.legend(loc='best')
#
plt.vlines(df_raw.prestige.mean(), # Plot black line at mean
ymin=0,
ymax=0.6,
linewidth=4.0)
Explanation: Question 7. What do these plots show?
Answer:
GRE Boxplot:
The mean for this variable lies just south of 600 (around 580) and the interquartile range lies between 650 and 510 as indicated by the blue square. The box plot displays a significant outlier at 300 which has not been included in the range as it falls well outside the acceptable standard deviation from the mean. Further, this value is below the lower extreme of variable GPA.
GPA Boxplot:
The mean GPA value falls right at ~3.40 with the interquartile range falling between ~3.64 at the upper quartile and ~3.18 at the lower quartile. The lower extreme of this data is right at 2.4 while the upper extreme extends beyond 4.00 despite the maximum of this data being 4.00.
Question 8. Describe each distribution
End of explanation
# correlation matrix for variables in df_raw
df_raw.corr()
Explanation: Question 9. If our model had an assumption of a normal distribution would we meet that requirement?
Answer: We would not meet that requirement as only the variable 'gre' displays itself in a quasi normal distribution. The variables for admit, gpa, and prestige are abnormally distributed.
Question 10. Does this distribution need correction? If so, why? How?
Answer: Yes, this data does need to be corrected. If we are to compare these variables through linear regression or other statistics inferential tools, the data must be normalized in order to conform to a more normal distribution.
We can accomplish this by creating a new dataframe like so:
df_norm = (df_raw - df_raw.mean()) / (df_raw.max() - df_raw.min())
Sourced solution for normalization:
http://stackoverflow.com/questions/12525722/normalize-data-in-pandas
Question 11. Which of our variables are potentially colinear?
End of explanation
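# A minimal sketch of the min-max normalization mentioned in the answer to Question 10
# (df_norm is an illustrative name; df_raw itself is left untouched).
df_norm = (df_raw - df_raw.mean()) / (df_raw.max() - df_raw.min())
df_norm.describe()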
#utilized this stackoverflow.com resource to attempt to impute missing data
#(http://stackoverflow.com/questions/21050426/pandas-impute-nans)
#data imputation for variable 'admit'
#first commented out line of code will not run. Had errors with "keys" in ...df_raw.groupby('keys')...
#df_raw['admit'].fillna(df_raw.groupby('keys')['admit'].transform('mean'), inplace = True)
df_raw['admit'].fillna(df_raw['admit'].mean(), inplace = True)
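# A hedged sketch for the "log transform the skewed data" bonus item: np.log1p is one
# reasonable choice of transform, and the new column names are illustrative only.
df_raw['gre_log'] = np.log1p(df_raw['gre'])
df_raw['gpa_log'] = np.log1p(df_raw['gpa'])
df_raw[['gre_log', 'gpa_log']].describe()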
Explanation: Question 12. What did you find?
Answer:
The strongest, most interesting correlation between two variables exists between 'admit' and 'prestige'. The two variables are negatively correlated (-0.241). This would imply that as the prestige of your school increases by one unit, your likelihood of admission to UCLA decreases by a factor of 0.241 (or ~25%), holding all other variables constant.
GPA and GRE variables are positively correlated in that as your GPA increases, your GRE score increases by a factor of 0.382408.
Question 13. Write an analysis plan for exploring the association between grad school admissions rates and prestige of undergraduate schools.
Answer:
I will examine the relationship between variables 'prestige' and 'admit' using the admissions.csv database in order to determine if the two variables are correlated and if they are causally linked. Further, I will determine if this relationship is statistically significant.
Question 14. What is your hypothesis?
Answer:
H1 = There exists a statistically significant relationship between undergraduate school prestige ('prestige') and admission ('admit').
H0 = There exists an insignificant relationship between variables of undergraduate school prestige ('prestige') and admission ('admit').
Bonus/Advanced
1. Bonus: Explore alternatives to dropping observations with missing data
2. Bonus: Log transform the skewed data
3. Advanced: Impute missing data
End of explanation |
357 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
These are the search queries for the Spotify Web API
Step1: 1) With "Lil Wayne" and "Lil Kim" there are a lot of "Lil" musicians. Do a search and print a list of 50 that are playable in the USA (or the country of your choice), along with their popularity score.
Step2: 2 a) What genres are most represented in the search results?
Finding all the genres and combining into one list.
Step3: Counting the genres.
Step4: Sorting the genres by occurrences.
Step5: 2 b) Edit your previous printout to also display a list of their genres in the format "GENRE_1, GENRE_2, GENRE_3". If there are no genres, print "No genres listed".
Step6: 3 a) Use a for loop to determine who BESIDES Lil Wayne has the highest popularity rating.
Step7: 3 b) Is it the same artist who has the largest number of followers?
Step8: Creating a list of the follower counts, so we can sort them and say which one is the highest
Step9: Deciding which one is highest
Step10: 4) Print a list of Lil's that are more popular than Lil' Kim.
Establishing how high Lil' Kim's popularity is. Would this be possible in one go?
Step11: 5) Pick two of your favorite Lils to fight it out, and use their IDs to print out their top tracks.
Tip
Step12: 6 Will the world explode if a musician swears? Get an average popularity for their explicit songs vs. their non-explicit songs. How many minutes of explicit songs do they have? Non-explicit?
Number of Explicit Tracks for Lil Scrappy.
Step13: And this is the same for Lil Mama
Step14: 7 a) Since we're talking about Lils, what about Biggies? How many total "Biggie" artists are there? How many total "Lil"s? If you made 1 request every 5 seconds, how long would it take to download information on all the Lils vs the Biggies?
Step15: 8) Out of the top 50 "Lil"s and the top 50 "Biggie"s, who is more popular on average? | Python Code:
import requests
response = requests.get('https://api.spotify.com/v1/search?query=Lil&type=artist&limit=50&market=US')
Lil_data = response.json()
Lil_data.keys()
Lil_data['artists'].keys()
Explanation: These are the search queries for the Spotify Web API
End of explanation
Lil_artists = Lil_data['artists']['items']
for artist in Lil_artists:
print(artist['name'], artist['popularity'])
Explanation: 1) With "Lil Wayne" and "Lil Kim" there are a lot of "Lil" musicians. Do a search and print a list of 50 that are playable in the USA (or the country of your choice), along with their popularity score.
End of explanation
Lil_artists = Lil_data['artists']['items']
for artist in Lil_artists:
print(artist['name'], artist['popularity'])
#joining
if len(artist['genres']) == 0:
print("No genres listed")
else:
genres = ", ".join(artist['genres'])
print("Genres: ", genres)
Lil_artists = Lil_data['artists']['items']
Lil_genres_list = []
for genres in Lil_artists:
Lil_genres_list = genres["genres"] + Lil_genres_list
print(Lil_genres_list)
Explanation: 2 a) What genres are most represented in the search results?
Finding all the genres and combining into one list.
End of explanation
Genre_list = [[x,Lil_genres_list.count(x)] for x in set(Lil_genres_list)]
print(Genre_list)
Explanation: Counting the genres.
End of explanation
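# An equivalent, arguably tidier way to count the genres gathered above, using
# collections.Counter on the same Lil_genres_list.
from collections import Counter
genre_counts = Counter(Lil_genres_list)
print(genre_counts.most_common(5))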
sorted(Genre_list, key = lambda x: int(x[1]), reverse=True)
Sorted_by_occurences_Genre_list = sorted(Genre_list, key = lambda x: int(x[1]), reverse=True)
print("The most frequent genre of the musicians called Lil is", Sorted_by_occurences_Genre_list[0])
Explanation: Sorting the genres by occurrences.
End of explanation
Lil_artists = Lil_data['artists']['items']
for artist in Lil_artists:
if artist['genres'] == []:
print(artist['name'], artist['popularity'], "No genres listed.")
else:
print(artist['name'], artist['popularity'], artist['genres'])
Explanation: 2 b) Edit your previous printout to also display a list of their genres in the format "GENRE_1, GENRE_2, GENRE_3". If there are no genres, print "No genres listed".
End of explanation
for artist in Lil_artists:
if artist['popularity'] >= 72 and artist['name'] != 'Lil Wayne':
print(artist['name'])
#Better solution:
most_popular_name = ""
most_popular_score = 0
for artist in Lil_artists:
#print("Comparing", artist['popularity'], 'to', most_popular_score)
if artist['popularity'] > most_popular_score:
print("checking for Lil Wayne")
if artist['name'] == 'Lil Wayne':
print('go away')
else:
#The change you are keeping track of
#a.k.a. what you are keeping track of
print('not Lil Wayne, updating our notebook')
most_popular_name = artist['name']
most_popular_score = artist['popularity']
print(most_popular_name, most_popular_score)
####### This doesn't work
#name = 'Lil Soma'
#target_score = 72
#1 INITIAL CONDITION
#second_best_artists = []
#second_best_artists = [Lil Yachty]
#Aggregation Problem
#When you're looping through a series of serious objects
#and sometimes you want to add one of those objects
#to a different list
#for artist in artists:
# print('Looking at', artist['name'])
#2 COndition
#wehen we want someone on the list
# if artist['popularity'] == 72:
# print('!!! The artist is popularity is 72.')
# second_best_artists.append(second_best_artists)
Lil_data['artists'].keys()
for artist in Lil_artists:
    if artist['name'] == "Lil Wayne":
        print("Lil Wayne's popularity is", artist['popularity'])
Explanation: 3 a) Use a for loop to determine who BESIDES Lil Wayne has the highest popularity rating.
End of explanation
type(artist['followers'])
artist['followers']
Explanation: 3 b) Is it the same artist who has the largest number of followers?
End of explanation
Lil_artists = Lil_data['artists']['items']
List_of_Followers = []
for artist in Lil_artists:
List_of_Followers.append(artist['followers']['total'])
print(List_of_Followers)
Explanation: Creating a list of the follower counts, so we can sort them and say which one is the highest
End of explanation
List_of_Followers.sort(reverse=True)
print(List_of_Followers)
Highest_Number_of_Followers = (List_of_Followers[0])
print(Highest_Number_of_Followers)
for artist in Lil_artists:
if artist['followers']['total'] > List_of_Followers[0] and artist['name'] != 'Lil Wayne':
print(artist['name'], "has more followers than Lil Wayne.")
else:
        print("There are no artists with more followers than Lil Wayne.")
break
Explanation: Deciding which one is highest:
End of explanation
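# A more direct way to answer 3 b), assuming the same Lil_artists list: take the
# artist with the largest follower count and simply check the name.
most_followed = max(Lil_artists, key=lambda a: a['followers']['total'])
print(most_followed['name'], "has the most followers:", most_followed['followers']['total'])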
for artist in Lil_artists:
if artist['name'] == "Lil' Kim":
print(artist['popularity'])
for artist in Lil_artists:
if artist['popularity'] > 62:
print(artist['name'], artist['popularity'])
Explanation: 4) Print a list of Lil's that are more popular than Lil' Kim.
Establishing how high Lil' Kim's popularity is. Would this be possible in one go?
End of explanation
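# Sketch of the "in one go" idea raised above: look up Lil' Kim's popularity first,
# then filter the same Lil_artists list in a single pass (assumes she appears in it).
kim_popularity = next(a['popularity'] for a in Lil_artists if a['name'] == "Lil' Kim")
for a in Lil_artists:
    if a['popularity'] > kim_popularity:
        print(a['name'], a['popularity'])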
for artist in Lil_artists:
print(artist['name'], artist['id'])
response = requests.get('https://api.spotify.com/v1/artists/5einkgXXrjhfYCyac1FANB/top-tracks?country=US')
Lil_Scrappy_data = response.json()
type(Lil_Scrappy_data)
response = requests.get('https://api.spotify.com/v1/artists/5qK5bOC6wLtuLhG5KvU17c/top-tracks?country=US')
Lil_Mama_data = response.json()
type(Lil_Mama_data)
Lil_Scrappy_data.keys()
Lil_Mama_data.keys()
type(Lil_Scrappy_data.keys())
type(Lil_Mama_data.keys())
Scrappy_tracks = Lil_Scrappy_data['tracks']
for tracks in Scrappy_tracks:
print(tracks['name'])
Mama_tracks = Lil_Mama_data['tracks']
for tracks in Mama_tracks:
print(tracks['name'])
Explanation: 5) Pick two of your favorite Lils to fight it out, and use their IDs to print out their top tracks.
Tip: You're going to be making two separate requests, be sure you DO NOT save them into the same variable.
End of explanation
explicit_track_scrappy = 0
non_explicit_track_scrappy = 0
unknown_scrappy = 0
for tracks in Scrappy_tracks:
if tracks['explicit'] == True:
explicit_track_scrappy = explicit_track_scrappy + 1
elif tracks['explicit'] == False:
non_explicit_track_scrappy = non_explicit_track_scrappy + 1
else:
unknown_scrappy = unknown_scrappy + 1
explicit_track_pop_total = 0
non_explicit_track_pop_total = 0
for tracks in Scrappy_tracks:
if tracks['explicit'] == True:
explicit_track_pop_total = explicit_track_pop_total + tracks['popularity']
elif tracks['explicit'] == False:
non_explicit_track_pop_total = non_explicit_track_pop_total + tracks['popularity']
explicit_track_duration_total = 0
non_explicit_track_duration_total = 0
for tracks in Scrappy_tracks:
if tracks['explicit'] == True:
explicit_track_duration_total = explicit_track_duration_total + tracks['duration_ms']
elif tracks['explicit'] == False:
non_explicit_track_duration_total = non_explicit_track_duration_total + tracks['duration_ms']
print("The average rating of explicit songs by Lil Scrappy is", round(explicit_track_pop_total / explicit_track_scrappy), ".")
print("The average rating of non-explicit songs by Lil Scrappy is", round(non_explicit_track_pop_total / non_explicit_track_scrappy), ".")
print("The duration of explicit song material of Lil Scrappy is", round(explicit_track_duration_total / 60000), "minutes, and of non-explicit material is", round(non_explicit_track_duration_total / 60000), "minutes.")
Explanation: 6 Will the world explode if a musician swears? Get an average popularity for their explicit songs vs. their non-explicit songs. How many minutes of explicit songs do they have? Non-explicit?
Number of Explicit Tracks for Lil Scrappy.
End of explanation
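# A more compact version of the aggregation above, assuming the same Scrappy_tracks
# list of track dicts; pandas is an extra dependency used only for this sketch.
import pandas as pd
scrappy_df = pd.DataFrame(Scrappy_tracks)[['explicit', 'popularity', 'duration_ms']]
scrappy_df['minutes'] = scrappy_df['duration_ms'] / 60000
print(scrappy_df.groupby('explicit').agg({'popularity': 'mean', 'minutes': 'sum'}))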
explicit_track_Mama = 0
non_explicit_track_Mama = 0
unknown = 0
for tracks in Mama_tracks:
if tracks['explicit'] == True:
explicit_track_Mama = explicit_track_Mama + 1
elif tracks['explicit'] == False:
non_explicit_track_Mama = non_explicit_track_Mama + 1
else:
unknown = unknown + 1
explicit_track_pop_total_Mama = 0
non_explicit_track_pop_total_Mama = 0
for tracks in Mama_tracks:
if tracks['explicit'] == True:
explicit_track_pop_total_Mama = explicit_track_pop_total_Mama + tracks['popularity']
elif tracks['explicit'] == False:
non_explicit_track_pop_total_Mama = non_explicit_track_pop_total_Mama + tracks['popularity']
explicit_track_duration_total_Mama = 0
non_explicit_track_duration_total_Mama = 0
for tracks in Mama_tracks:
if tracks['explicit'] == True:
explicit_track_duration_total_Mama = explicit_track_duration_total_Mama + tracks['duration_ms']
elif tracks['explicit'] == False:
non_explicit_track_duration_total_Mama = non_explicit_track_duration_total_Mama + tracks['duration_ms']
print("The average rating of explicit songs by Lil Mama is", round(explicit_track_pop_total_Mama / explicit_track_Mama), ".")
print("The average rating of non-explicit songs by Lil Mama is", round(non_explicit_track_pop_total_Mama / non_explicit_track_Mama), ".")
print("The duration of explicit song material of Lil Mama is", round(explicit_track_duration_total_Mama / 60000), "minutes, and of non-explicit material is", round(non_explicit_track_duration_total_Mama / 60000), "minutes.")
Explanation: And this is the same for Lil Mama:
End of explanation
response = requests.get('https://api.spotify.com/v1/search?query=Biggie&type=artist&limit=50&market=US')
Biggie_data = response.json()
response = requests.get('https://api.spotify.com/v1/search?query=Lil&type=artist&limit=50&market=US')
Lil_data = response.json()
Biggie_artists = Biggie_data['artists']['total']
Lil_artists = Lil_data['artists']['total']
print("There are", Biggie_artists, "artists named Biggie on Spotify and", Lil_artists, "named Lil",)
Total_Download_Time_Biggie = Biggie_artists / 50 * 5
Total_Download_Time_Lil = Lil_artists / 50 * 5
print("It would take", round(Total_Download_Time_Biggie), "seconds to download all the Biggie artists and", round(Total_Download_Time_Lil), "seconds to download the Lil artists." )
Explanation: 7 a) Since we're talking about Lils, what about Biggies? How many total "Biggie" artists are there? How many total "Lil"s? If you made 1 request every 5 seconds, how long would it take to download information on all the Lils vs the Biggies?
End of explanation
Lil_artists_popularity = Lil_data['artists']['items']
popularity_total = 0
for popularity in Lil_artists_popularity:
popularity_total = popularity_total + popularity['popularity']
print("The average rating for the top 50 artists called Lil is:", round(popularity_total / 50))
Biggie_artists_popularity = Biggie_data['artists']['items']
Biggie_popularity_total = 0
for popularity2 in Biggie_artists_popularity:
Biggie_popularity_total = Biggie_popularity_total + popularity2['popularity']
print("The average rating for the top 50 artists called Biggie is:", round(Biggie_popularity_total / 49) )
Lil_artists_popularity = Lil_data['artists']['items']
for popularity in Lil_artists_popularity:
print(popularity['name'], popularity['popularity'])
Biggie_popularity = Biggie_data['artists']['items']
for artist in Biggie_popularity:
print(artist['name'], artist['popularity'])
import csv
with open('Biggie.csv', 'w') as mycsvfile:
thedatawriter = csv.writer(mycsvfile)
for artist in Biggie_popularity:
        thedatawriter.writerow([artist['name'], artist['popularity']])
Explanation: 8) Out of the top 50 "Lil"s and the top 50 "Biggie"s, who is more popular on average?
End of explanation |
358 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Q1
In this question, you'll write some coding that performs string manipulation. This is pretty much your warm-up.
Part A
What's your favorite positive number? Reassign the favorite_number variable with something that's at least larger than 0.
Step1: Part B
Print out a famous quote! In the code below, fill out the string variables to contain the name of a famous person, and a quote that they said.
Step2: Part C
You're working late on a homework assignment and have copied a few lines from a Wikipedia article. In your tired stupor, your copy/paste skills leave something to be desired. Rather than try to force your mouse hand to stop shaking, you figure it's easier to write a small Python program to strip out errant whitespace from your copying-pasting.
Reassign each of the three variables (keeping their names the same, and their respective content otherwise unaltered) so that the trailing whitespace on both ends is removed.
Step3: Part D
You discover that there are numbers in the text you'd like to be able to parse out for some math down the road. After stripping out the trailing whitespace, convert the numbers to their proper numerical form. Assign them to the variables num1, num2, and num3 respectively.
Step4: Part E
Take the number below, find its square root, convert it to a string, and then print it out. You must use the correct arithmetic operator for the square root, as well as the correct casting function for the string conversion. Put the result in the variable str_version and print that out. | Python Code:
favorite_number = -1
### BEGIN SOLUTION
### END SOLUTION
print("My favorite number is: " + str(favorite_number))
assert favorite_number >= 0
Explanation: Q1
In this question, you'll write some coding that performs string manipulation. This is pretty much your warm-up.
Part A
What's your favorite positive number? Reassign the favorite_number variable with something that's at least larger than 0.
End of explanation
famous_person = ""
their_quote = ""
### BEGIN SOLUTION
### END SOLUTION
print("{}, at age {}, said:\n\n\"{}\"".format(famous_person, favorite_number, their_quote))
assert len(famous_person) > 0
assert len(their_quote) > 0
Explanation: Part B
Print out a famous quote! In the code below, fill out the string variables to contain the name of a famous person, and a quote that they said.
End of explanation
line1 = 'Python supports multiple programming paradigms, including object-oriented, imperative\n'
line2 = ' and functional programming or procedural styles. It features a dynamic type\n'
line3 = ' system and automatic memory management and has a large and comprehensive standard library.\n '
### BEGIN SOLUTION
### END SOLUTION
assert line1[-1] != '\n'
assert line2[0] != line2[1] != ' '
assert line2[-1] != '\n'
assert line3[0] != ' '
assert line3[-1] != '\n'
Explanation: Part C
You're working late on a homework assignment and have copied a few lines from a Wikipedia article. In your tired stupor, your copy/paste skills leave something to be desired. Rather than try to force your mouse hand to stop shaking, you figure it's easier to write a small Python program to strip out errant whitespace from your copying-pasting.
Reassign each of the three variables (keeping their names the same, and their respective content otherwise unaltered) so that the trailing whitespace on both ends is removed.
End of explanation
line1 = ' 495.59863 \n'
line2 = '\t134 '
line3 = '\n\t -5.4 \t'
num1 = -1
num2 = -1
num3 = -1
### BEGIN SOLUTION
### END SOLUTION
assert num1 > 495 and num1 < 496
assert num2 == 134
assert num3 > -6 and num3 < -5
Explanation: Part D
You discover that there are numbers in the text you'd like to be able to parse out for some math down the road. After stripping out the trailing whitespace, convert the numbers to their proper numerical form. Assign them to the variables num1, num2, and num3 respectively.
End of explanation
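# One possible approach for Part D, sketched outside the graded solution cell:
# strip the whitespace first, then cast to the numeric type each assert expects.
# num1 = float(line1.strip())   # 495.59863
# num2 = int(line2.strip())     # 134
# num3 = float(line3.strip())   # -5.4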
number = 3.14159265359
str_version = ""
### BEGIN SOLUTION
### END SOLUTION
assert str_version == "1.7724538509055743"
Explanation: Part E
Take the number below, find its square root, convert it to a string, and then print it out. You must use the correct arithmetic operator for the square root, as well as the correct casting function for the string conversion. Put the result in the variable str_version and print that out.
End of explanation |
359 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Descriptive statistics
Goals of this lesson
Students will learn
Step1: 0. Open dataset and load package
This dataset examines the relationship between multitasking and working memory. Link here to original paper by Uncapher et al. 2016.
Step2: 1. Familiarize yourself with the data
Quick review from data cleaning
Step3: 2. Selecting relevant variables
Sometimes datasets have many variables that are unnecessary for a given analysis. To simplify your life, and your code, we can select only the given variables we'd like to use for now.
Step4: 3. Basic Descriptives
Summarizing data
Let's learn how to make simple tables of summary statistics.
First, we will calculate summary info across all data using describe(), a useful function for creating summaries. Note that we're not creating a new object for this summary (i.e. not using the = symbol), so this will print but not save.
Step5: 3. Grouping data
Next, we will learn how to group data based on certain variables of interest.
We will use the groupby() function in pandas, which will automatically group any subsequent actions called on the data.
Step6: We can group data by more than one factor. Let's say we're interested in how levels of ADHD interact with groupStatus (multitasking
Step7: Then we'll check how evenly split these groups are by using the groupby() and size() functions
Step8: Then we'll calculate some summary info about these groups
Step9: A note on piping / stringing commands together
In R, we often use the pipe %>% to string a series of steps together. We can do the same in python with many functions in a row
This is how we're able to take the output of df.groupby(["groupStatus","adhdF"]) and then send that output into the mean() function
5. Extra
Step10: How many trials were there per subject?
Step11: Combine summary statistics with the full data frame
For some analyses, you might want to add a higher level variable (e.g. subject average hitRate) alongside your long data. We can do this by summarizing the data in a new data frame and then merging it with the full data. | Python Code:
# load packages we will be using for this lesson
import pandas as pd
Explanation: Descriptive statistics
Goals of this lesson
Students will learn:
How to group and categorize data in Python
How to generate descriptive statistics in Python
End of explanation
# use pd.read_csv to open data into python
df = pd.read_csv("uncapher_2016_repeated_measures_dataset.csv")
Explanation: 0. Open dataset and load package
This dataset examines the relationship between multitasking and working memory. Link here to original paper by Uncapher et al. 2016.
End of explanation
df.head()
df.shape
df.columns
Explanation: 1. Familiarize yourself with the data
Quick review from data cleaning: take a look at the basic data structure, number of rows and columns.
End of explanation
df = df[["subjNum", "groupStatus", "adhd", "hitRate", "faRate", "dprime"]]
df.head()
Explanation: 2. Selecting relevant variables
Sometimes datasets have many variables that are unnecessary for a given analysis. To simplify your life, and your code, we can select only the given variables we'd like to use for now.
End of explanation
df.describe()
Explanation: 3. Basic Descriptives
Summarizing data
Let's learn how to make simple tables of summary statistics.
First, we will calculate summary info across all data using describe(), a useful function for creating summaries. Note that we're not creating a new object for this summary (i.e. not using the = symbol), so this will print but not save.
End of explanation
df.groupby(["groupStatus"]).mean()
Explanation: 3. Grouping data
Next, we will learn how to group data based on certain variables of interest.
We will use the groupby() function in pandas, which will automatically group any subsequent actions called on the data.
End of explanation
df["adhdF"] = pd.cut(df["adhd"],bins=2,labels=["Low","High"])
Explanation: We can group data by more than one factor. Let's say we're interested in how levels of ADHD interact with groupStatus (multitasking: high or low).
We will first bin the ADHD scores into two levels (Low/High), and add the result as a grouping variable using the cut() function in pandas:
End of explanation
df.groupby(["groupStatus","adhdF"]).size()
Explanation: Then we'll check how evenly split these groups are by using the groupby() and size() functions:
End of explanation
df.groupby(["groupStatus","adhdF"]).mean()
Explanation: Then we'll calculate some summary info about these groups:
End of explanation
subList = df["subjNum"].unique()
nSubs = len(subList)
nSubs
Explanation: A note on piping / stringing commands together
In R, we often use the pipe %>% to string a series of steps together. We can do the same in python with many functions in a row
This is how we're able to take the output of df.groupby(["groupStatus","adhdF"]) and then send that output into the mean() function
5. Extra: Working with a long dataset
This is a repeated measures ("long") dataset, with multiple rows per subject. This makes things a bit trickier, but we are going to show you some tools for how to work with "long" datasets.
How many unique subjects are in the data?
End of explanation
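# A small illustration of the chaining note above: each call's output feeds the next,
# much like %>% in R (same df and columns used throughout this lesson).
df.groupby("groupStatus")["hitRate"].mean().round(3)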
nTrialsPerSubj = df.groupby(["subjNum"]).size().reset_index(name="nTrials")
nTrialsPerSubj.head()
Explanation: How many trials were there per subject?
End of explanation
avgHR = df.groupby(["subjNum"])["hitRate"].mean().reset_index(name="avgHR")
avgHR.head()
df = df.merge(avgHR,on="subjNum")
df.head()
Explanation: Combine summary statistics with the full data frame
For some analyses, you might want to add a higher level variable (e.g. subject average hitRate) alongside your long data. We can do this by summarizing the data in a new data frame and then merging it with the full data.
End of explanation |
360 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Ocean
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Family
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables
Is Required
Step9: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required
Step10: 2.2. Eos Functional Temp
Is Required
Step11: 2.3. Eos Functional Salt
Is Required
Step12: 2.4. Eos Functional Depth
Is Required
Step13: 2.5. Ocean Freezing Point
Is Required
Step14: 2.6. Ocean Specific Heat
Is Required
Step15: 2.7. Ocean Reference Density
Is Required
Step16: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required
Step17: 3.2. Type
Is Required
Step18: 3.3. Ocean Smoothing
Is Required
Step19: 3.4. Source
Is Required
Step20: 4. Key Properties --> Nonoceanic Waters
Non oceanic waters treatement in ocean
4.1. Isolated Seas
Is Required
Step21: 4.2. River Mouth
Is Required
Step22: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required
Step23: 5.2. Code Version
Is Required
Step24: 5.3. Code Languages
Is Required
Step25: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required
Step26: 6.2. Canonical Horizontal Resolution
Is Required
Step27: 6.3. Range Horizontal Resolution
Is Required
Step28: 6.4. Number Of Horizontal Gridpoints
Is Required
Step29: 6.5. Number Of Vertical Levels
Is Required
Step30: 6.6. Is Adaptive Grid
Is Required
Step31: 6.7. Thickness Level 1
Is Required
Step32: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required
Step33: 7.2. Global Mean Metrics Used
Is Required
Step34: 7.3. Regional Metrics Used
Is Required
Step35: 7.4. Trend Metrics Used
Is Required
Step36: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required
Step37: 8.2. Scheme
Is Required
Step38: 8.3. Consistency Properties
Is Required
Step39: 8.4. Corrected Conserved Prognostic Variables
Is Required
Step40: 8.5. Was Flux Correction Used
Is Required
Step41: 9. Grid
Ocean grid
9.1. Overview
Is Required
Step42: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required
Step43: 10.2. Partial Steps
Is Required
Step44: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required
Step45: 11.2. Staggering
Is Required
Step46: 11.3. Scheme
Is Required
Step47: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required
Step48: 12.2. Diurnal Cycle
Is Required
Step49: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required
Step50: 13.2. Time Step
Is Required
Step51: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required
Step52: 14.2. Scheme
Is Required
Step53: 14.3. Time Step
Is Required
Step54: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required
Step55: 15.2. Time Step
Is Required
Step56: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required
Step57: 17. Advection
Ocean advection
17.1. Overview
Is Required
Step58: 18. Advection --> Momentum
Properties of lateral momemtum advection scheme in ocean
18.1. Type
Is Required
Step59: 18.2. Scheme Name
Is Required
Step60: 18.3. ALE
Is Required
Step61: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required
Step62: 19.2. Flux Limiter
Is Required
Step63: 19.3. Effective Order
Is Required
Step64: 19.4. Name
Is Required
Step65: 19.5. Passive Tracers
Is Required
Step66: 19.6. Passive Tracers Advection
Is Required
Step67: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required
Step68: 20.2. Flux Limiter
Is Required
Step69: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required
Step70: 21.2. Scheme
Is Required
Step71: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required
Step72: 22.2. Order
Is Required
Step73: 22.3. Discretisation
Is Required
Step74: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momemtum scheme in the ocean
23.1. Type
Is Required
Step75: 23.2. Constant Coefficient
Is Required
Step76: 23.3. Variable Coefficient
Is Required
Step77: 23.4. Coeff Background
Is Required
Step78: 23.5. Coeff Backscatter
Is Required
Step79: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required
Step80: 24.2. Submesoscale Mixing
Is Required
Step81: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required
Step82: 25.2. Order
Is Required
Step83: 25.3. Discretisation
Is Required
Step84: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required
Step85: 26.2. Constant Coefficient
Is Required
Step86: 26.3. Variable Coefficient
Is Required
Step87: 26.4. Coeff Background
Is Required
Step88: 26.5. Coeff Backscatter
Is Required
Step89: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required
Step90: 27.2. Constant Val
Is Required
Step91: 27.3. Flux Type
Is Required
Step92: 27.4. Added Diffusivity
Is Required
Step93: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required
Step94: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required
Step95: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
*Properties of boundary layer (BL) mixing on tracers in the ocean *
30.1. Type
Is Required
Step96: 30.2. Closure Order
Is Required
Step97: 30.3. Constant
Is Required
Step98: 30.4. Background
Is Required
Step99: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
*Properties of boundary layer (BL) mixing on momentum in the ocean *
31.1. Type
Is Required
Step100: 31.2. Closure Order
Is Required
Step101: 31.3. Constant
Is Required
Step102: 31.4. Background
Is Required
Step103: 32. Vertical Physics --> Interior Mixing --> Details
*Properties of interior mixing in the ocean *
32.1. Convection Type
Is Required
Step104: 32.2. Tide Induced Mixing
Is Required
Step105: 32.3. Double Diffusion
Is Required
Step106: 32.4. Shear Mixing
Is Required
Step107: 33. Vertical Physics --> Interior Mixing --> Tracers
*Properties of interior mixing on tracers in the ocean *
33.1. Type
Is Required
Step108: 33.2. Constant
Is Required
Step109: 33.3. Profile
Is Required
Step110: 33.4. Background
Is Required
Step111: 34. Vertical Physics --> Interior Mixing --> Momentum
*Properties of interior mixing on momentum in the ocean *
34.1. Type
Is Required
Step112: 34.2. Constant
Is Required
Step113: 34.3. Profile
Is Required
Step114: 34.4. Background
Is Required
Step115: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required
Step116: 35.2. Scheme
Is Required
Step117: 35.3. Embeded Seaice
Is Required
Step118: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required
Step119: 36.2. Type Of Bbl
Is Required
Step120: 36.3. Lateral Mixing Coef
Is Required
Step121: 36.4. Sill Overflow
Is Required
Step122: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required
Step123: 37.2. Surface Pressure
Is Required
Step124: 37.3. Momentum Flux Correction
Is Required
Step125: 37.4. Tracers Flux Correction
Is Required
Step126: 37.5. Wave Effects
Is Required
Step127: 37.6. River Runoff Budget
Is Required
Step128: 37.7. Geothermal Heating
Is Required
Step129: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required
Step130: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required
Step131: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required
Step132: 40.2. Ocean Colour
Is Required
Step133: 40.3. Extinction Depth
Is Required
Step134: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmopshere
Is Required
Step135: 41.2. From Sea Ice
Is Required
Step136: 41.3. Forced Mode Restoring
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'dwd', 'sandbox-2', 'ocean')
Explanation: ES-DOC CMIP6 Model Properties - Ocean
MIP Era: CMIP6
Institute: DWD
Source ID: SANDBOX-2
Topic: Ocean
Sub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing.
Properties: 133 (101 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:57
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
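# Illustrative call only - the name and email below are placeholders,
# not actual document authors; edit before uncommenting:
#     DOC.set_author("Jane Doe", "jane.doe@example.org")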
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
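# Illustrative only - the explanation below gives examples such as
# "NEMO 3.6" or "MOM 5.0"; replace with the actual model code name:
#     DOC.set_value("NEMO 3.6")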
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean model code (NEMO 3.6, MOM 5.0,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
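# Illustrative only - pick exactly one of the valid choices listed above, e.g.:
#     DOC.set_value("OGCM")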
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
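# Illustrative only - cardinality is 1.N, so (as assumed here) each chosen
# approximation is recorded with its own DOC.set_value call, e.g.:
#     DOC.set_value("Primitive equations")
#     DOC.set_value("Boussinesq")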
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the ocean.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the ocean component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
Explanation: 2.2. Eos Functional Temp
Is Required: TRUE Type: ENUM Cardinality: 1.1
Temperature used in EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
Explanation: 2.3. Eos Functional Salt
Is Required: TRUE Type: ENUM Cardinality: 1.1
Salinity used in EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
Explanation: 2.4. Eos Functional Depth
Is Required: TRUE Type: ENUM Cardinality: 1.1
Depth or pressure used in EOS for sea water ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2.5. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
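# Illustrative only - the number below is a typical order of magnitude for
# cpocean in J/(kg K), not the value used by this model:
#     DOC.set_value(3992.0)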
Explanation: 2.6. Ocean Specific Heat
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Specific heat in ocean (cpocean) in J/(kg K)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
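# Illustrative only - a commonly quoted Boussinesq reference density in kg/m3,
# not necessarily the value used by this model:
#     DOC.set_value(1026.0)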
Explanation: 2.7. Ocean Reference Density
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Boussinesq reference density (rhozero) in kg / m3
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Reference date of bathymetry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.2. Type
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the bathymetry fixed in time in the ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Ocean Smoothing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any smoothing or hand editing of bathymetry in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Source
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe source of bathymetry in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Nonoceanic Waters
Non-oceanic waters treatment in ocean
4.1. Isolated Seas
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how isolated seas are handled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. River Mouth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how river mouth mixing or estuary-specific treatment is performed
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.4. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.5. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
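# Illustrative only - the explanation below notes the default is False:
#     DOC.set_value(False)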
Explanation: 6.6. Is Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
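# Illustrative only - the value below is a placeholder in meters, not the
# actual first-level thickness of this model:
#     DOC.set_value(1.0)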
Explanation: 6.7. Thickness Level 1
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Thickness of first surface ocean level (in meters)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process-oriented metrics, and any possible conflicts with parameterization-level tuning. In particular, describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Brief description of conservation methodology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in the ocean by the numerical schemes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Consistency Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Any additional consistency properties (energy conversion, pressure gradient discretisation, ...)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Corrected Conserved Prognostic Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Set of variables which are conserved by more than the numerical scheme alone.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.5. Was Flux Correction Used
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does conservation involve flux correction ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Grid
Ocean grid
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of grid in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
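# Illustrative only - choose one of the valid choices listed above, e.g.:
#     DOC.set_value("Z*-coordinate")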
Explanation: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical coordinates in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 10.2. Partial Steps
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Using partial steps with Z or Z* vertical coordinate in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Staggering
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal grid staggering type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of time stepping in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Diurnal Cycle
Is Required: TRUE Type: ENUM Cardinality: 1.1
Diurnal cycle type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracers time stepping scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
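# Illustrative only - the time step below (in seconds) is a placeholder,
# not the value used by this model:
#     DOC.set_value(3600)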
Explanation: 13.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Tracers time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Baroclinic time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time splitting method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.2. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Barotropic time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Details of vertical time stepping in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17. Advection
Ocean advection
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of advection in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
Explanation: 18. Advection --> Momentum
Properties of lateral momentum advection scheme in ocean
18.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of lateral momentum advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Scheme Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean momentum advection scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 18.3. ALE
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Using ALE for vertical advection ? (if vertical coordinates are sigma)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
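# Illustrative only - the advection order below is a placeholder, not the
# scheme actually used by this model:
#     DOC.set_value(3)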
Explanation: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Order of lateral tracer advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 19.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for lateral tracer advection scheme in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19.3. Effective Order
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Effective order of limited lateral tracer advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.4. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19.5. Passive Tracers
Is Required: FALSE Type: ENUM Cardinality: 0.N
Passive tracers advected
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.6. Passive Tracers Advection
Is Required: FALSE Type: STRING Cardinality: 0.1
Is advection of passive tracers different from that of active tracers ? If so, describe.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 20.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for vertical tracer advection scheme in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lateral physics in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
Explanation: 21.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transient eddy representation in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momentum scheme in the ocean
23.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics momentum eddy viscosity coeff type in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 23.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy viscosity coeff in lateral physics momentum scheme (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy viscosity coeff in lateral physics momentum scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Coeff Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe background eddy viscosity coeff in lateral physics momentum scheme (give values in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy viscosity coeff in lateral physics momentum scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a mesoscale closure in the lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 24.2. Submesoscale Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a submesoscale mixing parameterisation (i.e Fox-Kemper) in the lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Properties of eddy diffusivity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics tracers eddy diffusivity coeff type in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy diffusivity coeff in lateral physics tracers scheme (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy diffusivity coeff in lateral physics tracers scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26.4. Coeff Background
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Describe background eddy diffusivity coeff in lateral physics tracers scheme (give values in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 26.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy diffusivity coeff in lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EIV in lateral physics tracers in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 27.2. Constant Val
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If EIV scheme for tracers is constant, specify coefficient value (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.3. Flux Type
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV flux (advective or skew)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.4. Added Diffusivity
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV added diffusivity (constant, flow dependent or none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vertical physics in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there Langmuir cells mixing in upper ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
Properties of boundary layer (BL) mixing on tracers in the ocean
30.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for tracers in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of tracers, specify the order of closure (0, 1, 2.5, 3)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of tracers, specify the coefficient (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of tracers coefficient (scheme and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
Properties of boundary layer (BL) mixing on momentum in the ocean
31.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for momentum in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 31.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of momentum, specify the order of closure (0, 1, 2.5, 3)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 31.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of momentum, specify the coefficient (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 31.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of momentum coefficient (scheme and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32. Vertical Physics --> Interior Mixing --> Details
Properties of interior mixing in the ocean
32.1. Convection Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical convection in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.2. Tide Induced Mixing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how tide induced mixing is modelled (barotropic, baroclinic, none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.3. Double Diffusion
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there double diffusion ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.4. Shear Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there interior shear mixing ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33. Vertical Physics --> Interior Mixing --> Tracers
Properties of interior mixing on tracers in the ocean
33.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for tracers in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 33.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of tracers, specify the coefficient (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for tracers (i.e is NOT constant) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of tracers coefficient (scheme and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 34. Vertical Physics --> Interior Mixing --> Momentum
Properties of interior mixing on momentum in the ocean
34.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for momentum in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 34.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of momentum, specify the coefficient (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for momentum (i.e is NOT constant) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of momentum coefficient (scheme and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of free surface in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 35.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Free surface scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 35.3. Embeded Seaice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the sea-ice embedded in the ocean model (instead of levitating) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of bottom boundary layer in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.2. Type Of Bbl
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of bottom boundary layer in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 36.3. Lateral Mixing Coef
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36.4. Sill Overflow
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any specific treatment of sill overflows
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of boundary forcing in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.2. Surface Pressure
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.3. Momentum Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.4. Tracers Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.5. Wave Effects
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how wave effects are modelled at ocean surface.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.6. River Runoff Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how river runoff from land surface is routed to ocean and any global adjustment done.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.7. Geothermal Heating
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how geothermal heating is present at ocean bottom.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum bottom friction in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum lateral friction in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of sunlight penetration scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 40.2. Ocean Colour
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the ocean sunlight penetration scheme ocean colour dependent ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 40.3. Extinction Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe and list extinctions depths for sunlight penetration scheme (if applicable).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmopshere
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from atmos in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41.2. From Sea Ice
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from sea-ice in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 41.3. Forced Mode Restoring
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of surface salinity restoring in forced mode (OMIP)
End of explanation |
361 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Gibbs Sampling Example
Imagine your posterior distribution has the following form
Step1: First, let's make a contour plot of the posterior density.
Step2: Now let's run the sampler, by iteratively drawing from the conditional distribution of $x$ and $y$ given the other.
Step3: To assess how the sampler is exploring the space, we can plot a traceplot for each dimension. A traceplot plots the value of each sample against the iteration number and gives a sense of how well the sampler is exploring the space.
Step4: You can see from the traceplot that when sampling $x$, the sampler spends long periods of time near zero, and occasionally moves to and hangs out at higher values. These correspond to the two areas of high density in the contour plot.
We can also draw a histogram of $x$ to get an estimate of its marginal distribution.
Step5: This is exactly what we would expect if we projected the distribution in the contour plot down to the $x$ axis.
We can do the same plots for $y$.
Step6: Because we are in two dimensions, we can also plot the path that the sampler took through the $xy$ plane. Note that the path always takes right angles, because we are alternating between moves that only move in the $x$ direction and only move in the $y$ direction.
Step7: To see how effective the samples we have drawn will be at approximating summaries of the posterior distribution (for example the posterior mean), we can look at the autocorrelation of the samples. High autocorrelation would mean that the sample average that we take to approximate the posterior mean would have higher variance than it would if we had taken independent samples from the posterior distribution.
Step9: In both $x$ and $y$, we can see that the autocorrelation is quite high. This is not a big problem though because the sampler is so simple that we can draw millions of samples to make up for the high autocorrelation.
To figure out exactly how many samples we would have to draw, we can compute the effective sample size, a measure of how many independent samples our samples are equivalent to. This uses the same quantities that were used to compute the autocorrelation plot above. The following code is taken from https
Step10: Now we can compute effective sample size for x and y. | Python Code:
f= lambda x,y: np.exp(-(x*x*y*y+x*x+y*y-8*x-8*y)/2.)
Explanation: Gibbs Sampling Example
Imagine your posterior distribution has the following form:
$$ f(x, y \mid data) = (1/C)e^{-\frac{(x^2y^2+x^2+y^2-8x-8y)}{2}} $$
As is typical in Bayesian inference, you don't know what C (the normalizing constant) is, so you can't sample from this distribution using conventional methods. However, MCMC techniques allow us to sample from probability distributions without knowing this constant, and we will use one particular MCMC technique, Gibbs sampling, to do this here.
Gibbs sampling allows you to sample from a probability distribution by iteratively sampling from its conditional distributions. This strategy is very useful in problems where each unknown would have a very simple distribution if we knew all of the other unknowns. In this problem, the posterior distribution $f(x, y \mid data)$ is over two unknowns, $x$ and $y$. To perform Gibbs sampling, we sample from the distribution of $x$ holding $y$ constant at its current value, then sample from the distribution of $y$ holding $x$ constant at its current value. As it turns out, even though $f(x, y \mid data)$ is incredibly ugly, the conditional distributions are relatively simple.
After some simplification (completing the square and throwing all factors that do not involve $x$ into $g(y)$ for the first equation, and vice versa for the second), we find that the conditional distributions have a relatively simple form.
$$ p(x \mid y, data) = g(y) e^{-\left(x-\frac{4}{(1+y^2)}\right)^{2}\frac{(1+y^2)}{2}} $$
and
$$ p(y \mid x, data) = g(x) e^{-\left(y-\frac{4}{(1+x^2)}\right)^{2}\frac{(1+x^2)}{2}} $$
What are these distributions? They are in fact normals. Writing this in distributional notation,
$$ x \mid y, data \sim N\left(\frac{4}{1+y^2}, \sqrt{\frac{1}{1+y^2}}\right) $$
and similarly
$$ y \mid x, data \sim N\left(\frac{4}{1+x^2}, \sqrt{\frac{1}{1+x^2}}\right) $$.
We know how to draw from normal distributions, so if we iterate back and forth, we should be able to sample from $f(x, y \mid data)$!
End of explanation
xx=np.linspace(-1,8,100)
yy=np.linspace(-1,8,100)
xg,yg = np.meshgrid(xx,yy)
z=f(xg.ravel(),yg.ravel())
z2 = z.reshape(xg.shape)
z2
plt.contourf(xg,yg,z2)
Explanation: First, let's make a contour plot of the posterior density.
End of explanation
N = 40000
x=np.zeros(N+1)
y=np.zeros(N+1)
#Initialize x and y.
x[0]=1.
y[0]=6.
sig = lambda z,i: np.sqrt(1./(1.+z[i]*z[i]))
mu = lambda z,i: 4./(1.+z[i]*z[i])
for i in range(1,N,2):
sig_x = sig(y,i-1)
mu_x = mu(y,i-1)
x[i] = np.random.normal(mu_x, sig_x)
y[i] = y[i-1]
sig_y = sig(x, i)
mu_y = mu(x, i)
y[i+1] = np.random.normal(mu_y, sig_y)
x[i+1] = x[i]
Explanation: Now let's run the sampler, by iteratively drawing from the conditional distribution of $x$ and $y$ given the other.
End of explanation
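One optional refinement, not part of the original notebook: the first portion of the chain is sometimes discarded as burn-in before computing summaries. A minimal sketch, assuming a cutoff of 1000 draws:
# Optional burn-in (assumed cutoff): drop the earliest draws before summarising
burn = 1000
x_post, y_post = x[burn:], y[burn:]
print "Posterior mean estimates after burn-in:", x_post.mean(), y_post.mean()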
def traceplot(z):
plt.plot(range(len(z)),z,'.')
traceplot(x)
Explanation: To assess how the sampler is exploring the space, we can plot a traceplot for each dimension. A traceplot plots the value of each sample against the iteration number and gives a sense of how well the sampler is exploring the space.
End of explanation
plt.hist(x, bins=50);
Explanation: You can see from the traceplot that when sampling $x$, the sampler spends long periods of time near zero, and occasionally moves to and hangs out at higher values. These correspond to the two areas of high density in the contour plot.
We can also draw a histogram of $x$ to get an estimate of its marginal distribution.
End of explanation
traceplot(y)
plt.hist(y, bins=50);
Explanation: This is exactly what we would expect if we projected the distribution in the contour plot down to the $x$ axis.
We can do the same plots for $y$.
End of explanation
plt.contourf(xg,yg,z2, alpha=0.6)
plt.scatter(x,y, alpha=0.1, c='k', s=5)
plt.plot(x[:100],y[:100], c='r', alpha=0.3, lw=1)
Explanation: Because we are in two dimensions, we can also plot the path that the sampler took through the $xy$ plane. Note that the path always takes right angles, because we are alternating between moves that only move in the $x$ direction and only move in the $y$ direction.
End of explanation
plt.acorr(x-np.mean(x), maxlags=100, lw=1 , normed=True);
plt.acorr(y-np.mean(y), maxlags=100, lw=1 , normed=True);
Explanation: To see how effective the samples we have drawn will be at approximating summaries of the posterior distribution (for example the posterior mean), we can look at the autocorrelation of the samples. High autocorrelation would mean that the sample average that we take to approximate the posterior mean would have higher variance than it would if we had taken independent samples from the posterior distribution.
End of explanation
def effectiveSampleSize(data, stepSize = 1) :
    '''Effective sample size, as computed by BEAST Tracer.'''
samples = len(data)
assert len(data) > 1,"no stats for short sequences"
maxLag = min(samples//3, 1000)
gammaStat = [0,]*maxLag
#varGammaStat = [0,]*maxLag
varStat = 0.0;
if type(data) != np.ndarray :
data = np.array(data)
normalizedData = data - data.mean()
for lag in range(maxLag) :
v1 = normalizedData[:samples-lag]
v2 = normalizedData[lag:]
v = v1 * v2
gammaStat[lag] = sum(v) / len(v)
#varGammaStat[lag] = sum(v*v) / len(v)
#varGammaStat[lag] -= gammaStat[0] ** 2
# print lag, gammaStat[lag], varGammaStat[lag]
if lag == 0 :
varStat = gammaStat[0]
elif lag % 2 == 0 :
s = gammaStat[lag-1] + gammaStat[lag]
if s > 0 :
varStat += 2.0*s
else :
break
# standard error of mean
# stdErrorOfMean = Math.sqrt(varStat/samples);
# auto correlation time
act = stepSize * varStat / gammaStat[0]
# effective sample size
ess = (stepSize * samples) / act
return ess
Explanation: In both $x$ and $y$, we can see that the autocorrelation is quite high. This is not a big problem though because the sampler is so simple that we can draw millions of samples to make up for the high autocorrelation.
To figure out exactly how many samples we would have to draw, we can compute the effective sample size, a measure of how many independent samples our samples are equivalent to. This uses the same quantities that were used to compute the autocorrelation plot above. The following code is taken from https://code.google.com/p/biopy/source/browse/trunk/biopy/bayesianStats.py?r=67. You don't need to try to understand the function -- it's just here to run it, and this is a rather slow implementation.
End of explanation
esx = effectiveSampleSize(x)
esy = effectiveSampleSize(y)
print "Effective Size for x: ", esx, " of ", len(x), " samples, rate of", esx/len(x)*100, "%."
print "Effective Size for y: ", esy, " of ", len(y), " samples, rate of", esy/len(y)*100, "%."
Explanation: Now we can compute effective sample size for x and y.
End of explanation |
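As a rough illustration (my own addition, not from the original notebook), the effective-sample-size rate printed above can be turned into an estimate of how many raw draws would be needed for a target number of effectively independent samples, assuming the rate stays roughly constant:
# Illustrative only: raw draws needed for ~10,000 effectively independent samples
target_ess = 10000.
print "Approximate draws needed for x:", int(target_ess * len(x) / esx)
print "Approximate draws needed for y:", int(target_ess * len(y) / esy)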
362 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Libraries, utilities and definitions
Step1: Fractal dimension feature selection algorithm
The algorithm is adjusted to the dataset of the experiment, so the number of attributes must be modified. It calculates the approximate fractal dimension after deleting an attribute; if the new value is within the threshold the attribute can be eliminated, and the algorithm ends when no attribute is deleted from the dataset.
Step2: Fractal dimension of a dataset
This algorithm calculates the approximate fractal dimension of the given dataset, which must be loaded as a numpy array.
Step3: Fetching files
Step4: Processing the dataset
In this experiment the training data is analyzed using a threshold value of 0.005 | Python Code:
import numpy as np
import pandas as pd
from math import log
from os import listdir
from os.path import isfile, join
from scipy.stats import linregress
from sklearn.metrics.pairwise import euclidean_distances
from sklearn.preprocessing import StandardScaler
from time import time
from timeit import timeit
#Returns 1 if a point is inside a radius, if not, returns 0
def dist(p, r):
return 1 if p <= r else 0
#Makes the distance function aplicable to numpy arrays
check_dist = np.vectorize(dist)
Explanation: Libraries, utilities and definitions
End of explanation
def fractal_feature_selection(df, threshold=0.09):
#Obtains the approximate fractal dimension of the original dataset
    base_fd = fractal_dimension(df)
    print('Whole dataset approximate fractal dimension: {}'.format(base_fd))
    #The indicator value starts at the full-dataset value and is updated each time an attribute is removed
    indicator_fd = base_fd
    #List for keeping track of the attribute indices in their starting order
sorted_attrib = [[0, 0], [1, 1], [2, 2], [3, 3], [4, 4], [5, 5],
[6, 6], [7, 7], [8, 8], [9, 9], [10, 10], [11, 11], [12, 12], [13, 13]]
attribute_not_deleted = True
while attribute_not_deleted:
fd_list = []
for i in sorted_attrib:
#Deletes i attribute from dataset
X = np.delete(df, i[0], axis=1)
partial_fd = fractal_dimension(X)
#Adds the information of the approximate fractal dimension to a list to obtain the one that
#contribute less to the whole dataset
fd_list.append([i[0],
partial_fd,
abs((partial_fd / indicator_fd) - 1),
abs((partial_fd / indicator_fd) - 1) < threshold])
#Sort by partial fractal dimension value
fd_list.sort(key = lambda row: row[2])
for i in fd_list:
#Checks if the variation of the fractal dimension value is inside the threshold
if i[3] == True:
#Update fractal dimension value
indicator_fd = i[1]
#Deletes attribute that doesn't contributes more the threshold value to the farctal dimension value
df = np.delete(df, i[0], axis=1)
#Deletes the i attribute from our reference list
sorted_attrib = np.delete(sorted_attrib, i[0], axis=0)
#Decremets the relative value of the attributes to the right of the deleted one
for j in xrange(i[0], len(sorted_attrib)):
sorted_attrib[j][0] -= 1
break
#No attribute was deleted
attribute_not_deleted = False
return sorted_attrib
Explanation: Fractal dimension feature selection algorithm
The algorithm is adjusted to the dataset of the experiment, so the number of attributes must be modified. It calculates the approximate fractal dimension after deleting an attribute; if the new value is within the threshold the attribute can be eliminated, and the algorithm ends when no attribute is deleted from the dataset.
End of explanation
def fractal_dimension(dataset):
#Data set cardinality
N = len(dataset)
#Results list of correlation integral values
cm = []
#List of radius to test distance between points
r = [1.0];
r_index = 0;
#Executes while the sumation is greater than 0
tempSumation = 0
while True:
#Number of points that return 1 in the heaviside function
sumation = 0
#Obtaining distance between point Xi and all of the others
for j in range(N-1):
euclidean_dist_array = euclidean_distances(dataset[j].reshape(1, -1), dataset[j+1:])
sumation += np.sum(check_dist(euclidean_dist_array, r[r_index]))
if sumation <= 0 or tempSumation == sumation:
break;
cm.append((2.0 * sumation) / (N * (N - 1.0)))
r.append(r[r_index] / 2.0)
tempSumation = sumation
r_index += 1
#Deletes extra value in r
del r[-1]
#Calculate ln of both r and cm
ln_r = map(log,r)
ln_cm = map(log,cm)
#Calculate linear regresion of the points
slope_as_fd, _, _, _, _ = linregress(ln_r,ln_cm)
#Return slope as aproximate fractal dimension
return slope_as_fd
Explanation: Fractal dimension of a dataset
This algorithm calculates the approximate fractal dimension of the given dataset, which must be loaded as a numpy array.
End of explanation
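As a quick sanity check, which is not part of the original experiment, the function can be run on a small synthetic sample; for points filling a 2-D unit square the returned slope should come out close to 2:
#Illustrative check on synthetic 2-D uniform data (not part of the EEG experiment)
rng = np.random.RandomState(0)
toy = rng.rand(500, 2)
print('Approximate fractal dimension of 2-D uniform data: {}'.format(fractal_dimension(toy)))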
start_time = time()
path = "..\..\Data\The Tesis EEG\Train"
files = [f for f in listdir(path) if isfile(join(path, f))]
print(files)
Explanation: Fetching files
End of explanation
threshold_values = [0.005]
#Apply fractal dimension feature selection to all the datasets in the folder for each one of the threshold values
for i in threshold_values:
results = []
for j in files:
print(j)
stdsc = StandardScaler()
df = pd.read_csv(path + '\\' + j)
X = df.ix[:,0:14]
X_std = stdsc.fit_transform(X)
X_std = np.array(X_std)
results.append(fractal_feature_selection(X_std, i))
#Interpretation oh the obtained results
print('Threshold = {}'.format(i))
for k in results:
ref = [0,1,2,3,4,5,6,7,8,9,10,11,12,13]
for l in k:
ref[l[1]] = -1
for l in ref:
if l >= 0:
print('0'),
else:
print('1'),
print('')
print('\nElapsed time: {}'.format(time() - start_time))
Explanation: Processing the dataset
In this experiment the training data is analyzed using a threshold value of 0.005
End of explanation |
363 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
FDMS TME3
Kaggle How Much Did It Rain? II
Florian Toque & Paul Willot
Data Vize
Step1: 13.765.202 lines in train.csv
8.022.757 lines in test.csv
Load the dataset
Step2: Per wikipedia, a value of more than 421 mm/h is considered "Extreme/large hail"
If we encounter the value 327.40 meters per hour, we should probably start building Noah's ark
Therefore, it seems reasonable to drop values that are too large, treating them as outliers
Step3: Quick analysis for the sparsity by column
Step4: We see that except for the fixed features minutes_past, radardist_km and Expected the dataset is mainly sparse.
Let's transform the dataset to conduct more analysis
We regroup the data by ID
Step5: How many observations are there for each ID?
Step6: We see there are a lot of IDs with 6 or 12 observations, which means one every 5 or 10 minutes on average.
Step7: Now let's do the analysis on different subsets
Step8: Strangely, we notice that the fewer observations there are, the more it rains on average
However, more of the expected rainfall values fall below 0.5
What prediction should we make if there is no data? | Python Code:
# from __future__ import exam_success
from __future__ import absolute_import
from __future__ import print_function
%matplotlib inline
import sklearn
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import random
import pandas as pd
import scipy.stats as stats
# Sk cheats
from sklearn.cross_validation import cross_val_score # cross val
from sklearn.ensemble import RandomForestRegressor
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.preprocessing import Imputer # get rid of nan
Explanation: FDMS TME3
Kaggle How Much Did It Rain? II
Florian Toque & Paul Willot
Data Vize
End of explanation
%%time
filename = "data/reduced_train_100000.csv"
#filename = "data/reduced_test_100000.csv"
raw = pd.read_csv(filename)
raw = raw.set_index('Id')
raw['Expected'].describe()
Explanation: 13.765.202 lines in train.csv
8.022.757 lines in test.csv
Load the dataset
End of explanation
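Since the full train.csv holds roughly 13.8 million rows, it may not fit comfortably in memory; a hedged sketch of streaming it in chunks with pandas is shown below (the path to the full file and the chunk size are assumptions, and the rest of this notebook keeps using the reduced file):
# Illustrative only: stream the full file in chunks instead of loading it at once
full_train_path = "data/train.csv"  # assumed location of the full Kaggle file
n_rows = 0
for chunk in pd.read_csv(full_train_path, chunksize=10**6):
    n_rows += len(chunk)
print("Total rows in the full training file: %d" % n_rows)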
# Considering that the gauge may concentrate the rainfall, we set the cap to 1000
# Comment this line to analyse the complete dataset
l = len(raw)
raw = raw[raw['Expected'] < 1000]
print("Dropped %d (%0.2f%%)"%(l-len(raw),(l-len(raw))/float(l)*100))
raw.head(5)
Explanation: Per wikipedia, a value of more than 421 mm/h is considered "Extreme/large hail"
If we encounter the value 327.40 meters per hour, we should probably start building Noah's ark
Therefore, it seems reasonable to drop values that are too large, treating them as outliers
End of explanation
l = float(len(raw["minutes_past"]))
comp = [[1-raw[i].isnull().sum()/l , i] for i in raw.columns]
comp.sort(key=lambda x: x[0], reverse=True)
sns.barplot(zip(*comp)[0],zip(*comp)[1],palette=sns.cubehelix_palette(len(comp), start=.5, rot=-.75))
plt.title("Percentage of non NaN data")
plt.show()
Explanation: Quick analysis for the sparsity by column
End of explanation
# We select all features except for the minutes past,
# because we ignore the time repartition of the sequence for now
features_columns = list([u'Ref', u'Ref_5x5_10th',
u'Ref_5x5_50th', u'Ref_5x5_90th', u'RefComposite',
u'RefComposite_5x5_10th', u'RefComposite_5x5_50th',
u'RefComposite_5x5_90th', u'RhoHV', u'RhoHV_5x5_10th',
u'RhoHV_5x5_50th', u'RhoHV_5x5_90th', u'Zdr', u'Zdr_5x5_10th',
u'Zdr_5x5_50th', u'Zdr_5x5_90th', u'Kdp', u'Kdp_5x5_10th',
u'Kdp_5x5_50th', u'Kdp_5x5_90th'])
def getXy(raw):
selected_columns = list([ u'radardist_km', u'Ref', u'Ref_5x5_10th',
u'Ref_5x5_50th', u'Ref_5x5_90th', u'RefComposite',
u'RefComposite_5x5_10th', u'RefComposite_5x5_50th',
u'RefComposite_5x5_90th', u'RhoHV', u'RhoHV_5x5_10th',
u'RhoHV_5x5_50th', u'RhoHV_5x5_90th', u'Zdr', u'Zdr_5x5_10th',
u'Zdr_5x5_50th', u'Zdr_5x5_90th', u'Kdp', u'Kdp_5x5_10th',
u'Kdp_5x5_50th', u'Kdp_5x5_90th'])
data = raw[selected_columns]
docX, docY = [], []
for i in data.index.unique():
if isinstance(data.loc[i],pd.core.series.Series):
m = [data.loc[i].as_matrix()]
docX.append(m)
docY.append(float(raw.loc[i]["Expected"]))
else:
m = data.loc[i].as_matrix()
docX.append(m)
docY.append(float(raw.loc[i][:1]["Expected"]))
X , y = np.array(docX) , np.array(docY)
return X,y
raw.index.unique()
Explanation: We see that except for the fixed features minutes_past, radardist_km and Expected the dataset is mainly sparse.
Let's transform the dataset to conduct more analysis
We regroup the data by ID
End of explanation
X,y=getXy(raw)
tmp = []
for i in X:
tmp.append(len(i))
tmp = np.array(tmp)
sns.countplot(tmp,order=range(tmp.min(),tmp.max()+1))
plt.title("Number of ID per number of observations\n(On complete dataset)")
plt.plot()
print("Average gauge observation in mm: %0.2f"%y.mean())
Explanation: How much observations is there for each ID ?
End of explanation
pd.DataFrame(y).describe()
Explanation: We see there are a lot of IDs with 6 or 12 observations, which means one every 5 or 10 minutes on average.
End of explanation
noAnyNan = raw.loc[raw[features_columns].dropna(how='any').index.unique()]
X,y=getXy(noAnyNan)
tmp = []
for i in X:
tmp.append(len(i))
tmp = np.array(tmp)
sns.countplot(tmp,order=range(tmp.min(),tmp.max()+1))
plt.title("Number of ID per number of observations\n(On fully filled dataset)")
plt.plot()
print("Average gauge observation in mm: %0.2f"%y.mean())
pd.DataFrame(y).describe()
noFullNan = raw.loc[raw[features_columns].dropna(how='all').index.unique()]
X,y=getXy(noFullNan)
tmp = []
for i in X:
tmp.append(len(i))
tmp = np.array(tmp)
sns.countplot(tmp,order=range(tmp.min(),tmp.max()+1))
plt.title("Number of ID per number of observations\n(On partly filled dataset)")
plt.plot()
print("Average gauge observation in mm: %0.2f"%y.mean())
pd.DataFrame(y).describe()
fullNan = raw.drop(raw[features_columns].dropna(how='all').index)
X,y=getXy(fullNan)
tmp = []
for i in X:
tmp.append(len(i))
tmp = np.array(tmp)
sns.countplot(tmp,order=range(tmp.min(),tmp.max()+1))
plt.title("Number of ID per number of observations\n(On fully empty dataset)")
plt.plot()
print("Average gauge observation in mm: %0.2f"%y.mean())
pd.DataFrame(y).describe()
Explanation: Now let's do the analysis on different subsets:
On fully filled dataset
End of explanation
print("%d observations" %(len(raw)))
#print("%d fully filled, %d partly filled, %d fully empty"
# %(len(noAnyNan),len(noFullNan),len(raw)-len(noFullNan)))
print("%0.1f%% fully filled, %0.1f%% partly filled, %0.1f%% fully empty"
%(len(noAnyNan)/float(len(raw))*100,
len(noFullNan)/float(len(raw))*100,
(len(raw)-len(noFullNan))/float(len(raw))*100))
import numpy as np
from scipy.stats import kendalltau
import seaborn as sns
#sns.set(style="ticks")
rs = np.random.RandomState(11)
x = rs.gamma(2, size=1000)
y = -.5 * x + rs.normal(size=1000)
sns.jointplot(x, y, kind="hex", stat_func=kendalltau, color="#4CB391")
Explanation: Strangely, we notice that the fewer observations there are, the more it rains on average
However, more of the expected rainfall values fall below 0.5
What prediction should we make if there is no data?
End of explanation |
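One illustrative answer to that last question (an assumption on our part, not a result established by this notebook) is to fall back on the marginal distribution of the gauge readings for the fully empty subset, for example its median:
# Illustrative fallback only: median gauge value of the IDs with no radar data at all
X_empty, y_empty = getXy(fullNan)
print("Possible fallback prediction when all radar features are missing: %0.2f mm" % np.median(y_empty))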
364 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Figure 1(i)
Step3: We start by defining a few helper variables and functions which will be used for creating the plots below.
Step4: The plots are produced below.
Note that 'trans' is a list of index values associated with the H array which define the field step number when the transitions into different states occurred during the hysteresis. There were 801 steps in the hysteresis loop in total.
So in the first plot, for thickness $t=20$nm, the transitions into a different state occurred at the 219th and 319th steps in the hysteresis loop. The values of the external field at these step points were $H=0.38\times$M$_s$ and $H=2.38\times$M$_s$ respectively.
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
Explanation: Figure 1(i): Hysteresis Plots
This notebook reproduces the three hysteresis plots in figure 1(i) which appear in the paper. The show $\left< m_z\right>$ vs. $H$, where $\left< m_z\right>$ is the spatially averaged out-of-plane ($z$) component of the magnetisation and $H$ is the strength of the external field, which was applied in the $z$ direction.
The range of field values where the different states occured throughout the hysteresis (for increasing field values only) are indicated by coloured regions on the graphs.
Finally, the values for $\frac{\mathrm{d}\left< m_z \right>}{\mathrm{d} H}$ are also plotted on the graphs.
End of explanation
# define colours used in plots
dark_purple = '#8464c5'
light_purple = '#ededfb'
dark_green = '#336433'
light_green = '#a0d9a0'
white = '#FFFFFF'
olive = '#aaa460'
def get_data(t):
    '''
    Loads the hysteresis data for a specified thickness, t, from the relevant file
    and calculates the values of dm_z/dH.
    Creates arrays for the values of H (*Ms).
    Returns mz, dm_z/dH (scaled values) and the values for H:
    (mz, dmdH_scaled, H_all)
    '''
# load hysteresis data for specified thickness
mx, my, mz, energy = np.load('../data/figure_1/hysteresis_loops/'
'sim_hysteresis_FeGe_nanodisk_d150_h{}.npy'.format(int(t)))
# create arrays for the Zeeman field
H_up = np.linspace(-4, 4, 400, endpoint=False)
H_down = np.linspace(4, -4, 401)
H_all = np.concatenate((H_up, H_down), axis=0)
# calculate dm/dH from the data. dm/dH is scaled to a maximum value of 1.
dm = mz[1:-1] - mz[0:-2]
dH = H_all[1:-1] - H_all[0:-2]
dmdH = dm/dH
dmdH_scaled = dmdH/max(dmdH)
return mz, dmdH_scaled, H_all
def base_plot(mz, dmdH_scaled, H_all):
    '''
    Function to plot the mz vs. H hysteresis curves.
    Adds colour shading to the different regions occurring throughout the hysteresis.
    Requires the values of mz, dmdH_scaled and H_all, the array of field steps on the hysteresis loop.
    The plot (plt, ax) is returned.
    '''
    # plot directly from the mz, dmdH_scaled and H_all arrays passed in as arguments
    # create the figure and define an axis parameter
fig = plt.figure(figsize=(9, 5))
ax = fig.add_subplot(111)
# plot mz vs. H values.
ax.plot(H_all[0:400], mz[0:400], 'k', linewidth=2.5, label="Increasing H")
ax.plot(H_all[400:801], mz[400:801], '--', color='k', linewidth=2.5, label="Decreasing H")
# plot the dm_z/dH vs. H values
ax.plot(H_all[0:400], dmdH_scaled[0:400], 'b--', markevery=3, linewidth=1.5, label=r'dm$_z$/dH')
# add axis labels
plt.ylabel(r'm$_{\mathrm{z}}$', fontsize=20)
plt.xlabel(r'H ($\times$M$_{\mathrm{s}}$)', fontsize=20)
plt.xticks([-4, -3 ,-2, -1, 0, 1, 2, 3, 4], fontsize=18)
plt.yticks([-1, -0.5, 0, 0.5, 1], fontsize=18)
plt.xlim([-3, 3])
# add the legend
plt.legend(loc='lower right', fontsize=16)
plt.tight_layout()
return plt, ax
Explanation: We start by defining a few helper variables and functions which will be used for creating the plots below.
End of explanation
# define the thickness
t = 20
# get data
mz, dmdH_scaled, H_all = get_data(t)
plt, ax = base_plot(mz, dmdH_scaled, H_all)
trans = [219, 319]
print 'These transition values correspond to values of H={219}*Ms and H={319}*Ms\n'\
'where Ms=384 kA/m'.format(*H_all)
# add letter labels, which refer to 3D magnetisation plots
ax.text(-1.5, 0.2, '(a)', fontsize=20)
ax.text(1.35, 0.2, '(f)', fontsize=20)
ax.text(2.55, 0.2, '(g)', fontsize=20)
# Colour the different regions in the hysteresis plot
ax.axvspan(H_all[0], H_all[trans[0]], color=dark_purple)
ax.axvspan(H_all[trans[0]], H_all[trans[1]], color=light_purple)
ax.axvspan(H_all[trans[1]], 3, color=dark_green)
plt.savefig('pdfs/figure-1i-20nm.pdf')
plt.show()
t = 35
mz, dmdH_scaled, H_all = get_data(t)
plt, ax = base_plot(mz, dmdH_scaled, H_all)
trans = [207, 220, 310]
print 'These transition values correspond to values of H={207}*Ms, H={220}*Ms and H={310}*Ms\n'\
'where Ms=384 kA/m'.format(*H_all)
# add letter labels, which refer to 3D magnetisation plots
ax.text(-1.55, 0.3, '(a)', fontsize=20)
ax.text(0.1, -0.3, '(h)', fontsize=20)
ax.text(1.15, -0.3, '(f)', fontsize=20)
ax.text(2.45, -0.3, '(g)', fontsize=20)
# Colour the different regions in the hysteresis plot
ax.axvspan(H_all[0], H_all[trans[0]], color=dark_purple)
ax.axvspan(H_all[trans[0]], H_all[trans[1]], color=olive)
ax.axvspan(H_all[trans[1]], H_all[trans[2]], color=light_purple)
ax.axvspan(H_all[trans[2]], 3, color=dark_green)
plt.savefig('pdfs/figure-1i-35nm.pdf')
plt.show()
t = 55
mz, dmdH_scaled, H_all = get_data(t)
plt, ax = base_plot(mz, dmdH_scaled, H_all)
trans = [153, 176, 210, 214, 225, 304]
print 'These transition values correspond to values of H={153}*Ms, H={176}*Ms, '\
'H={210}*Ms, H={214}*Ms, H={225}*Ms and H={304}*Ms\n'\
'where Ms=384 kA/m'.format(*H_all)
# add letter labels, which refer to 3D magnetisation plots
ax.text(-2.05, 0.5, '(a)', fontsize=20)
ax.text(-0.85, 0.5, '(b)', fontsize=20)
ax.text(-0.25, 0.5, '(c)', fontsize=20)
ax.text(1.2, 0.5, '(f)', fontsize=20)
ax.text(2.4, 0.5, '(g)', fontsize=20)
ax.annotate('(d)', xy=(0.25, -0.35), xytext=(-0.25, -0.5),
arrowprops=dict(facecolor='black', shrink=0.05, width=0.5, headwidth=6, frac=0.3),
fontsize=20)
ax.annotate('(e)', xy=(0.4, -0.35), xytext=(0.6, -0.5),
arrowprops=dict(facecolor='black', shrink=0.05, width=0.5, headwidth=6, frac=0.3),
fontsize=20)
# Colour the different regions in the hysteresis plot
ax.axvspan(H_all[0], H_all[trans[0]], color=dark_purple)
ax.axvspan(H_all[trans[0]], H_all[trans[1]], color=olive)
ax.axvspan(H_all[trans[1]], H_all[trans[2]], color=light_green)
ax.axvspan(H_all[trans[2]], H_all[trans[3]], color=olive)
ax.axvspan(H_all[trans[3]], H_all[trans[4]], color=white)
ax.axvspan(H_all[trans[4]], H_all[trans[5]], color=light_purple)
ax.axvspan(H_all[trans[5]], 3, color=dark_green)
plt.savefig('pdfs/figure-1i-55nm.pdf')
plt.show()
Explanation: The plots are produced below.
Note that 'trans' is a list of index values associated with the H array which define the field step number when the transitions into different states occurred during the hysteresis. There were 801 steps in the hysteresis loop in total.
So in the first plot, for thickness $t=20$nm, the transitions into a different state occurred at the 219th and 319th steps in the hysteresis loop. The values of the external field at these step points were $H=0.38\times$M$_s$ and $H=2.38\times$M$_s$ respectively.
End of explanation |
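As a quick cross-check of the transition indices quoted above (illustrative only), a step index can be converted back to its field value with the same H arrays that get_data builds:
# Illustrative check: field value (in units of Ms) at a given step of the 801-step loop
H_up = np.linspace(-4, 4, 400, endpoint=False)
H_down = np.linspace(4, -4, 401)
H_all = np.concatenate((H_up, H_down), axis=0)
print 'Field at step 219: {:.2f}*Ms, at step 319: {:.2f}*Ms'.format(H_all[219], H_all[319])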
365 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Anna KaRNNa
In this notebook, we'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
Step1: First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network.
Step2: Let's check out the first 100 characters, make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever.
Step3: And we can see the characters encoded as integers.
Step4: Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from.
Step5: Making training mini-batches
Here is where we'll make our mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. Considering a simple example, our batches would look like this
Step6: Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps.
Step7: If you implemented get_batches correctly, the above output should look something like
```
x
[[55 63 69 22 6 76 45 5 16 35]
[ 5 69 1 5 12 52 6 5 56 52]
[48 29 12 61 35 35 8 64 76 78]
[12 5 24 39 45 29 12 56 5 63]
[ 5 29 6 5 29 78 28 5 78 29]
[ 5 13 6 5 36 69 78 35 52 12]
[63 76 12 5 18 52 1 76 5 58]
[34 5 73 39 6 5 12 52 36 5]
[ 6 5 29 78 12 79 6 61 5 59]
[ 5 78 69 29 24 5 6 52 5 63]]
y
[[63 69 22 6 76 45 5 16 35 35]
[69 1 5 12 52 6 5 56 52 29]
[29 12 61 35 35 8 64 76 78 28]
[ 5 24 39 45 29 12 56 5 63 29]
[29 6 5 29 78 28 5 78 29 45]
[13 6 5 36 69 78 35 52 12 43]
[76 12 5 18 52 1 76 5 58 52]
[ 5 73 39 6 5 12 52 36 5 78]
[ 5 29 78 12 79 6 61 5 59 63]
[78 69 29 24 5 6 52 5 63 76]]
```
although the exact numbers will be different. Check to make sure the data is shifted over one step for `y`.
Building the model
Below is where you'll build the network. We'll break it up into parts so it's easier to reason about each bit. Then we can connect them up into the whole network.
<img src="assets/charRNN.png" width=500px>
Inputs
First off we'll create our input placeholders. As usual we need placeholders for the training data and the targets. We'll also create a placeholder for dropout layers called keep_prob. This will be a scalar, that is a 0-D tensor. To make a scalar, you create a placeholder without giving it a size.
Exercise
Step8: LSTM Cell
Here we will create the LSTM cell we'll use in the hidden layer. We'll use this cell as a building block for the RNN. So we aren't actually defining the RNN here, just the type of cell we'll use in the hidden layer.
We first create a basic LSTM cell with
python
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
where num_units is the number of units in the hidden layers in the cell. Then we can add dropout by wrapping it with
python
tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
You pass in a cell and it will automatically add dropout to the inputs or outputs. Finally, we can stack up the LSTM cells into layers with tf.contrib.rnn.MultiRNNCell. With this, you pass in a list of cells and it will send the output of one cell into the next cell. For example,
python
tf.contrib.rnn.MultiRNNCell([cell]*num_layers)
This might look a little weird if you know Python well because this will create a list of the same cell object. However, TensorFlow will create different weight matrices for all cell objects. Even though this is actually multiple LSTM cells stacked on each other, you can treat the multiple layers as one cell.
We also need to create an initial cell state of all zeros. This can be done like so
python
initial_state = cell.zero_state(batch_size, tf.float32)
Exercise
Step9: RNN Output
Here we'll create the output layer. We need to connect the output of the RNN cells to a full connected layer with a softmax output. The softmax output gives us a probability distribution we can use to predict the next character, so we want this layer to have size $C$, the number of classes/characters we have in our text.
If our input has batch size $N$, number of steps $M$, and the hidden layer has $L$ hidden units, then the output is a 3D tensor with size $N \times M \times L$. The output of each LSTM cell has size $L$, we have $M$ of them, one for each sequence step, and we have $N$ sequences. So the total size is $N \times M \times L$.
We are using the same fully connected layer, the same weights, for each of the outputs. Then, to make things easier, we should reshape the outputs into a 2D tensor with shape $(M * N) \times L$. That is, one row for each sequence and step, where the values of each row are the output from the LSTM cells. We get the LSTM output as a list, lstm_output. First we need to concatenate this whole list into one array with tf.concat. Then, reshape it (with tf.reshape) to size $(M * N) \times L$.
One we have the outputs reshaped, we can do the matrix multiplication with the weights. We need to wrap the weight and bias variables in a variable scope with tf.variable_scope(scope_name) because there are weights being created in the LSTM cells. TensorFlow will throw an error if the weights created here have the same names as the weights created in the LSTM cells, which they will be default. To avoid this, we wrap the variables in a variable scope so we can give them unique names.
Exercise
Step10: Training loss
Next up is the training loss. We get the logits and targets and calculate the softmax cross-entropy loss. First we need to one-hot encode the targets, we're getting them as encoded characters. Then, reshape the one-hot targets so it's a 2D tensor with size $(MN) \times C$ where $C$ is the number of classes/characters we have. Remember that we reshaped the LSTM outputs and ran them through a fully connected layer with $C$ units. So our logits will also have size $(MN) \times C$.
Then we run the logits and targets through tf.nn.softmax_cross_entropy_with_logits and find the mean to get the loss.
Exercise
Step11: Optimizer
Here we build the optimizer. Normal RNNs have have issues gradients exploding and disappearing. LSTMs fix the disappearance problem, but the gradients can still grow without bound. To fix this, we can clip the gradients above some threshold. That is, if a gradient is larger than that threshold, we set it to the threshold. This will ensure the gradients never grow overly large. Then we use an AdamOptimizer for the learning step.
Step12: Build the network
Now we can put all the pieces together and build a class for the network. To actually run data through the LSTM cells, we will use tf.nn.dynamic_rnn. This function will pass the hidden and cell states across LSTM cells appropriately for us. It returns the outputs for each LSTM cell at each step for each sequence in the mini-batch. It also gives us the final LSTM state. We want to save this state as final_state so we can pass it to the first LSTM cell in the the next mini-batch run. For tf.nn.dynamic_rnn, we pass in the cell and initial state we get from build_lstm, as well as our input sequences. Also, we need to one-hot encode the inputs before going into the RNN.
Exercise
Step13: Hyperparameters
Here are the hyperparameters for the network.
batch_size - Number of sequences running through the network in one pass.
num_steps - Number of characters in the sequence the network is trained on. Larger is better typically, the network will learn more long range dependencies. But it takes longer to train. 100 is typically a good number here.
lstm_size - The number of units in the hidden layers.
num_layers - Number of hidden LSTM layers to use
learning_rate - Learning rate for training
keep_prob - The dropout keep probability when training. If you're network is overfitting, try decreasing this.
Here's some good advice from Andrej Karpathy on training the network. I'm going to copy it in here for your benefit, but also link to where it originally came from.
Tips and Tricks
Monitoring Validation Loss vs. Training Loss
If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular
Step14: Time for training
This is typical training code, passing inputs and targets into the network, then running the optimizer. Here we also get back the final LSTM state for the mini-batch. Then, we pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I save a checkpoint.
Here I'm saving checkpoints with the format
i{iteration number}_l{# hidden layer units}.ckpt
Exercise
Step15: Saved checkpoints
Read up on saving and loading checkpoints here
Step16: Sampling
Now that the network is trained, we'll can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one, to predict the next one. And we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
Step17: Here, pass in the path to a checkpoint and sample from the network. | Python Code:
import time
from collections import namedtuple
import numpy as np
import tensorflow as tf
Explanation: Anna KaRNNa
In this notebook, we'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
End of explanation
with open('anna.txt', 'r') as f:
text=f.read()
vocab = set(text)
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
encoded = np.array([vocab_to_int[c] for c in text], dtype=np.int32)
Explanation: First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network.
End of explanation
text[:100]
Explanation: Let's check out the first 100 characters, make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever.
End of explanation
encoded[:100]
Explanation: And we can see the characters encoded as integers.
End of explanation
len(vocab)
Explanation: Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from.
End of explanation
def get_batches(arr, n_seqs, n_steps):
'''Create a generator that returns batches of size
n_seqs x n_steps from arr.
Arguments
---------
arr: Array you want to make batches from
n_seqs: Batch size, the number of sequences per batch
n_steps: Number of sequence steps per batch
'''
# Get the number of characters per batch and number of batches we can make
characters_per_batch = n_steps * n_seqs
n_batches = len(arr) // characters_per_batch
# Keep only enough characters to make full batches
arr = arr[:n_batches*characters_per_batch]
# Reshape into n_seqs rows
arr = arr.reshape((n_seqs, -1))
for n in range(0, arr.shape[1], n_steps):
# The features
x = arr[:, n:n+n_steps]
# The targets, shifted by one
y = arr[:, n+1:n+1+n_steps]
yield x, y
Explanation: Making training mini-batches
Here is where we'll make our mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. Considering a simple example, our batches would look like this:
<img src="assets/[email protected]" width=500px>
<br>
We have our text encoded as integers as one long array in encoded. Let's create a function that will give us an iterator for our batches. I like using generator functions to do this. Then we can pass encoded into this function and get our batch generator.
The first thing we need to do is discard some of the text so we only have completely full batches. Each batch contains $N \times M$ characters, where $N$ is the batch size (the number of sequences) and $M$ is the number of steps. Then, to get the number of batches we can make from some array arr, you divide the length of arr by the batch size. Once you know the number of batches and the batch size, you can get the total number of characters to keep.
After that, we need to split arr into $N$ sequences. You can do this using arr.reshape(size) where size is a tuple containing the dimensions sizes of the reshaped array. We know we want $N$ sequences (n_seqs below), let's make that the size of the first dimension. For the second dimension, you can use -1 as a placeholder in the size, it'll fill up the array with the appropriate data for you. After this, you should have an array that is $N \times (M * K)$ where $K$ is the number of batches.
Now that we have this array, we can iterate through it to get our batches. The idea is each batch is a $N \times M$ window on the array. For each subsequent batch, the window moves over by n_steps. We also want to create both the input and target arrays. Remember that the targets are the inputs shifted over one character. You'll usually see the first input character used as the last target character, so something like this:
python
y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
where x is the input batch and y is the target batch.
The way I like to do this window is use range to take steps of size n_steps from $0$ to arr.shape[1], the total number of steps in each sequence. That way, the integers you get from range always point to the start of a batch, and each window is n_steps wide.
Exercise: Write the code for creating batches in the function below. The exercises in this notebook will not be easy. I've provided a notebook with solutions alongside this notebook. If you get stuck, checkout the solutions. The most important thing is that you don't copy and paste the code into here, type out the solution code yourself.
End of explanation
batches = get_batches(encoded, 10, 50)
x, y = next(batches)
print('x\n', x[:10, :10])
print('\ny\n', y[:10, :10])
Explanation: Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps.
End of explanation
def build_inputs(batch_size, num_steps):
''' Define placeholders for inputs, targets, and dropout
Arguments
---------
batch_size: Batch size, number of sequences per batch
num_steps: Number of sequence steps in a batch
'''
# Declare placeholders we'll feed into the graph
inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')
targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')
# Keep probability placeholder for drop out layers
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
return inputs, targets, keep_prob
Explanation: If you implemented get_batches correctly, the above output should look something like
```
x
[[55 63 69 22 6 76 45 5 16 35]
[ 5 69 1 5 12 52 6 5 56 52]
[48 29 12 61 35 35 8 64 76 78]
[12 5 24 39 45 29 12 56 5 63]
[ 5 29 6 5 29 78 28 5 78 29]
[ 5 13 6 5 36 69 78 35 52 12]
[63 76 12 5 18 52 1 76 5 58]
[34 5 73 39 6 5 12 52 36 5]
[ 6 5 29 78 12 79 6 61 5 59]
[ 5 78 69 29 24 5 6 52 5 63]]
y
[[63 69 22 6 76 45 5 16 35 35]
[69 1 5 12 52 6 5 56 52 29]
[29 12 61 35 35 8 64 76 78 28]
[ 5 24 39 45 29 12 56 5 63 29]
[29 6 5 29 78 28 5 78 29 45]
[13 6 5 36 69 78 35 52 12 43]
[76 12 5 18 52 1 76 5 58 52]
[ 5 73 39 6 5 12 52 36 5 78]
[ 5 29 78 12 79 6 61 5 59 63]
[78 69 29 24 5 6 52 5 63 76]]
```
although the exact numbers will be different. Check to make sure the data is shifted over one step for `y`.
Building the model
Below is where you'll build the network. We'll break it up into parts so it's easier to reason about each bit. Then we can connect them up into the whole network.
<img src="assets/charRNN.png" width=500px>
Inputs
First off we'll create our input placeholders. As usual we need placeholders for the training data and the targets. We'll also create a placeholder for dropout layers called keep_prob. This will be a scalar, that is a 0-D tensor. To make a scalar, you create a placeholder without giving it a size.
Exercise: Create the input placeholders in the function below.
End of explanation
def build_lstm(lstm_size, num_layers, batch_size, keep_prob):
''' Build LSTM cell.
Arguments
---------
keep_prob: Scalar tensor (tf.placeholder) for the dropout keep probability
lstm_size: Size of the hidden layers in the LSTM cells
num_layers: Number of LSTM layers
batch_size: Batch size
'''
### Build the LSTM Cell
# Use a basic LSTM cell
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
# Add dropout to the cell outputs
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell([drop]*num_layers)
initial_state = cell.zero_state(batch_size, tf.float32)
return cell, initial_state
Explanation: LSTM Cell
Here we will create the LSTM cell we'll use in the hidden layer. We'll use this cell as a building block for the RNN. So we aren't actually defining the RNN here, just the type of cell we'll use in the hidden layer.
We first create a basic LSTM cell with
python
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
where num_units is the number of units in the hidden layers in the cell. Then we can add dropout by wrapping it with
python
tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
You pass in a cell and it will automatically add dropout to the inputs or outputs. Finally, we can stack up the LSTM cells into layers with tf.contrib.rnn.MultiRNNCell. With this, you pass in a list of cells and it will send the output of one cell into the next cell. For example,
python
tf.contrib.rnn.MultiRNNCell([cell]*num_layers)
This might look a little weird if you know Python well because this will create a list of the same cell object. However, TensorFlow will create different weight matrices for all cell objects. Even though this is actually multiple LSTM cells stacked on each other, you can treat the multiple layers as one cell.
We also need to create an initial cell state of all zeros. This can be done like so
python
initial_state = cell.zero_state(batch_size, tf.float32)
Exercise: Below, implement the build_lstm function to create these LSTM cells and the initial state.
End of explanation
def build_output(lstm_output, in_size, out_size):
''' Build a softmax layer, return the softmax output and logits.
Arguments
---------
lstm_output: List of output tensors from the LSTM layer
in_size: Size of the input tensor, for example, size of the LSTM cells
out_size: Size of this softmax layer
'''
# Reshape output so it's a bunch of rows, one row for each step for each sequence.
# Concatenate lstm_output over axis 1 (the columns)
seq_output = tf.concat(lstm_output, axis=1)
# Reshape seq_output to a 2D tensor with lstm_size columns
x = tf.reshape(seq_output, [-1, in_size])
# Connect the RNN outputs to a softmax layer
with tf.variable_scope('softmax'):
# Create the weight and bias variables here
softmax_w = tf.Variable(tf.truncated_normal([in_size, out_size], stddev=0.1))
softmax_b = tf.Variable(tf.zeros(out_size))
# Since output is a bunch of rows of RNN cell outputs, logits will be a bunch
# of rows of logit outputs, one for each step and sequence
    logits = tf.matmul(x, softmax_w) + softmax_b
# Use softmax to get the probabilities for predicted characters
out = tf.nn.softmax(logits, name='predictions')
return out, logits
Explanation: RNN Output
Here we'll create the output layer. We need to connect the output of the RNN cells to a fully connected layer with a softmax output. The softmax output gives us a probability distribution we can use to predict the next character, so we want this layer to have size $C$, the number of classes/characters we have in our text.
If our input has batch size $N$, number of steps $M$, and the hidden layer has $L$ hidden units, then the output is a 3D tensor with size $N \times M \times L$. The output of each LSTM cell has size $L$, we have $M$ of them, one for each sequence step, and we have $N$ sequences. So the total size is $N \times M \times L$.
We are using the same fully connected layer, the same weights, for each of the outputs. Then, to make things easier, we should reshape the outputs into a 2D tensor with shape $(M * N) \times L$. That is, one row for each sequence and step, where the values of each row are the output from the LSTM cells. We get the LSTM output as a list, lstm_output. First we need to concatenate this whole list into one array with tf.concat. Then, reshape it (with tf.reshape) to size $(M * N) \times L$.
Once we have the outputs reshaped, we can do the matrix multiplication with the weights. We need to wrap the weight and bias variables in a variable scope with tf.variable_scope(scope_name) because there are weights being created in the LSTM cells. TensorFlow will throw an error if the weights created here have the same names as the weights created in the LSTM cells, which they will by default. To avoid this, we wrap the variables in a variable scope so we can give them unique names.
Exercise: Implement the output layer in the function below.
End of explanation
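To make the reshape described above concrete, here is a tiny NumPy illustration with assumed toy sizes (N=2 sequences, M=3 steps, L=4 hidden units):
# Toy illustration: N x M x L LSTM outputs flattened to (N*M) x L rows
toy_output = np.zeros((2, 3, 4))  # batch_size x num_steps x lstm_size
print(toy_output.reshape((-1, 4)).shape)  # (6, 4): one row per sequence per step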
def build_loss(logits, targets, lstm_size, num_classes):
''' Calculate the loss from the logits and the targets.
Arguments
---------
logits: Logits from final fully connected layer
targets: Targets for supervised learning
lstm_size: Number of LSTM hidden units
num_classes: Number of classes in targets
'''
# One-hot encode targets and reshape to match logits, one row per sequence per step
    y_one_hot = tf.one_hot(targets, num_classes)
    y_reshaped = tf.reshape(y_one_hot, logits.get_shape())
    # Softmax cross entropy loss, averaged over all sequence steps
    loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped)
    loss = tf.reduce_mean(loss)
return loss
Explanation: Training loss
Next up is the training loss. We get the logits and targets and calculate the softmax cross-entropy loss. First we need to one-hot encode the targets, we're getting them as encoded characters. Then, reshape the one-hot targets so it's a 2D tensor with size $(MN) \times C$ where $C$ is the number of classes/characters we have. Remember that we reshaped the LSTM outputs and ran them through a fully connected layer with $C$ units. So our logits will also have size $(MN) \times C$.
Then we run the logits and targets through tf.nn.softmax_cross_entropy_with_logits and find the mean to get the loss.
Exercise: Implement the loss calculation in the function below.
End of explanation
def build_optimizer(loss, learning_rate, grad_clip):
''' Build optmizer for training, using gradient clipping.
Arguments:
loss: Network loss
learning_rate: Learning rate for optimizer
'''
# Optimizer for training, using gradient clipping to control exploding gradients
tvars = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(tf.gradients(loss, tvars), grad_clip)
train_op = tf.train.AdamOptimizer(learning_rate)
optimizer = train_op.apply_gradients(zip(grads, tvars))
return optimizer
Explanation: Optimizer
Here we build the optimizer. Normal RNNs have issues with gradients exploding and disappearing. LSTMs fix the disappearance problem, but the gradients can still grow without bound. To fix this, we can clip the gradients above some threshold. That is, if a gradient is larger than that threshold, we set it to the threshold. This will ensure the gradients never grow overly large. Then we use an AdamOptimizer for the learning step.
End of explanation
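To make the clipping step more concrete, here is a small NumPy-only sketch of what global-norm clipping does (an illustration of the idea, not the TensorFlow implementation itself): if the combined norm of all gradients exceeds the threshold, every gradient is rescaled by the same factor, otherwise they pass through unchanged.
import numpy as np

def clip_by_global_norm_sketch(grads, clip_norm=5.0):
    # Global norm across all gradient arrays
    global_norm = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    # Rescale everything by the same factor only when the norm exceeds the threshold
    scale = clip_norm / max(global_norm, clip_norm)
    return [g * scale for g in grads], global_norm

grads = [np.array([3.0, 4.0]), np.array([12.0])]             # global norm is 13
clipped, norm = clip_by_global_norm_sketch(grads, clip_norm=5.0)
print(norm)     # 13.0
print(clipped)  # same directions, total norm rescaled down to 5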
class CharRNN:
def __init__(self, num_classes, batch_size=64, num_steps=50,
lstm_size=128, num_layers=2, learning_rate=0.001,
grad_clip=5, sampling=False):
# When we're using this network for sampling later, we'll be passing in
# one character at a time, so providing an option for that
if sampling == True:
batch_size, num_steps = 1, 1
else:
batch_size, num_steps = batch_size, num_steps
tf.reset_default_graph()
# Build the input placeholder tensors
self.inputs, self.targets, self.keep_prob = build_inputs(batch_size, num_steps)
# Build the LSTM cell
        cell, self.initial_state = build_lstm(lstm_size, num_layers, batch_size, self.keep_prob)
### Run the data through the RNN layers
# First, one-hot encode the input tokens
x_one_hot = tf.one_hot(self.inputs, num_classes)
# Run each sequence step through the RNN with tf.nn.dynamic_rnn
outputs, state = tf.nn.dynamic_rnn(cell, x_one_hot, initial_state=self.initial_state)
self.final_state = state
# Get softmax predictions and logits
self.prediction, self.logits = build_output(outputs, lstm_size, num_classes)
# Loss and optimizer (with gradient clipping)
        self.loss = build_loss(self.logits, self.targets, lstm_size, num_classes)
        self.optimizer = build_optimizer(self.loss, learning_rate, grad_clip)
Explanation: Build the network
Now we can put all the pieces together and build a class for the network. To actually run data through the LSTM cells, we will use tf.nn.dynamic_rnn. This function will pass the hidden and cell states across LSTM cells appropriately for us. It returns the outputs for each LSTM cell at each step for each sequence in the mini-batch. It also gives us the final LSTM state. We want to save this state as final_state so we can pass it to the first LSTM cell in the next mini-batch run. For tf.nn.dynamic_rnn, we pass in the cell and initial state we get from build_lstm, as well as our input sequences. Also, we need to one-hot encode the inputs before going into the RNN.
Exercise: Use the functions you've implemented previously and tf.nn.dynamic_rnn to build the network.
End of explanation
batch_size = 10 # Sequences per batch
num_steps = 50 # Number of sequence steps per batch
lstm_size = 128 # Size of hidden layers in LSTMs
num_layers = 2 # Number of LSTM layers
learning_rate = 0.01 # Learning rate
keep_prob = 0.5 # Dropout keep probability
Explanation: Hyperparameters
Here are the hyperparameters for the network.
batch_size - Number of sequences running through the network in one pass.
num_steps - Number of characters in the sequence the network is trained on. Larger is better typically, the network will learn more long range dependencies. But it takes longer to train. 100 is typically a good number here.
lstm_size - The number of units in the hidden layers.
num_layers - Number of hidden LSTM layers to use
learning_rate - Learning rate for training
keep_prob - The dropout keep probability when training. If your network is overfitting, try decreasing this.
Here's some good advice from Andrej Karpathy on training the network. I'm going to copy it in here for your benefit, but also link to where it originally came from.
Tips and Tricks
Monitoring Validation Loss vs. Training Loss
If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular:
If your training loss is much lower than validation loss then this means the network might be overfitting. Solutions to this are to decrease your network size, or to increase dropout. For example you could try dropout of 0.5 and so on.
If your training/validation losses are about equal then your model is underfitting. Increase the size of your model (either the number of layers or the raw number of neurons per layer)
Approximate number of parameters
The two most important parameters that control the model are lstm_size and num_layers. I would advise that you always use num_layers of either 2/3. The lstm_size can be adjusted based on how much data you have. The two important quantities to keep track of here are:
The number of parameters in your model. This is printed when you start training.
The size of your dataset. 1MB file is approximately 1 million characters.
These two should be about the same order of magnitude. It's a little tricky to tell. Here are some examples:
I have a 100MB dataset and I'm using the default parameter settings (which currently print 150K parameters). My data size is significantly larger (100 mil >> 0.15 mil), so I expect to heavily underfit. I am thinking I can comfortably afford to make lstm_size larger.
I have a 10MB dataset and running a 10 million parameter model. I'm slightly nervous and I'm carefully monitoring my validation loss. If it's larger than my training loss then I may want to try to increase dropout a bit and see if that helps the validation loss.
Best models strategy
The winning strategy to obtaining very good models (if you have the compute time) is to always err on making the network larger (as large as you're willing to wait for it to compute) and then try different dropout values (between 0,1). Whatever model has the best validation performance (the loss, written in the checkpoint filename, low is good) is the one you should use in the end.
It is very common in deep learning to run many different models with many different hyperparameter settings, and in the end take whatever checkpoint gave the best validation performance.
By the way, the size of your training and validation splits are also parameters. Make sure you have a decent amount of data in your validation set or otherwise the validation performance will be noisy and not very informative.
End of explanation
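As a rough sanity check on the parameter-count advice above, you can estimate the size of the model by hand with the standard LSTM formula (four gates, each with input, recurrent and bias weights) plus the softmax layer. The helper below is only a sketch; the vocabulary size of 83 is an assumed value for this corpus, so treat the result as an order-of-magnitude estimate.
def approx_char_rnn_params(vocab_size, lstm_size, num_layers):
    total = 0
    layer_input = vocab_size                      # first layer sees one-hot characters
    for _ in range(num_layers):
        # 4 gates, each with an (input + recurrent) weight matrix and a bias vector
        total += 4 * ((layer_input + lstm_size) * lstm_size + lstm_size)
        layer_input = lstm_size                   # deeper layers take the previous layer's output
    total += lstm_size * vocab_size + vocab_size  # softmax weights and bias
    return total

vocab_size = 83  # assumption: roughly the number of distinct characters in the corpus
print(approx_char_rnn_params(vocab_size, lstm_size=128, num_layers=2))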
epochs = 20
# Save every N iterations
save_every_n = 200
model = CharRNN(len(vocab), batch_size=batch_size, num_steps=num_steps,
lstm_size=lstm_size, num_layers=num_layers,
learning_rate=learning_rate)
saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# Use the line below to load a checkpoint and resume training
#saver.restore(sess, 'checkpoints/______.ckpt')
counter = 0
for e in range(epochs):
# Train network
new_state = sess.run(model.initial_state)
loss = 0
for x, y in get_batches(encoded, batch_size, num_steps):
counter += 1
start = time.time()
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: keep_prob,
model.initial_state: new_state}
batch_loss, new_state, _ = sess.run([model.loss,
model.final_state,
model.optimizer],
feed_dict=feed)
end = time.time()
print('Epoch: {}/{}... '.format(e+1, epochs),
'Training Step: {}... '.format(counter),
'Training loss: {:.4f}... '.format(batch_loss),
'{:.4f} sec/batch'.format((end-start)))
if (counter % save_every_n == 0):
saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
Explanation: Time for training
This is typical training code, passing inputs and targets into the network, then running the optimizer. Here we also get back the final LSTM state for the mini-batch. Then, we pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I save a checkpoint.
Here I'm saving checkpoints with the format
i{iteration number}_l{# hidden layer units}.ckpt
Exercise: Set the hyperparameters above to train the network. Watch the training loss, it should be consistently dropping. Also, I highly advise running this on a GPU.
End of explanation
tf.train.get_checkpoint_state('checkpoints')
Explanation: Saved checkpoints
Read up on saving and loading checkpoints here: https://www.tensorflow.org/programmers_guide/variables
End of explanation
def pick_top_n(preds, vocab_size, top_n=5):
p = np.squeeze(preds)
p[np.argsort(p)[:-top_n]] = 0
p = p / np.sum(p)
c = np.random.choice(vocab_size, 1, p=p)[0]
return c
def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
samples = [c for c in prime]
model = CharRNN(len(vocab), lstm_size=lstm_size, sampling=True)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, checkpoint)
new_state = sess.run(model.initial_state)
for c in prime:
x = np.zeros((1, 1))
x[0,0] = vocab_to_int[c]
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.prediction, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
for i in range(n_samples):
x[0,0] = c
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.prediction, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
return ''.join(samples)
Explanation: Sampling
Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one to predict the next one. And we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
End of explanation
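To see what pick_top_n does, here is a tiny standalone demo on a made-up probability vector (the numbers are invented purely for illustration): everything outside the top five probabilities is zeroed, the rest is renormalized, and the sampled index can only come from those five.
import numpy as np

fake_preds = np.array([[0.02, 0.30, 0.05, 0.25, 0.01, 0.15, 0.12, 0.04, 0.03, 0.03]])
p = np.squeeze(fake_preds).copy()
p[np.argsort(p)[:-5]] = 0          # zero out everything but the 5 largest probabilities
p = p / np.sum(p)                  # renormalize so the remaining values sum to 1
print(np.round(p, 3))              # only five non-zero entries survive
print(np.random.choice(len(p), 1, p=p)[0])   # sampled character index comes from those five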
tf.train.latest_checkpoint('checkpoints')
checkpoint = tf.train.latest_checkpoint('checkpoints')
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i600_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i1200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
Explanation: Here, pass in the path to a checkpoint and sample from the network.
End of explanation |
366 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Adding Stellar Data to STELLAB
Contributors
Step1: The goal is to add your data to STELLAB to produce plots such as the plot below
Step2: Adding your own data.
Step3: Uploading data
coming soon... | Python Code:
%matplotlib nbagg
import matplotlib.pyplot as plt
from NuPyCEE import stellab as st
Explanation: Adding Stellar Data to STELLAB
Contributors: Christian Ritter
In construction
End of explanation
s1=st.stellab()
xaxis='[Fe/H]'
yaxis='[O/Fe]'
s1.plot_spectro(fig=1,xaxis=xaxis,galaxy='carina')
plt.xlim(-4.5,1),plt.ylim(-1.5,1.5)
Explanation: The goal is to add your data to STELLAB to produce plots such as the plot below:
End of explanation
from IPython.display import YouTubeVideo
YouTubeVideo("R3_EZlXTFBo")
s1_new=st.stellab()
# available data
# s1_new.list_ref_papers()
s1_new.plot_spectro(fig=2,yaxis=yaxis,
obs=['stellab_data/carina_data/Fabrizio_et_al_2015_stellab'],show_err=True)
plt.xlim(-4,0),plt.ylim(-2,2)
Explanation: Adding your own data.
End of explanation
#from IPython.display import YouTubeVideo
#YouTubeVideo("Pi9NpxAvYSs")
Explanation: Uploading data
coming soon...
End of explanation |
367 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using GraphLab Create with Apache Spark
In this notebook we demonstrate how to use Apache Spark with GraphLab Create. For this notebook, we will utilize Apache Spark as a platform for doing large-scale data engineering.
The project is to learn a topic model using Wikipedia data, to see what topics are most represented in Wikipedia. The parts required for this project are
Step1: Step 2
Step2: Now that we have the SparkContext setup, let's download the Wikipedia data as an RDD. For this notebook we will only use a subset of the data, but there is code below to use the full dataset (which is about ~5GB).
Download the Wikipedia Data
Step3: Now that the rdd is defined, let's see the first few lines to confirm it is structured the way we want.
Step4: This looks good, it has a document on each line, with a subsequent index value. Since we want to split documents by space, it is important to remove any extra spaces that exist. When examining the document closely we see there are extra spaces, so let's clean those up and split each document by space. Also, let's put the index for the document as the first entry, so we have an 'id' key and then the words.
Step5: Now each document is a tuple of (index, list of words). Let's transform that into a list of (index, word) tuples instead. We will use flatMap for that.
Step6: Great, now we have things formatted the way we want, let's start aggregating to generate word counts per document.
Step7: And finally, let's create a dictionary with word as the key and count as the value.
Step8: Step 3
Step9: Looking at the most frequent words in the bag of words, it is obvious that 'stop words' are most prevalent. Let's remove them with one line, using GraphLab Create.
Step10: Great, now the most frequent words are no longer stop words. We are ready to train a Topic Model on the data.
Step 4
Step11: Step 5
Step12: Well, that is just showing predicted topic_id. Instead, let's add a column with the topic_id we just predicted, and create that as our results SFrame.
Step13: Now let's see which topic ids appear most frequently in this set of Wikipedia data
Step14: Looking at this tells us that topic ids 22 and 6 are more common in this dataset. Let's find out what words are associated with those topics.
Step15: Interesting. Wonder what this set of documents is about. Let's get the full list of topic words learned by the model.
Step16: That SFrame is less useful, let's groupby all the same topic ids and create a list of words.
Step17: This is the appropriate format for recording the topics learned, by topic_id.
Great, so now we have the results SFrame and the Topics SFrame, both of which can be saved back as Spark RDDs.
Step 6 | Python Code:
# To use GraphLab Create within PySpark, you need to set the $SPARK_HOME and $PYTHONPATH
# environment variables on the driver. A common usage:
!export SPARK_HOME="your-spark-home-dir"
!export PYTHONPATH=$SPARK_HOME/python/:$SPARK_HOME/python/lib/py4j-0.8.2.1-src.zip:$PYTHONPATH
Explanation: Using GraphLab Create with Apache Spark
In this notebook we demonstrate how to use Apache Spark with GraphLab Create. For this notebook, we will utilize Apache Spark as a platform for doing large-scale data engineering.
The project is to learn a topic model using Wikipedia data, to see what topics are most represented in Wikipedia. The parts required for this project are:
1. Set up environment
1. Turn Raw Wikipedia text into Bag of Words, Using Spark
1. Ingest Spark RDD as SFrame
1. Learn Topic Model
1. Explore topics
1. Save Results to Spark RDD
Note: Setting up Spark and PySpark are out of scope for this notebook, but are required for following along.
By using PySpark and GraphLab Create together this notebook shows how easy it is to use both systems together. If you are interested in details of how Apache Spark integration with GraphLab Create happens, check out our open-source spark-sframe package.
Note: This notebook requires GraphLab Create >=1.7 and Spark >=1.3
Step 1: Set up environment
There are many different ways to configure PySpark, but in order to use it in a standalone Python script (not in the pyspark shell or using spark-submit) a handful of environment variables need to be set up correctly. The most convenient way to set these environment variables is by setting them in the shell configuration (ex. ~/.bash_profile or ~/.zshrc). For instructive purposes, here are the variables that need to be set.
Note: Running this notebook as is may not configure these environment variables correctly.
End of explanation
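If you would rather avoid editing your shell configuration, one possible alternative (shown only as a sketch, with placeholder paths you need to adapt) is to point this Python process at your Spark installation before importing pyspark:
import os
import sys

spark_home = '/path/to/your/spark'   # placeholder -- set this to your own $SPARK_HOME
os.environ['SPARK_HOME'] = spark_home

# Make the PySpark modules importable from this interpreter
sys.path.insert(0, os.path.join(spark_home, 'python'))
sys.path.insert(0, os.path.join(spark_home, 'python', 'lib', 'py4j-0.8.2.1-src.zip'))

from pyspark import SparkContext     # should now import without the shell exports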
import graphlab as gl
from pyspark import SparkContext
import os
import requests
# Set up the SparkContext object
# this can be 'local' or 'yarn-client' in PySpark
# Remember if using yarn-client then all the paths should be accessible
# by all nodes in the cluster.
sc = SparkContext('local')
Explanation: Step 2: Turn Raw Wikipedia text into Bag of Words, Using Spark
End of explanation
import requests
def download_file(url, save_path):
local_filename = url.split('/')[-1]
r = requests.get(url, stream=True)
with open(os.path.join(save_path, local_filename), 'wb') as f:
for chunk in r.iter_content(chunk_size=1024):
if chunk: # filter out keep-alive new chunks
f.write(chunk)
f.flush()
return local_filename
# File to download
file_list = [16]
# If you want to use this entire Wikipedia dataset, uncomment the following line.
# This will download ~5GB of data split over 36 files.
# file_list = range(37)
# Download location for Wikipedia data
save_path = '/tmp/wikipedia'
# Actually download the files, if the location doesn't exist yet.
if not os.path.exists(save_path):
os.mkdir(save_path)
for idx in file_list:
url = 'https://static.turi.com/datasets/wikipedia/raw/w%d' % idx
print "Downloading '%s', saving to: '%s'" % (url, save_path)
download_file(url, save_path) # This will download 146MB of data.
rawRdd = sc.textFile('file:///%s/' % save_path).zipWithIndex()
Explanation: Now that we have the SparkContext setup, let's download the Wikipedia data as an RDD. For this notebook we will only use a subset of the data, but there is code below to use the full dataset (which is about ~5GB).
Download the Wikipedia Data
End of explanation
rawRdd.take(1)
Explanation: Now that the rdd is defined, let's see the first few lines to confirm it is structured the way we want.
End of explanation
# replace multiple spaces with a single space using re.sub, then trim whitespace and split on single space.
import re
splitRdd = rawRdd.map(lambda (a,b): (b, re.sub("[ ]+", " ", a).strip().split(" ")))
splitRdd.take(1)
Explanation: This looks good, it has a document on each line, with a subsequent index value. Since we want to split documents by space, it is important to remove any extra spaces that exist. When examining the document closely we see there are extra spaces, so let's clean those up and split each document by space. Also, let's put the index for the document as the first entry, so we have an 'id' key and then the words.
End of explanation
zipRdd = splitRdd.flatMap(lambda (a,b): zip([a] * len(b),b))
zipRdd.take(1)
Explanation: Now each document is a tuple of (index, list of words). Let's transform that into a list of (index, word) tuples instead. We will use flatMap for that.
End of explanation
wordRdd = zipRdd.map(lambda composite_word: (composite_word, 1)).reduceByKey(lambda a, b: a + b)
wordRdd.take(2)
Explanation: Great, now we have things formatted the way we want, let's start aggregating to generate word counts per document.
End of explanation
bagRdd = wordRdd.map(lambda (a,b):(a[0],(a[1],b))).groupByKey().map(lambda (a,b):(a,{word_count[0]:word_count[1] for word_count in b.data}))
bagRdd.take(1)
Explanation: And finally, let's create a dictionary with word as the key and count as the value.
End of explanation
data = gl.SFrame.from_rdd(bagRdd,sc)
data = data.unpack('X1')
data.rename({'X1.0':'id','X1.1':'bag_of_words'})
gl.canvas.set_target('ipynb')
data.show()
Explanation: Step 3: Ingest Spark RDD as SFrame
Now that we have the raw Wikipedia text converted into a bag-of-words using Spark, it is easy to ingest that into GraphLab Create as an SFrame.
End of explanation
# Trim out stopwords
data['bag_of_words'] = data['bag_of_words'].dict_trim_by_keys(gl.text_analytics.stopwords(), exclude=True)
data.show()
Explanation: Looking at the most frequent words in the bag of words, it is obvious that 'stop words' are most prevalent. Let's remove them with one line, using GraphLab Create.
End of explanation
# If running on entire dataset, might want to increase num_topics and num_iterations
model = gl.topic_model.create(data['bag_of_words'], num_topics=30, num_iterations=50)
Explanation: Great, now the most frequent words are no longer stop words. We are ready to train a Topic Model on the data.
Step 4: Learn Topic Model
Once we have an SFrame, training a Topic Model is one line. Here we ask the model to learn 30 topics, and to train for 50 iterations.
End of explanation
pred = model.predict(data['bag_of_words'])
pred
Explanation: Step 5: Explore the Topics
First, let's get topic ids predicted from the model.
End of explanation
results = gl.SFrame({'doc_id':data['id'], 'topic_id':pred, 'bag_of_words':data['bag_of_words']})
results.swap_columns('doc_id', 'bag_of_words') # better SFrame formatting
results.print_rows(max_column_width=60)
Explanation: Well, that is just showing predicted topic_id. Instead, let's add a column with the topic_id we just predicted, and create that as our results SFrame.
End of explanation
gl.canvas.set_target('ipynb')
results['topic_id'].show('Categorical')
Explanation: Now let's see which topic ids appear most frequently in this set of Wikipedia data
End of explanation
model.get_topics([22], output_type='topic_words').print_rows(max_column_width=100)
model.get_topics([6], output_type='topic_words').print_rows(max_column_width=100)
Explanation: Looking at this tells us that topic ids 22 and 6 are more common in this dataset. Let's find out what words are associated with those topics.
End of explanation
topics = model.get_topics()
topics = topics.rename({'topic':'topic_id'})
topics
Explanation: Interesting. Wonder what this set of documents is about. Let's get the full list of topic words learned by the model.
End of explanation
topics.groupby(['topic_id'], {'topic_words':gl.aggregate.CONCAT("word")}).print_rows(max_column_width=80)
Explanation: That SFrame is less useful, let's groupby all the same topic ids and create a list of words.
End of explanation
# to save the predictions as an RDD
predictions_rdd = data.to_rdd(sc)
predictions_rdd.saveAsTextFile('file:///tmp/predictions.rdd')
# save the topic_ids with their topic words
topics_rdd = topics.to_rdd(sc)
topics_rdd.saveAsTextFile('file:///tmp/topics.rdd')
Explanation: This is the appropriate format for recording the topics learned, by topic_id.
Great, so now we have the results SFrame and the Topics SFrame, both of which can be saved back as Spark RDDs.
Step 6: Save Results to Spark RDD
So now we have all the results ready as two SFrames. The first has the bag-of-words with the predicted topic_id, and the second has the topic words for each topic_id. These are both tables we can save as Spark RDDs, so subsequent Spark programs can utilize the findings from the Topic Model.
End of explanation |
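As a quick check that the export worked, a later Spark job (or this same session) could read the saved results back in; saveAsTextFile writes a directory of part files, and sc.textFile accepts that directory path directly. This is just a sketch reusing the paths from above.
reloaded_predictions = sc.textFile('file:///tmp/predictions.rdd')
reloaded_topics = sc.textFile('file:///tmp/topics.rdd')
print(reloaded_predictions.take(2))
print(reloaded_topics.take(2))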
368 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Deep Q-learning
In this notebook, we'll build a neural network that can learn to play games through reinforcement learning. More specifically, we'll use Q-learning to train an agent to play a game called Cart-Pole. In this game, a freely swinging pole is attached to a cart. The cart can move to the left and right, and the goal is to keep the pole upright as long as possible.
We can simulate this game using OpenAI Gym. First, let's check out how OpenAI Gym works. Then, we'll get into training an agent to play the Cart-Pole game.
Step1: Note
Step2: We interact with the simulation through env. To show the simulation running, you can use env.render() to render one frame. Passing in an action as an integer to env.step will generate the next step in the simulation. You can see how many actions are possible from env.action_space and to get a random action you can use env.action_space.sample(). This is general to all Gym games. In the Cart-Pole game, there are two possible actions, moving the cart left or right. So there are two actions we can take, encoded as 0 and 1.
Run the code below to watch the simulation run.
Step3: To shut the window showing the simulation, use env.close().
If you ran the simulation above, we can look at the rewards
Step4: The game resets after the pole has fallen past a certain angle. For each frame while the simulation is running, it returns a reward of 1.0. The longer the game runs, the more reward we get. Then, our network's goal is to maximize the reward by keeping the pole vertical. It will do this by moving the cart to the left and the right.
Q-Network
We train our Q-learning agent using the Bellman Equation
Step5: Experience replay
Reinforcement learning algorithms can have stability issues due to correlations between states. To reduce correlations when training, we can store the agent's experiences and later draw a random mini-batch of those experiences to train on.
Here, we'll create a Memory object that will store our experiences, our transitions $<s, a, r, s'>$. This memory will have a maxmium capacity, so we can keep newer experiences in memory while getting rid of older experiences. Then, we'll sample a random mini-batch of transitions $<s, a, r, s'>$ and train on those.
Below, I've implemented a Memory object. If you're unfamiliar with deque, this is a double-ended queue. You can think of it like a tube open on both sides. You can put objects in either side of the tube. But if it's full, adding anything more will push an object out the other side. This is a great data structure to use for the memory buffer.
Step6: Exploration - Exploitation
To learn about the environment and rules of the game, the agent needs to explore by taking random actions. We'll do this by choosing a random action with some probability $\epsilon$ (epsilon). That is, with some probability $\epsilon$ the agent will make a random action and with probability $1 - \epsilon$, the agent will choose an action from $Q(s,a)$. This is called an $\epsilon$-greedy policy.
At first, the agent needs to do a lot of exploring. Later when it has learned more, the agent can favor choosing actions based on what it has learned. This is called exploitation. We'll set it up so the agent is more likely to explore early in training, then more likely to exploit later in training.
Q-Learning training algorithm
Putting all this together, we can list out the algorithm we'll use to train the network. We'll train the network in episodes. One episode is one simulation of the game. For this game, the goal is to keep the pole upright for 195 frames. So we can start a new episode once meeting that goal. The game ends if the pole tilts over too far, or if the cart moves too far the left or right. When a game ends, we'll start a new episode. Now, to train the agent
Step7: Populate the experience memory
Here I'm re-initializing the simulation and pre-populating the memory. The agent is taking random actions and storing the transitions in memory. This will help the agent with exploring the game.
Step8: Training
Below we'll train our agent. If you want to watch it train, uncomment the env.render() line. This is slow because it's rendering the frames slower than the network can train. But, it's cool to watch the agent get better at the game.
Step9: Visualizing training
Below I'll plot the total rewards for each episode. I'm plotting the rolling average too, in blue.
Step10: Testing
Let's checkout how our trained agent plays the game. | Python Code:
import gym
import tensorflow as tf
import numpy as np
Explanation: Deep Q-learning
In this notebook, we'll build a neural network that can learn to play games through reinforcement learning. More specifically, we'll use Q-learning to train an agent to play a game called Cart-Pole. In this game, a freely swinging pole is attached to a cart. The cart can move to the left and right, and the goal is to keep the pole upright as long as possible.
We can simulate this game using OpenAI Gym. First, let's check out how OpenAI Gym works. Then, we'll get into training an agent to play the Cart-Pole game.
End of explanation
# Create the Cart-Pole game environment
env = gym.make('CartPole-v0')
Explanation: Note: Make sure you have OpenAI Gym cloned into the same directory with this notebook. I've included gym as a submodule, so you can run git submodule --init --recursive to pull the contents into the gym repo.
End of explanation
env.reset()
rewards = []
actions = [np.random.choice(2) for _ in range(100)]
for _ in range(1000):
env.render()
state, reward, done, info = env.step(env.action_space.sample()) # take a random action
rewards.append(reward)
if done:
rewards = []
env.reset()
env.close()
Explanation: We interact with the simulation through env. To show the simulation running, you can use env.render() to render one frame. Passing in an action as an integer to env.step will generate the next step in the simulation. You can see how many actions are possible from env.action_space and to get a random action you can use env.action_space.sample(). This is general to all Gym games. In the Cart-Pole game, there are two possible actions, moving the cart left or right. So there are two actions we can take, encoded as 0 and 1.
Run the code below to watch the simulation run.
End of explanation
print(rewards[-20:])
Explanation: To shut the window showing the simulation, use env.close().
If you ran the simulation above, we can look at the rewards:
End of explanation
class QNetwork:
def __init__(self, learning_rate=0.01, state_size=4,
action_size=2, hidden_size=10,
name='QNetwork'):
# state inputs to the Q-network
with tf.variable_scope(name):
self.inputs_ = tf.placeholder(tf.float32, [None, state_size], name='inputs')
# One hot encode the actions to later choose the Q-value for the action
self.actions_ = tf.placeholder(tf.int32, [None], name='actions')
one_hot_actions = tf.one_hot(self.actions_, action_size)
# Target Q values for training
self.targetQs_ = tf.placeholder(tf.float32, [None], name='target')
# ReLU hidden layers
self.fc1 = tf.contrib.layers.fully_connected(self.inputs_, hidden_size)
self.fc2 = tf.contrib.layers.fully_connected(self.fc1, hidden_size)
# Linear output layer
self.output = tf.contrib.layers.fully_connected(self.fc2, action_size,
activation_fn=None)
### Train with loss (targetQ - Q)^2
# output has length 2, for two actions. This next line chooses
# one value from output (per row) according to the one-hot encoded actions.
self.Q = tf.reduce_sum(tf.multiply(self.output, one_hot_actions), axis=1)
self.loss = tf.reduce_mean(tf.square(self.targetQs_ - self.Q))
self.opt = tf.train.AdamOptimizer(learning_rate).minimize(self.loss)
Explanation: The game resets after the pole has fallen past a certain angle. For each frame while the simulation is running, it returns a reward of 1.0. The longer the game runs, the more reward we get. Then, our network's goal is to maximize the reward by keeping the pole vertical. It will do this by moving the cart to the left and the right.
Q-Network
We train our Q-learning agent using the Bellman Equation:
$$
Q(s, a) = r + \gamma \max{Q(s', a')}
$$
where $s$ is a state, $a$ is an action, and $s'$ is the next state from state $s$ and action $a$.
Before we used this equation to learn values for a Q-table. However, for this game there are a huge number of states available. The state has four values: the position and velocity of the cart, and the position and velocity of the pole. These are all real-valued numbers, so ignoring floating point precisions, you practically have infinite states. Instead of using a table then, we'll replace it with a neural network that will approximate the Q-table lookup function.
<img src="assets/deep-q-learning.png" width=450px>
Now, our Q value, $Q(s, a)$ is calculated by passing in a state to the network. The output will be Q-values for each available action, with fully connected hidden layers.
<img src="assets/q-network.png" width=550px>
As I showed before, we can define our targets for training as $\hat{Q}(s,a) = r + \gamma \max{Q(s', a')}$. Then we update the weights by minimizing $(\hat{Q}(s,a) - Q(s,a))^2$.
For this Cart-Pole game, we have four inputs, one for each value in the state, and two outputs, one for each action. To get $\hat{Q}$, we'll first choose an action, then simulate the game using that action. This will get us the next state, $s'$, and the reward. With that, we can calculate $\hat{Q}$ then pass it back into the $Q$ network to run the optimizer and update the weights.
Below is my implementation of the Q-network. I used two fully connected layers with ReLU activations. Two seems to be good enough, three might be better. Feel free to try it out.
End of explanation
from collections import deque
class Memory():
def __init__(self, max_size = 1000):
self.buffer = deque(maxlen=max_size)
def add(self, experience):
self.buffer.append(experience)
def sample(self, batch_size):
idx = np.random.choice(np.arange(len(self.buffer)),
size=batch_size,
replace=False)
return [self.buffer[ii] for ii in idx]
Explanation: Experience replay
Reinforcement learning algorithms can have stability issues due to correlations between states. To reduce correlations when training, we can store the agent's experiences and later draw a random mini-batch of those experiences to train on.
Here, we'll create a Memory object that will store our experiences, our transitions $<s, a, r, s'>$. This memory will have a maxmium capacity, so we can keep newer experiences in memory while getting rid of older experiences. Then, we'll sample a random mini-batch of transitions $<s, a, r, s'>$ and train on those.
Below, I've implemented a Memory object. If you're unfamiliar with deque, this is a double-ended queue. You can think of it like a tube open on both sides. You can put objects in either side of the tube. But if it's full, adding anything more will push an object out the other side. This is a great data structure to use for the memory buffer.
End of explanation
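Here is a quick illustration of the buffer behaviour described above, using the Memory class just defined with a tiny capacity so the effect is easy to see: once the deque is full, adding a new experience silently drops the oldest one, and sample() draws distinct stored transitions.
tiny = Memory(max_size=3)
for i in range(5):
    tiny.add(('state{}'.format(i), 0, 1.0, 'state{}'.format(i + 1)))

print(len(tiny.buffer))    # 3 -- only the most recent experiences are kept
print(list(tiny.buffer))   # experiences 2, 3 and 4; 0 and 1 were pushed out
print(tiny.sample(2))      # random mini-batch of two distinct transitions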
train_episodes = 1000 # max number of episodes to learn from
max_steps = 200 # max steps in an episode
gamma = 0.99 # future reward discount
# Exploration parameters
explore_start = 1.0 # exploration probability at start
explore_stop = 0.01 # minimum exploration probability
decay_rate = 0.0001 # exponential decay rate for exploration prob
# Network parameters
hidden_size = 64 # number of units in each Q-network hidden layer
learning_rate = 0.0001 # Q-network learning rate
# Memory parameters
memory_size = 10000 # memory capacity
batch_size = 20 # experience mini-batch size
pretrain_length = batch_size # number experiences to pretrain the memory
tf.reset_default_graph()
mainQN = QNetwork(name='main', hidden_size=hidden_size, learning_rate=learning_rate)
Explanation: Exploration - Exploitation
To learn about the environment and rules of the game, the agent needs to explore by taking random actions. We'll do this by choosing a random action with some probability $\epsilon$ (epsilon). That is, with some probability $\epsilon$ the agent will make a random action and with probability $1 - \epsilon$, the agent will choose an action from $Q(s,a)$. This is called an $\epsilon$-greedy policy.
At first, the agent needs to do a lot of exploring. Later when it has learned more, the agent can favor choosing actions based on what it has learned. This is called exploitation. We'll set it up so the agent is more likely to explore early in training, then more likely to exploit later in training.
Q-Learning training algorithm
Putting all this together, we can list out the algorithm we'll use to train the network. We'll train the network in episodes. One episode is one simulation of the game. For this game, the goal is to keep the pole upright for 195 frames. So we can start a new episode once meeting that goal. The game ends if the pole tilts over too far, or if the cart moves too far to the left or right. When a game ends, we'll start a new episode. Now, to train the agent:
Initialize the memory $D$
Initialize the action-value network $Q$ with random weights
For episode = 1, $M$ do
For $t$, $T$ do
With probability $\epsilon$ select a random action $a_t$, otherwise select $a_t = \mathrm{argmax}_a Q(s,a)$
Execute action $a_t$ in simulator and observe reward $r_{t+1}$ and new state $s_{t+1}$
Store transition $<s_t, a_t, r_{t+1}, s_{t+1}>$ in memory $D$
Sample random mini-batch from $D$: $<s_j, a_j, r_j, s'_j>$
Set $\hat{Q}_j = r_j$ if the episode ends at $j+1$, otherwise set $\hat{Q}_j = r_j + \gamma \max_{a'}{Q(s'_j, a')}$
Make a gradient descent step with loss $(\hat{Q}_j - Q(s_j, a_j))^2$
endfor
endfor
Hyperparameters
One of the more difficult aspects of reinforcement learning is the large number of hyperparameters. Not only are we tuning the network, but we're tuning the simulation.
End of explanation
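To get a feel for the exploration schedule before training starts, the short sketch below evaluates the same exponential decay used in the training loop further down, with the explore_start, explore_stop and decay_rate values from the cell above.
import numpy as np

for step in [0, 1000, 5000, 10000, 50000]:
    explore_p = explore_stop + (explore_start - explore_stop) * np.exp(-decay_rate * step)
    print('step {:>6}: explore probability = {:.3f}'.format(step, explore_p))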
# Initialize the simulation
env.reset()
# Take one random step to get the pole and cart moving
state, reward, done, _ = env.step(env.action_space.sample())
memory = Memory(max_size=memory_size)
# Make a bunch of random actions and store the experiences
for ii in range(pretrain_length):
# Uncomment the line below to watch the simulation
# env.render()
# Make a random action
action = env.action_space.sample()
next_state, reward, done, _ = env.step(action)
if done:
# The simulation fails so no next state
next_state = np.zeros(state.shape)
# Add experience to memory
memory.add((state, action, reward, next_state))
# Start new episode
env.reset()
# Take one random step to get the pole and cart moving
state, reward, done, _ = env.step(env.action_space.sample())
else:
# Add experience to memory
memory.add((state, action, reward, next_state))
state = next_state
Explanation: Populate the experience memory
Here I'm re-initializing the simulation and pre-populating the memory. The agent is taking random actions and storing the transitions in memory. This will help the agent with exploring the game.
End of explanation
# Now train with experiences
saver = tf.train.Saver()
rewards_list = []
with tf.Session() as sess:
# Initialize variables
sess.run(tf.global_variables_initializer())
step = 0
for ep in range(1, train_episodes):
total_reward = 0
t = 0
while t < max_steps:
step += 1
# Uncomment this next line to watch the training
# env.render()
# Explore or Exploit
explore_p = explore_stop + (explore_start - explore_stop)*np.exp(-decay_rate*step)
if explore_p > np.random.rand():
# Make a random action
action = env.action_space.sample()
else:
# Get action from Q-network
feed = {mainQN.inputs_: state.reshape((1, *state.shape))}
Qs = sess.run(mainQN.output, feed_dict=feed)
action = np.argmax(Qs)
# Take action, get new state and reward
next_state, reward, done, _ = env.step(action)
total_reward += reward
if done:
# the episode ends so no next state
next_state = np.zeros(state.shape)
t = max_steps
print('Episode: {}'.format(ep),
'Total reward: {}'.format(total_reward),
'Training loss: {:.4f}'.format(loss),
'Explore P: {:.4f}'.format(explore_p))
rewards_list.append((ep, total_reward))
# Add experience to memory
memory.add((state, action, reward, next_state))
# Start new episode
env.reset()
# Take one random step to get the pole and cart moving
state, reward, done, _ = env.step(env.action_space.sample())
else:
# Add experience to memory
memory.add((state, action, reward, next_state))
state = next_state
t += 1
# Sample mini-batch from memory
batch = memory.sample(batch_size)
states = np.array([each[0] for each in batch])
actions = np.array([each[1] for each in batch])
rewards = np.array([each[2] for each in batch])
next_states = np.array([each[3] for each in batch])
# Train network
target_Qs = sess.run(mainQN.output, feed_dict={mainQN.inputs_: next_states})
# Set target_Qs to 0 for states where episode ends
episode_ends = (next_states == np.zeros(states[0].shape)).all(axis=1)
target_Qs[episode_ends] = (0, 0)
targets = rewards + gamma * np.max(target_Qs, axis=1)
loss, _ = sess.run([mainQN.loss, mainQN.opt],
feed_dict={mainQN.inputs_: states,
mainQN.targetQs_: targets,
mainQN.actions_: actions})
saver.save(sess, "checkpoints/cartpole.ckpt")
Explanation: Training
Below we'll train our agent. If you want to watch it train, uncomment the env.render() line. This is slow because it's rendering the frames slower than the network can train. But, it's cool to watch the agent get better at the game.
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
def running_mean(x, N):
cumsum = np.cumsum(np.insert(x, 0, 0))
return (cumsum[N:] - cumsum[:-N]) / N
eps, rews = np.array(rewards_list).T
smoothed_rews = running_mean(rews, 10)
plt.plot(eps[-len(smoothed_rews):], smoothed_rews)
plt.plot(eps, rews, color='grey', alpha=0.3)
plt.xlabel('Episode')
plt.ylabel('Total Reward')
Explanation: Visualizing training
Below I'll plot the total rewards for each episode. I'm plotting the rolling average too, in blue.
End of explanation
test_episodes = 10
test_max_steps = 400
env.reset()
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
for ep in range(1, test_episodes):
t = 0
while t < test_max_steps:
env.render()
# Get action from Q-network
feed = {mainQN.inputs_: state.reshape((1, *state.shape))}
Qs = sess.run(mainQN.output, feed_dict=feed)
action = np.argmax(Qs)
# Take action, get new state and reward
next_state, reward, done, _ = env.step(action)
if done:
t = test_max_steps
env.reset()
# Take one random step to get the pole and cart moving
state, reward, done, _ = env.step(env.action_space.sample())
else:
state = next_state
t += 1
env.close()
Explanation: Testing
Let's checkout how our trained agent plays the game.
End of explanation |
369 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Pandas
Pandas Objects
In the previous chapter we discussed the very basics of Python and NumPy. Here we go one step further and introduce the Pandas package and its data structures. At the very basic level, Pandas can be thought of as enhanced versions of NumPy arrays in which rows and columns come with labels (rather than simple integer indices). Pandas obviously provides more features than this, but it is important to first get an understanding of Pandas' data structure before delving into more advanced topics.
Step1: Pandas Series
While a NumPy array has an implicitly defined index, Pandas allows for an explicitly defined index. That could mean strings or nonsequential indices. We first look at Pandas Series objects. These are one-dimensional arrays of indexed data. We can use lists or arrays to create it.
Step2: NumPy array operations (as discussed in the previous chapter) such as filtering with a boolean array, scalar multiplication, or applying math functions, will preserve the index-value link.
Pandas' Data Frames
Constructing Data Frames
While Pandas' Series are comparable to a one-dimensional array with flexible indices, DataFrames are comparable to two-dimensional arrays with both flexible row and column names.
Step3: Ultimately, there are many more ways to create a DataFrame but we'll leave it with the above two examples. Reason being, as we will see later on, that data for a DataFrame is usually imported from a txt, csv or xls file. This process is fairly simple and will be discussed later on. Regarding DataFrames, check the excellent overview by Chris Moffit or see the help page pd.DataFrame? for more examples on how to create a DataFrame.
As we would expect, each DataFrame has some callable attributes.
Step4: Working with data and adding new data columns is straight forward
Step5: Indexing and Selection
In the previous chapter we discussed how to access elements of NumPy arrays. In general, the same patterns are applicable to Pandas objects. However, there are a few quirks that we'll discuss to prevent confusion.
Step6: IMPORTANT
Step7: Because this was recognized as a source of confusion, Pandas introduced the loc and iloc attribute. loc allows indexing and slicing with the explicit index, iloc allows slicing that always references the implicit index.
Step8: Indexing for DataFrame works similar to what we discussed thus far.
Step9: The index will always be shown. Thus if we reset the index such that the company names represent the index, then we could simply use comps['PE'].
Step10: We can also use dot notation to access a column.
Step11: One slicing option you might come across when studying python scripts is the .ix indexer. It is a hybrid of the two functions .loc and .iloc. However, the .ix indexer is deprecated in favor of the more strict .iloc and .loc and thus we won't discuss it here.
Index Alignment
Pandas will align indices in the process of performing operations for both Series as well as DataFrames. This proves to be very convenient when dealing with incomplete data.
Step12: If we wish to fill the blanks with another value than NaN, we can do so by using the add() method and specify the fill_value.
Step13: Handling Missing Data
So far we have always dealt with complete data sets. Real world data, however, is hardly ever clean and homogeneous. Often data sets will have some amount of missing values. Further complicating the issue is the fact that different conventions exists to indicate missing data (NaN, None, NA, null, -9999).
Developers of NumPy and Pandas chose to use NaN (acronym for Not a Number) as missing data representation. Operations on np.arrays, Series or DataFrames containing NaN values are possible. However, one needs to keep in mind that any arithmetic operation with NaN will be another NaN.
Step14: NumPy provides special functions which can deal with NaN values.
Step15: While NumPy only accepts np.nan, Pandas is also able do handle None as input. Yet internally, Pandas will convert None values to NaN.
Step16: But how do we deal with NaN values? Python provides some specific methods
Step17: .dropna() can not drop single values, but it can drop full columns or rows. For this, the method takes the parameter axis='rows' or axis='columns'.
Step18: Beyond the axis you can specify the parameter how and thresh.
For parameter how, default is set to how='any' which means that any row or column (depending on your selection) with NaN values will be dropped. Alternatively you could set it to how='all' to remove only those rows/columns where all entries are of sort NaN.
For parameter thresh, default is set to thresh=None. For example setting a thresh=3 will drop rows/columns with less than 3 non-null values.
Step19: Sometimes it is also adequate to replace NaN cells with a specific value. For this method .fillna() is available.
Step20: Combining Datasets
Concat and Append
Concatenating, appending, merging or joining data sets is a deep (and some say 'dull') topic. Anyone who had the pleasure of learning relational algebra can tell. Pandas has four functions that will do the job for you and of which you should have heard
Step21: Notice that pd.Timestamp('10.03.2020') is interpreted as 3rd of October 2018 while pd.Timestamp('31.03.2020') as 31st of March. Here it is important to realize that the default format is the American way of writing a date
Step22: The pd.Period function has specific properties such as start_time and end_time.
Step23: Date Ranges
The command pd.date_range generates an index with indicated length according to a particular frequency.
Step24: You could also pass just a start or end date combined with a number of periods to generate.
Step25: As became obvious from above examples, pd.date_range by default generates daily timestamps. If you wish another frequency - such as monthly, annual, etc. - you add the freq= argument to your command.
Step26: Here's an overview of Pandas frequency codes
Step27: Indexing, Selection, Subsetting of Time Series
Both Timestamp and Period can be used as index. Lists of Timestamp and Period are automatically coerced to DatetimeIndex and PeriodIndex, respectively. This is convenient as it allows us to index and slice the data object as if it were a regular Series or DataFrame.
Step28: To select a subset, we can apply the same logic as shown before.
Step29: Similarly, you could choose a full year or a specific month with ts['2021'] or ts['2021-05'].
Step30: Importing Data
File Path
For most of this course we will use data stored in csv format which we will have to import. For this we can make use of Panda's read_csv() function. If you check the function's help page, you might be overwhelmed by all the possible parameter. Below follows an example which loads Swiss stock market data for the four companies Schindler, ABB, Georg Fischer, and Sulzer from a csv. To load it we necessarily need to specify the file name and its path.
Pandas will start looking from where your current python file or notebook is located. Python's working directory is set to where your current .py or .ipynb file is stored. If you have stored your file in a subfolder, one can simply preced the file name with the path
Step31: A few notes
Step32: Importing from Web Link
When data is updated on a regular basis, it is certainly more convenient to directly load a related file from an existing (static) url than to manually download it time and time again before running a script. Since Pandas version 0.19.2, pd.read_csv() is able to handle that. A simple example is provided below, where the csv file with historical closing prices of the 30 day volatility index on the SMI (VSMI) is downloaded.
Step33: For further details on how to load/import data to Python check Pandas' tutorial on the topic.
Example
Step34: Let us say we want to have a statistical summary of the closing prices per share. We can use the .groupby() method to first split the values, select the closing prices, and then apply the .describe() method to have the desired summary.
Step35: Assume we wish to investigate ABB's stock a bit further. For this we need to slice the multiindex objcet df. Slicing multiindex objects is a bit trickier than doing the same on a simple data frame with only a single index. Below is an example how you slice the DataFrame based on a date range (here we take all dates) and on the ticker 'ABBN'. For further examples on how to slice multiindex objects, see here.
Step36: Having the data set up helps run further analysis. Note that plots will be discussed in a separate chapter and thus we will not get into it here.
Step37: Let's check if the returns follow a normal distribution. We have many approaches to check this, both with plots and statistics. Below are some options presented. We will make use of the stats sublibrary of the scipy package.
Step38: Often the Shapiro Wilk test is used to check if values follow a normal distribution. The function sp.stats.shapiro() tests the null hypothesis that the data was drawn from a normal distribution. If the p-value is very small, it means it is unlikely that the data came from a normal distribution.
Step39: Or alternatively we could combine the histogram with a kernel density estimation (KDE). | Python Code:
# We start by importing the NumPy, Pandas packages
import numpy as np
import pandas as pd
Explanation: Pandas
Pandas Objects
In the previous chapter we discussed the very basics of Python and NumPy. Here we go one step further and introduce the Pandas package and its data structures. At the very basic level, Pandas can be thought of as enhanced versions of NumPy arrays in which rows and columns come with labels (rather than simple integer indices). Pandas obviously provides more features than this, but it is important to first get an understanding of Pandas' data structure before delving into more advanced topics.
End of explanation
data = pd.Series([0.25, 0.5, 0.25, 1])
data
data = pd.Series([0.25, 0.5, 0.25, 1], index=['w', 'x', 'y', 'z'])
data
# Item access works as we would expect
data['x']
# Another example
data = pd.Series([0.25, 0.5, 0.75, 1],
index=[2, 7, 4, 1])
data
data[2]
# Pandas series from NumPy array
vec = np.linspace(start=0.2, stop=1, num=5)
pd.Series(vec)
type(pd.Series(vec))
Explanation: Pandas Series
While a NumPy array has an implicitly defined index, Pandas allows for an explicitly defined index. That could mean strings or nonsequential indices. We first look at Pandas Series objects. These are one-dimensional arrays of indexed data. We can use lists or arrays to create it.
End of explanation
data = {'Company': ['Schindler', 'ABB', 'GF', 'Sulzer'],
'yrEndClose': [179.6, 21.48, 834, 105],
'eps': [7.14, 0.87, 53, 1.73]}
comps = pd.DataFrame(data)
comps
data = [{'Company': 'Schindler', 'yrEndClose': 179.6, 'eps': 7.14},
{'Company': 'ABB', 'yrEndClose': 21.48, 'eps': 0.87},
{'Company': 'GF', 'yrEndClose': 834, 'eps': 53},
{'Company': 'Sulzer', 'yrEndClose': 105, 'eps': 1.73}]
pd.DataFrame(data)
Explanation: NumPy array operations (as discussed in the previous chapter) such as filtering with a boolean array, scalar multiplication, or applying math functions, will preserve the index-value link.
Pandas' Data Frames
Constructing Data Frames
While Pandas' Series are comparable to a one-dimensional array with flexible indices, DataFrames are comparable to two-dimensional arrays with both flexible row and column names.
End of explanation
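The point above about NumPy-style operations preserving the index-value link is easy to verify on a small Series (this uses the pandas and NumPy imports from the first cell): scalar multiplication, boolean filtering and ufuncs all return a Series carrying the original labels.
s = pd.Series([0.25, 0.5, 0.75, 1.0], index=['w', 'x', 'y', 'z'])
print(s * 2)        # scalar multiplication keeps the labels
print(s[s > 0.4])   # boolean filtering keeps the labels of the surviving rows
print(np.exp(s))    # NumPy ufuncs return a Series with the same index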
print(comps.index)
print(comps.size, comps.shape, comps.ndim)
Explanation: Ultimately, there are many more ways to create a DataFrame but we'll leave it with the above two examples. Reason being, as we will see later on, that data for a DataFrame is usually imported from a txt, csv or xls file. This process is fairly simple and will be discussed later on. Regarding DataFrames, check the excellent overview by Chris Moffit or see the help page pd.DataFrame? for more examples on how to create a DataFrame.
As we would expect, each DataFrame has some callable attributes.
End of explanation
comps['PE'] = comps['yrEndClose'] / comps['eps']
comps['Year'] = 2020
comps
# Reorder columns
comps = comps[['Company', 'Year', 'PE', 'eps', 'yrEndClose']]
print(comps)
# Renaming columns
comps.columns = ['Company', 'Year', 'PE', 'EPS', 'Price']
comps.columns.values
# Or renaming just one column
colNms = comps.columns.values
colNms[4] = 'yrEndClose'
comps.columns = colNms
comps.columns.values
Explanation: Working with data and adding new data columns is straightforward:
End of explanation
data = pd.Series([0, 1, 2], index=['a', 'b', 'c'])
# Adding a float
data['d'] = 2.5
data
# Slicing by explicit index
data['a':'c']
# Slicing by implicit index
data[0:2]
Explanation: Indexing and Selection
In the previous chapter we discussed how to access elements of NumPy arrays. In general, the same patterns are applicable to Pandas objects. However, there are a few quirks that we'll discuss to prevent confusion.
End of explanation
data = pd.Series(['a', 'b', 'c'], index=[1, 3, 5])
data
# Explicit index when indexing
data[1]
# Implicit index when slicing
data[1:3]
Explanation: IMPORTANT:
Notice that when using the explicit index, the final index is included. On the other hand, when you use the implicit index (i.e. data[0:2]), the final index is excluded.
Let's consider an example where the Series object has an explicit integer index.
End of explanation
data.loc[1]
data.loc[1:3]
data.iloc[1]
data.iloc[1:3]
Explanation: Because this was recognized as a source of confusion, Pandas introduced the loc and iloc attribute. loc allows indexing and slicing with the explicit index, iloc allows slicing that always references the implicit index.
End of explanation
comps[['Company', 'PE']]
Explanation: Indexing for DataFrame works similar to what we discussed thus far.
End of explanation
compsInd = comps.set_index('Company')
print(compsInd)
compsInd['PE']
Explanation: The index will always be shown. Thus if we reset the index such that the company names represent the index, then we could simply use comps['PE'].
End of explanation
comps.EPS[:2]
Explanation: We can also use dot notation to access a column.
End of explanation
np.random.seed(1234)
A = pd.DataFrame(np.random.randint(low=10, high=99, size=(2,2)),
columns=['A', 'C'])
A
B = pd.DataFrame(np.random.randint(low=0, high=10, size=(3,3)),
columns=list('BAC'))
B
A + B
Explanation: One slicing option you might come across when studying python scripts is the .ix indexer. It is a hybrid of the two functions .loc and .iloc. However, the .ix indexer is deprecated in favor of the more strict .iloc and .loc and thus we won't discuss it here.
Index Alignment
Pandas will align indices in the process of performing operations for both Series as well as DataFrames. This proves to be very convenient when dealing with incomplete data.
End of explanation
A.add(B, fill_value=0)
Explanation: If we wish to fill the blanks with a value other than NaN, we can do so by using the add() method and specifying the fill_value.
End of explanation
val = np.array([0, np.nan, 1, 2])
val + 1
val * 0
Explanation: Handling Missing Data
So far we have always dealt with complete data sets. Real world data, however, is hardly ever clean and homogeneous. Often data sets will have some amount of missing values. Further complicating the issue is the fact that different conventions exist to indicate missing data (NaN, None, NA, null, -9999).
Developers of NumPy and Pandas chose to use NaN (acronym for Not a Number) as missing data representation. Operations on np.arrays, Series or DataFrames containing NaN values are possible. However, one needs to keep in mind that any arithmetic operation with NaN will be another NaN.
End of explanation
print(val.sum(), val.min(), val.max())
print(np.nansum(val), np.nanmin(val), np.nanmax(val))
Explanation: NumPy provides special functions which can deal with NaN values.
End of explanation
seq = pd.Series([1, np.nan, 2, None])
seq
Explanation: While NumPy only accepts np.nan, Pandas is also able to handle None as input. Yet internally, Pandas will convert None values to NaN.
End of explanation
# Sample Series
ser = pd.Series([1, None, 2., np.nan])
# Boolean mask
print(ser.isnull())
# Sliced Series
print(ser[ser.notnull()])
# Create a DataFrame
df = pd.DataFrame(10 + np.arange(9).reshape(3, 3),
columns= ['A', 'B', 'C'])
df.iloc[0, 1] = np.nan; df.iloc[2, 0] = np.nan
df
Explanation: But how do we deal with NaN values? Pandas provides some specific methods:
| Method | Description |
|:------------:|------------------------------------------------------------------|
| .isnull() | Generates boolean mask indicating missing values |
| .notnull() | Opposite of .isnull() |
| .dropna() | Returns a filtered version of the data |
| .fillna() | Returns a copy of the data with missing values filled or imputed |
End of explanation
df.dropna() # Similar to df.dropna(axis=0) or df.dropna(axis='rows')
df.dropna(axis='columns') # similar to df.dropna(axis=1)
Explanation: .dropna() cannot drop single values, but it can drop full columns or rows. For this, the method takes the parameter axis='rows' or axis='columns'.
End of explanation
df['D'] = np.nan
df
df.dropna(axis='columns', how='all')
df.dropna(axis='rows', thresh=3)
Explanation: Beyond the axis, you can specify the parameters how and thresh.
For the parameter how, the default is how='any', which means that any row or column (depending on your selection) containing NaN values will be dropped. Alternatively, you could set it to how='all' to remove only those rows/columns where all entries are NaN.
For the parameter thresh, the default is thresh=None. For example, setting thresh=3 will drop rows/columns with fewer than 3 non-null values.
End of explanation
df.fillna(value=-9999)
df
# Forward-fill to propagate previous value forward
df.fillna(axis='rows', method='ffill')
# Backward-fill to propagate the next value backward
df.fillna(axis='rows', method='bfill')
Explanation: Sometimes it is also adequate to replace NaN cells with a specific value. For this, the method .fillna() is available.
End of explanation
# Simple Time Stamps
print(pd.Timestamp(day=31, year=2021, month=12))
print(pd.Timestamp(2021, 12, 31, 13, 8))
print(pd.Timestamp('10.03.2021 17:32:15'))
print(pd.Timestamp('31.03.2021 17:32:15'))
print(pd.Timestamp('2021-03-31'))
print(pd.Timestamp('2000-07'))
print(pd.to_datetime("1st of August, 1992"))
Explanation: Combining Datasets
Concat and Append
Concatenating, appending, merging or joining data sets is a deep (and some say 'dull') topic. Anyone who had the pleasure of learning relational algebra can tell. Pandas has four functions that will do the job for you and of which you should have heard:
pd.append()
pd.concat()
pd.merge()
pd.join()
From time to time one of these functions will appear in this course. However, we will not properly discuss these functions in any detail. Unfortunately, doing it would consume too much time and would be beyond the purpose of this course. Nonetheless, I recommend to spend 15min in learning the basics by reading through Pandas' intuitive introduction which can be found here. It is kept fairly brief and with all the examples and visual representations, the functions are explained in a much better way than this tutorial could ever do.
Another valuable resource is of course again Jake VanderPlas' Data Science Handbook. You might find his explanations and examples very helpful and since it's freely available on GitHub, why not give it a shot. Here's the link (Combining Datasets).
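To give at least a flavour of pd.concat and pd.merge, here is a minimal sketch on two made-up DataFrames (the column names are invented purely for illustration):
df1 = pd.DataFrame({'key': ['a', 'b'], 'x': [1, 2]})
df2 = pd.DataFrame({'key': ['b', 'c'], 'y': [3, 4]})
pd.concat([df1, df2])                      # stacks the rows, aligning columns by name (missing entries become NaN)
pd.merge(df1, df2, on='key', how='inner')  # keeps only rows whose 'key' appears in both frames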
Pandas Time Series
Timestamps and Periods
Pandas was developed with a focus on financial data. It thus does not surprise that Pandas has incorporated an easy and reliable way of handling datetime formats. Pandas handling of date/time objects improves on the existing package datetime and NumPy's numpy.datetime64 object and provides the necessary tools to efficiently handle datetimes.
The most basic kind of time series object in Pandas are pd.Series or pd.DataFrame objects indexed with timestamps.
End of explanation
# Time Periods
Q3 = pd.Period('2021-09', freq='Q')
Q3
M9 = pd.Period('2021-09', freq='M')
M9
Explanation: Notice that pd.Timestamp('10.03.2021') is interpreted as the 3rd of October 2021, while pd.Timestamp('31.03.2021') is parsed as the 31st of March 2021 (31 cannot be a month, so Pandas falls back to day-first). Here it is important to realize that the default format is the American way of writing a date: 'mm.dd.yyyy'.
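If you want to avoid this ambiguity altogether, you can request day-first parsing explicitly; a small sketch (the date is chosen arbitrarily):
print(pd.to_datetime('10.03.2021', dayfirst=True))   # 2021-03-10, i.e. 10 March
print(pd.to_datetime('10.03.2021', dayfirst=False))  # 2021-10-03, i.e. 3 October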
Besides pd.Timestamp, Pandas also has a function for periods: pd.Period. The difference is subtle: the former is a specific point in time, the latter represents a bounded interval.
End of explanation
Q3.start_time
M9.end_time
Explanation: The pd.Period function has specific properties such as start_time and end_time.
End of explanation
pd.date_range(start='20210104', end='20210108')
Explanation: Date Ranges
The command pd.date_range generates an index with indicated length according to a particular frequency.
End of explanation
pd.date_range(start='2021-12-17', periods=2)
pd.date_range(end='2021-09-30', periods=4)
Explanation: You could also pass just a start or end date combined with a number of periods to generate.
End of explanation
pd.date_range(end='2021-12-31', periods=4, freq='BQS')
Explanation: As became obvious from above examples, pd.date_range by default generates daily timestamps. If you wish another frequency - such as monthly, annual, etc. - you add the freq= argument to your command.
End of explanation
# Third Friday of each month between Jan-18 and Sep-18
pd.date_range('2021-01-01', '2021-09-01', freq='WOM-3FRI')
Explanation: Here's an overview of Pandas frequency codes:
| Code | Description || Code | Description |
|:--------:|------------------------||:---------:|------------------|
| D | Calendar day || A | Year end |
| B | Business day || AS | Year start |
| W | Weekly || BA | Business year end |
| M | Month end || BAS | Business year start |
| MS | Month start || H | Hours |
| BM | Business month end || BH | Business hours |
| BMS | Business month start || T | Minutes |
| Q | Quarter end || S | Seconds |
| QS | Quarter start || L | Milliseconds |
| BQ | Business quarter end || U | Microseconds |
| BQS | Business quarter start || N | Nanoseconds |
Beyond the above frequencies, Pandas has one more useful option: "week of month". This enables you to
get dates like the third Friday of each month. Anyone dealing with options will recognize these dates as the standard dates of monthly expiry.
End of explanation
dates = pd.date_range(start='2019-01-01', end='2021-12-31', freq='D')
ts = pd.Series(np.random.randn(len(dates)), index=dates)
print(ts.head(), '\n')
print(ts.tail())
Explanation: Indexing, Selection, Subsetting of Time Series
Both Timestamp and Period can be used as index. Lists of Timestamp and Period are automatically coerced to DatetimeIndex and PeriodIndex, respectively. This is convenient as it allows us to index and slice the data object as if it were a regular Series or DataFrame.
End of explanation
# Fancy indexing
rng = pd.date_range(start='2021-02-28', end='2021-03-01')
ts[rng]
# Indexing by string
print(ts['20211231'])
print(ts['2021-06-30'])
Explanation: To select a subset, we can apply the same logic as shown before.
End of explanation
# Slicing
ts['2021-12-25':'2021-12-30']
Explanation: Similarly, you could choose a full year or a specific month with ts['2021'] or ts['2021-05'].
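For instance, with the ts series defined above:
ts['2021'].head()     # all observations from the year 2021
ts['2021-05'].head()  # all observations from May 2021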
End of explanation
# Print working directory (uncomment to run)
#!cd
pd.read_csv('Data/ShareData.csv', sep=',').head(3)
fl = pd.read_csv('Data/ShareData.csv', sep=',')
fl.dtypes
Explanation: Importing Data
File Path
For most of this course we will use data stored in csv format, which we will have to import. For this we can make use of Pandas' read_csv() function. If you check the function's help page, you might be overwhelmed by all the possible parameters. Below follows an example which loads Swiss stock market data for the four companies Schindler, ABB, Georg Fischer, and Sulzer from a csv. To load it we need to specify the file name and its path.
Pandas will start looking from where your current Python file or notebook is located, i.e. Python's working directory is set to where your current .py or .ipynb file is stored. If you have stored your file in a subfolder, you can simply precede the file name with the path: pd.read_csv('dataSubFolder/anotherSubFolder/data.csv'). Given your file is located in another folder, you could either use an explicit path as in pd.read_csv('C:/Users/Username/Path/To/Your/Folder/data.csv') or move from your current directory to where your data is located with '..'. For example pd.read_csv('../../../dataFolder/data.csv') will go 3 levels up and then into dataFolder. If you wish to check the path of your current working directory, use !cd (Windows) or !pwd (Mac) to find out.
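As an alternative to the shell commands, a small sketch using Python's standard library (the folder and file names are just placeholders):
import os
print(os.getcwd())                            # shows the current working directory
print(os.path.join('Data', 'ShareData.csv'))  # builds a relative path in an OS-independent way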
End of explanation
df = pd.read_csv('Data/ShareData.csv', sep=',',
parse_dates=['Date'], dayfirst=True,
index_col=['Date', 'Ticker'], thousands="'")
# Print first 3 data rows
df.head(3)
df.dtypes
Explanation: A few notes:
* CSV stands for comma-separated values. The majority of csv files indeed use commas to separate the values. Sometimes, however, other separators such as semicolons or (worse) tabs are used. If that is the case, set the separator argument accordingly, e.g. sep=';'.
* To make the function parse the 'Date' column as dates we have to add parse_dates=['Date'].
* The dates in the csv have the format 'dd.mm.yyyy'. Pandas' default is 'mm.dd.yyyy'. Thus we need to specify that the dates have days first, then months. For this we specify dayfirst=True.
* 'Date' and 'Ticker' uniquely identify each row. Therefore we wish to set these two columns as the index. This is done by adding index_col=['NameOfColumn'].
* Due to the thousands separator sign, entries are not loaded as actual numbers but strings. This can be corrected by specifying the thousands="'" parameter.
* The above import shows that Pandas has taken the file's first row as the headers. Alternatively one could set header=None or add the argument skiprows=n where n defines the number of rows (from top) that should be skipped.
End of explanation
url = 'https://www.six-group.com/exchanges/downloads/indexdata/h_vsmi_30.csv'
data = pd.read_csv(url, sep=';', parse_dates=['Date'],
dayfirst=True, index_col='Date')
data.tail()
Explanation: Importing from Web Link
When data is updated on a regular basis, it is certainly more convenient to directly load a related file from an existing (static) url than to manually download it time and time again before running a script. Since Pandas version 0.19.2, pd.read_csv() is able to handle that. A simple example is provided below, where the csv file with historical closing prices of the 30 day volatility index on the SMI (VSMI) is downloaded.
End of explanation
# Sort df for dates (ascending)
df = df.sort_index(ascending=True)
Explanation: For further details on how to load/import data to Python check Pandas' tutorial on the topic.
Example: Working with Stock Data
In what follows it is shown how DataFrames are helpful in analyzing data. For that we will make use of the previously loaded stock data. The functions run below will not be introduced individually. But based on the annotated code, the comments in class and the output, the functions should easily be understood.
End of explanation
df.groupby(['Ticker'])['Close'].describe()
# Add a column with the returns
shft = len(df.index.levels[1])
df['Return'] = np.log(df['Close'] / df['Close'].shift(shft))
# Check for NA values
df.isnull().sum()
Explanation: Let us say we want to have a statistical summary of the closing prices per share. We can use the .groupby() method to first split the values, select the closing prices, and then apply the .describe() method to have the desired summary.
End of explanation
# Assign ABB data to variable abb
idx = pd.IndexSlice
abb = df.loc[idx[:, ['ABBN']], idx[:]].copy()
# Add column indicating the quarter (but excl. year)
abb['Quarter'] = pd.PeriodIndex(abb.index.levels[0], freq='Q').strftime('Q%q')
# Add rolling 252d mean
abb['Rol252dMean'] = abb['Close'].rolling(window=252).mean()
# Add (annualized) historical rolling volatility
abb['Rol252dVol'] = abb['Return'].rolling(window=252).std() * np.sqrt(252)
# Drop Ticker Index as it is all ABB data now
abb = abb.reset_index(level=1, drop=True)
abb.tail(3)
Explanation: Assume we wish to investigate ABB's stock a bit further. For this we need to slice the MultiIndex object df. Slicing MultiIndex objects is a bit trickier than doing the same on a simple DataFrame with only a single index. Below is an example of how to slice the DataFrame based on a date range (here we take all dates) and on the ticker 'ABBN'. For further examples on how to slice MultiIndex objects, see here.
End of explanation
# Setup for plotting
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('seaborn-whitegrid')
import seaborn as sns
import scipy as sp
import statsmodels.api as sm
abb[['Close', 'Rol252dMean']].plot(figsize=(16, 10));
abb[['Close', 'Rol252dVol', 'Return']].plot(subplots=True, figsize=(16, 10));
Explanation: Having the data set up helps run further analysis. Note that plots will be discussed in a separate chapter and thus we will not get into it here.
End of explanation
# Select returns
rets = abb['Return'].dropna()
# Calc skewness (Norm.dist: 0)
print('Skewness:', sp.stats.skew(rets))
# Calc kurtosis (Norm.dist: 3); for excess kurt set 'fisher=False'
print('Kurtosis: ', sp.stats.kurtosis(rets, fisher=False))
Explanation: Let's check if the returns follow a normal distribution. We have many approaches to check this, both with plots and statistics. Some options are presented below. We will make use of the stats sublibrary of the SciPy package.
End of explanation
# Apply Shapiro-Wilk test
print('Shapiro Wilk Test:')
print('Test Statistic: ', sp.stats.shapiro(rets)[0])
print('p-Value: ', sp.stats.shapiro(rets)[1])
# Plot the log-returns with a normal distribution
plt.hist(rets, bins=50, density=True, label='frequency')
plt.xlabel('log-returns')
plt.ylabel('frequency')
x = np.linspace(np.min(rets), np.max(rets))
plt.plot(x, sp.stats.norm.pdf(x, loc=np.mean(rets), scale=np.std(rets)),
'r', lw=2.0, label='pdf')
plt.legend();
Explanation: Often the Shapiro Wilk test is used to check if values follow a normal distribution. The function sp.stats.shapiro() tests the null hypothesis that the data was drawn from a normal distribution. If the p-value is very small, it means it is unlikely that the data came from a normal distribution.
End of explanation
# KDE plot in Seaborn
sns.displot(data=rets, kind='hist', kde=True);
# qqplot
sm.qqplot(rets, line='s');
Explanation: Or alternatively we could combine the histogram with a kernel density estimation (KDE).
End of explanation |
370 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Local Search
Utility Functions
The module extractVariables implements the function $\texttt{extractVars}(e)$ that takes a Python expression $e$ as its argument and returns the set of all variables and function names occurring in $e$.
Step1: The function collect_variables(expr) takes a string expr that can be interpreted as a Python expression as input and collects all variables occurring in expr. It takes care to eliminate the function symbols from the names returned by extract_variables.
Step2: The function arb(S) takes a set S as input and returns an arbitrary element from
this set.
Step3: We need the function choice from the module random. Given a list L, random.choice(L) returns a random element from L. In order to have reproducible results, we have to set the seed for the random number generator.
Step4: Given a dictionary A, the function extend(A) returns a dictionary B such that B[key] = value and B[x] = A[x] for all x that are different from key.
Step5: The module Set implements <em style="color
Step6: The function cast_to_set(L) returns a Set object containing all elements from the iterable L.
Step7: Given a list of sets L, the function union(L) returns the set of all elements occurring in some set $S$ that is itself a member of the list L, i.e. we have
$$ \texttt{union}(L) = { x \mid \exists S \in L
Step8: We define the class Failure of exceptions so that we can distinguish Failure exceptions from other exceptions. This is done by creating a new, empty class that is derived from the class Exception.
Step9: A Constraint Problem Solver Using Local Search
The procedure solve(P, consistency) takes a constraint satisfaction problem
P and a flag consistency as input. Here P is a triple of the form
$$ \mathcal{P} = \langle \mathtt{Variables}, \mathtt{Values}, \mathtt{Constraints} \rangle $$
where
- $\mathtt{Variables}$ is a set of strings which serve as variables,
- $\mathtt{Values}$ is a set of values that can be assigned
to the variables in the set $\mathtt{Variables}$.
- $\mathtt{Constraints}$ is a set of formulas from first order logic.
Each of these formulas is called a constraint of $\mathcal{P}$.
The CSP P is solved using local search. If consistency is True, consistency checking is used as a
preprocessing step.
Step10: The function local_search takes three parameters.
Variables is the set of all variables occurring in the given CSP.
* ValuesPerVar is a dictionary. For every variable x, ValuesPerVar[x] is
the set of values that can be used to instantiate x.
* Annotated is a dictionary. For every constraint $f$, $\texttt{Annotated}[f]$ is the set
of variables occurring in $f$.
If the computation is successful, local_search returns a dictionary that encodes a solution of the
given CSP by mapping variables to values. The algorithm applied works as follows
Step11: The function numConflicts takes three arguments
Step12: Consistency Checking
The function variables_2_formulas takes the set of annotated constraints as input. It returns
a dictionary that attaches to every variable x the set of those constraints f such that x occurs in f.
Step13: The function enforce_consistency takes 4 arguments
Step14: The procedure exists_values takes five arguments
Step15: The function all_assignments returns the list of all possible assignments for the variables in the set Vars.
For every variable x, the values for x are taken from ValuesPerVar[x].
Step16: Solving the Eight-Queens-Puzzle
Step17: As the N queens problem is not very difficult, we will not use consistency checking.
Local search takes 62 milliseconds on my desktop to solve the eight queens puzzle.
Step18: The 100 queens problem can be solved in 32 seconds if we do not use consistency checking.
Step19: Solving the Zebra Puzzle
Step20: Solving the Zebra Puzzle takes about 4 seconds, provided we use consistency checking.
Step21: Solving a Sudoku Puzzle
Step22: Solving the given Sudoku puzzle takes about 2 minutes, provided we use consistency checking. For hard problems, local search is not a good idea.
Step23: Solving a Crypto-Arithmetic Puzzle
Step24: Solving the crypto-arithmetic puzzle took 160 milliseconds with consistency checking. | Python Code:
import extractVariables as ev
Explanation: Local Search
Utility Functions
The module extractVariables implements the function $\texttt{extractVars}(e)$ that takes a Python expression $e$ as its argument and returns the set of all variables and function names occurring in $e$.
End of explanation
def collect_variables(expr):
return frozenset(var for var in ev.extractVars(expr)
if var not in dir(__builtins__)
if var not in ['and', 'or', 'not']
)
Explanation: The function collect_variables(expr) takes a string expr that can be interpreted as a Python expression as input and collects all variables occurring in expr. It takes care to eliminate the function symbols from the names returned by extract_variables.
End of explanation
def arb(S):
for x in S:
return x
Explanation: The function arb(S) takes a set S as input and returns an arbitrary element from
this set.
End of explanation
import random
random.seed(42)
Explanation: We need the function choice from the module random. Given a list L, random.choice(L) returns a random element from L. In order to have reproducible results, we have to set the seed for the random number generator.
End of explanation
def extend(A, key, value):
B = A.copy()
B[key] = value
return B
Explanation: Given a dictionary A, the function extend(A) returns a dictionary B such that B[key] = value and B[x] = A[x] for all x that are different from key.
End of explanation
import sys
sys.path.append('..')
import Set
Explanation: The module Set implements <em style="color:blue;">sets</em> as
<a href="https://en.wikipedia.org/wiki/AVL_tree">AVL trees</a>.
The API provided by Set offers the following functions and methods:
- Set() creates an empty set.
- S.isEmpty() checks whether the set S is empty.
- S.member(x) checks whether x is an element of the set S.
- S.insert(x) inserts x into the set S.
This does not return a new set but rather modifies the set S.
- S.delete(x) deletes x from the set S.
This does not return a new set but rather modifies the set S.
- S.pop() returns the smallest element of the set S.
Furthermore, this element is removed from S.
- S.pop_last() returns the biggest element of the set S.
Furthermore, this element is removed from S.
- S.first() returns the smallest element of the set S.
- S.last() returns the biggest element of the set S.
Since sets are implemented as <em style="color:blue;">ordered binary trees</em>, the elements of a set need to be <em style="color:blue;">comparable</em>, i.e. if x and y are inserted into a set, then the
expression x < y must return a Boolean value and < has to define a
<em style="color:blue;">linear order</em>.
The module Set can be used to implement a priority queue that supports the removal of arbitrary elements.
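A small usage sketch, assuming the module behaves exactly as described above:
S = Set.Set()
for x in [3, 1, 2]:
    S.insert(x)
print(S.first())   # 1, the smallest element
print(S.pop())     # also 1, but pop() removes it from S as well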
End of explanation
def cast_to_Set(L):
Result = Set.Set()
for x in L:
Result.insert(x)
return Result
Explanation: The function cast_to_set(L) returns a Set object containing all elements from the iterable L.
End of explanation
def union(L):
return { x for S in L for x in S }
Explanation: Given a list of sets L, the function union(L) returns the set of all elements occurring in some set $S$ that is itself a member of the list L, i.e. we have
$$ \texttt{union}(L) = \{ x \mid \exists S \in L : x \in S \}. $$
End of explanation
class Failure(Exception):
pass
Explanation: We define the class Failure of exceptions so that we can distinguish Failure exceptions from other exceptions. This is done by creating a new, empty class that is derived from the class Exception.
End of explanation
def solve(P, consistency=True):
Variables, Values, Constraints = P
VarsInConstrs = union([ collect_variables(f) for f in Constraints ])
MisspelledVars = (VarsInConstrs - Variables) | (Variables - VarsInConstrs)
if MisspelledVars:
print("Did you misspell any of the following Variables?")
for v in MisspelledVars:
print(v)
ValuesPerVar = { x: Values for x in Variables }
Annotated = { f: collect_variables(f) for f in Constraints }
if consistency:
Connected = {}
Var2Formulas = variables_2_formulas(Annotated)
for x in Variables:
Connected[x] = union([ V for f, V in Annotated.items() if x in V ]) - { x }
try:
enforce_consistency(ValuesPerVar, Var2Formulas, Annotated, Connected)
for x, Values in ValuesPerVar.items():
print(f'{x}: {Values}')
except Failure:
return None
return local_search(Variables, ValuesPerVar, Annotated)
Explanation: A Constraint Problem Solver Using Local Search
The procedure solve(P, consistency) takes a constraint satisfaction problem
P and a flag consistency as input. Here P is a triple of the form
$$ \mathcal{P} = \langle \mathtt{Variables}, \mathtt{Values}, \mathtt{Constraints} \rangle $$
where
- $\mathtt{Variables}$ is a set of strings which serve as variables,
- $\mathtt{Values}$ is a set of values that can be assigned
to the variables in the set $\mathtt{Variables}$.
- $\mathtt{Constraints}$ is a set of formulas from first order logic.
Each of these formulas is called a constraint of $\mathcal{P}$.
The CSP P is solved using local search. If consistency is True, consistency checking is used as a
preprocessing step.
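To make the expected input format concrete, here is a tiny hand-made CSP (not one of the puzzles used below); note that the constraints are ordinary Python expressions written as strings:
Variables = { 'x', 'y', 'z' }
Values = { 1, 2 }
Constraints = { 'x != y', 'y != z' }
TinyCsp = (Variables, Values, Constraints)
# solve(TinyCsp) should then return an assignment such as {'x': 1, 'y': 2, 'z': 1}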
End of explanation
def local_search(Variables, ValuesPerVar, Annotated):
Variables = list(Variables) # convert to list for random.choice(Variables) to work
Assignment = { x: random.choice(list(ValuesPerVar[x])) for x in Variables }
iteration = 0
lastVar = arb(Variables)
while True:
Conflicts = [ (numConflicts(x, Assignment, Annotated), x) for x in Variables
if x != lastVar
]
maxNum, _ = Set.last(cast_to_Set(Conflicts))
if maxNum == 0 and numConflicts(lastVar, Assignment, Annotated) == 0:
print(f'Number of iterations: {iteration}')
return Assignment
if iteration % 11 == 0: # avoid infinite loop
x = random.choice(Variables)
else: # choose var with max number of conflicts
FaultyVars = [ var for (num, var) in Conflicts if num == maxNum ]
x = random.choice(FaultyVars)
if iteration % 13 == 0: # avoid infinite loop
newVal = random.choice(list(ValuesPerVar[x]))
else:
Conflicts = [ (numConflicts(x, extend(Assignment, x, val), Annotated), val)
for val in ValuesPerVar[x]
]
minNum, _ = Set.first(cast_to_Set(Conflicts))
ValuesForX = [ val for (n, val) in Conflicts if n == minNum ]
newVal = random.choice(ValuesForX)
Assignment[x] = newVal
lastVar = x
iteration += 1
Explanation: The function local_search takes three parameters.
Variables is the set of all variables occurring in the given CSP.
* ValuesPerVar is a dictionary. For every variable x, ValuesPerVar[x] is
the set of values that can be used to instantiate x.
* Annotated is a dictionary. For every constraint $f$, $\texttt{Annotated}[f]$ is the set
of variables occurring in $f$.
If the computation is successful, local_search returns a dictionary that encodes a solution of the
given CSP by mapping variables to values. The algorithm applied works as follows:
* Initialize the values of the variables in $\texttt{Variables}$ randomly.
If all $\texttt{Constraints}$ are satisfied, return the current variable binding as a solution.
* For every $x \in \texttt{Variables}$, count the number of unsatisfied constraints that involve the
variable $x$.
* Set $\texttt{maxNum}$ to be the maximum of these numbers, i.e. $\texttt{maxNum}$ is the maximal number of
unsatisfied constraints for any variable.
* Compute the list $\texttt{FaultyVars}$ of those variables that have $\texttt{maxNum}$ unsatisfied constraints.
* Randomly choose a variable $x$ from the set $\texttt{FaultyVars}$.
* Find a value $d \in \texttt{ValuesPerVar[x]}$ such that by assigning $d$ to the variable $x$, the number of
unsatisfied constraints for the variable $x$ is minimized.
If there is more than one value $d$ with this property, choose the value $d$ randomly from those values
that minimize the number of unsatisfied constraints.
* Rinse and repeat until a solution is found.
End of explanation
def numConflicts(x, Assign, Annotated):
NewAssign = Assign.copy()
return len([ (f, V) for (f, V) in Annotated.items()
if x in V and not eval(f, NewAssign)
])
Explanation: The function numConflicts takes three arguments:
- x is a variable,
- Assign is a dictionary mapping variables to values,
- Annotated is a set of pairs of the form (f, V) where f is a constraint and V is the set of variables occurring in f.
The function returns the number of constraints f such that x occurs in f but f is not satisfied.
End of explanation
def variables_2_formulas(Annotated):
Dictionary = {};
for f, Vars in Annotated.items():
for x in Vars:
if x in Dictionary:
Dictionary[x] |= { f }
else:
Dictionary[x] = { f }
return Dictionary
Explanation: Consistency Checking
The function variables_2_formulas takes the set of annotated constraints as input. It returns
a dictionary that attaches to every variable x the set of those constraints f such that x occurs in f.
End of explanation
def enforce_consistency(ValuesPerVar, Var2Formulas, Annotated, Connected):
UncheckedVars = set(Var2Formulas.keys())
while UncheckedVars:
variable = UncheckedVars.pop()
Constraints = Var2Formulas[variable]
Values = ValuesPerVar[variable]
RemovedVals = set()
for f in Constraints:
OtherVars = Annotated[f] - { variable }
for value in Values:
if not exists_values(variable, value, f, OtherVars, ValuesPerVar):
RemovedVals |= { value }
UncheckedVars |= Connected[variable]
Remaining = Values - RemovedVals
if not Remaining:
raise Failure()
ValuesPerVar[variable] = Remaining
Explanation: The function enforce_consistency takes 4 arguments:
- ValuesPerVar is a dictionary. For every variable x we have that ValuesPerVar[x] is the set of values that can be substituted for x.
- Var2Formulas is a dictionary. For every variable x we have that Var2Formulas[x] is the set of those formulas that mention the variable x.
- Annotated is a dictionary. For every constraint f, Annotated[f] is the set of variables occurring in f.
- Connected is a dictionary. For every variable x we have that Connected[x] is the set of those variables y that are directly connected with the variable x. Two variables x and y are directly connected if there is a constraint F such that both x and y occur in F. In this case, F is connecting x and y.
The function enforce_consistency shrinks the sets ValuesPerVar[x] such that the values in ValuesPerVar[x] are consistent for x for all constraints.
End of explanation
def exists_values(var, val, f, Vars, ValuesPerVar):
Assignments = all_assignments(Vars, ValuesPerVar)
return any(eval(f, extend(A, var, val)) for A in Assignments)
Explanation: The procedure exists_values takes five arguments:
- var is a variable,
- val is a value val,
- f is a constraint,
- Vars is the set Vars of those variables in f that are different from var, and
- ValuesPerVar is a dictionary. For every variable x we have that ValuesPerVar[x] is the set of those values that still may be tried for x.
The function checks whether, when var is set to val, the other variables occurring in the constraint f can be assigned values such that the constraint f is satisfied.
End of explanation
def all_assignments(Variables, ValuesPerVar):
Variables = set(Variables) # turn frozenset into a set
if not Variables:
return [ {} ] # list containing empty assignment
var = Variables.pop()
Values = ValuesPerVar[var]
Assignments = all_assignments(Variables, ValuesPerVar)
return [ extend(A, var, val) for A in Assignments
for val in ValuesPerVar[var]
]
Explanation: The function all_assignments returns the list of all possible assignments for the variables in the set Vars.
For every variable x, the values for x are taken from ValuesPerVar[x].
End of explanation
%%capture
%run N-Queens-Problem-CSP.ipynb
P = create_csp(8)
Explanation: Solving the Eight-Queens-Puzzle
End of explanation
%%time
Solution = solve(P, False)
print(f'Solution = {Solution}')
show_solution(Solution)
Explanation: As the N queens problem is not very difficult, we will not use consistency checking.
Local search takes 62 milliseconds on my desktop to solve the eight queens puzzle.
End of explanation
P = create_csp(100)
%%time
Solution = solve(P, False)
Explanation: The 100 queens problem can be solved in 32 seconds if we do not use consistency checking.
End of explanation
%run Zebra.ipynb
zebra = zebra_csp()
%%time
Solution = solve(zebra, True)
Explanation: Solving the Zebra Puzzle
End of explanation
show_solution(Solution)
Explanation: Solving the Zebra Puzzle takes about 4 seconds, provided we use consistency checking.
End of explanation
%run Sudoku.ipynb
csp = sudoku_csp(Sudoku)
csp
Explanation: Solving a Sudoku Puzzle
End of explanation
%%time
Solution = solve(csp)
show_solution(Solution)
Explanation: Solving the given Sudoku puzzle takes about 2 minutes, provided we use consistency checking. For hard problems, local search is not a good idea.
End of explanation
%run Crypto-Arithmetic.ipynb
csp = crypto_csp()
Explanation: Solving a Crypto-Arithmetic Puzzle
End of explanation
%%time
Solution = solve(csp, True)
show_solution(Solution)
Explanation: Solving the crypto-arithmetic puzzle took 160 milliseconds with consistency checking.
End of explanation |
371 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Measuring monotonic relationships
By Evgenia "Jenny" Nitishinskaya and Delaney Granizo-Mackenzie with example algorithms by David Edwards
Reference
Step1: Spearman Rank Correlation
Intuition
The intuition is now that instead of looking at the relationship between the two variables, we look at the relationship between the ranks. This is robust to outliers and the scale of the data.
Definition
The argument method='average' indicates that when we have a tie, we average the ranks that the numbers would occupy. For example, the two 5's above, which would take up ranks 1 and 2, each get assigned a rank of $1.5$.
To compute the Spearman rank correlation for two data sets $X$ and $Y$, each of size $n$, we use the formula
$$r_S = 1 - \frac{6 \sum_{i=1}^n d_i^2}{n(n^2 - 1)}$$
where $d_i$ is the difference between the ranks of the $i$th pair of observations, $X_i - Y_i$.
The result will always be between $-1$ and $1$. A positive value indicates a positive relationship between the variables, while a negative value indicates an inverse relationship. A value of 0 implies the absence of any monotonic relationship. This does not mean that there is no relationship; for instance, if $Y$ is equal to $X$ with a delay of 2, they are related simply and precisely, but their $r_S$ can be close to zero
Step2: Let's take a look at the distribution of measured correlation coefficients and compare the spearman with the regular metric.
Step3: Now let's see how the Spearman rank and Regular coefficients cope when we add more noise to the situation.
Step4: We can see that the Spearman rank correlation copes with the non-linear relationship much better at most levels of noise. Interestingly, at very high levels, it seems to do worse than regular correlation.
Delay in correlation
Or you might have the case that one process affects another, but only after a time lag. Now let's see what happens if we add the delay.
Step5: Sure enough, the relationship is not detected. It is important when using both regular and spearman correlation to check for lagged relationships by offsetting your data and testing for different offset values.
Built-In Function
We can also use the spearmanr function in the scipy.stats library
Step6: We now have ourselves an $r_S$, but how do we interpret it? It's positive, so we know that the variables are not anticorrelated. It's not very large, so we know they aren't perfectly positively correlated, but it's hard to say from a glance just how significant the correlation is. Luckily, spearmanr also computes the p-value for this coefficient and sample size for us. We can see that the p-value here is above 0.05; therefore, we cannot claim that $X$ and $Y$ are correlated.
Real World Example
Step7: It looks as though the expense ratio and Sharpe ratio may in fact be anticorrelated. But, we don't consider any of this meaningful as the p-value is above 0.05.
Real World Use Case | Python Code:
import numpy as np
import scipy.stats as stats
import matplotlib.pyplot as plt
import math
# Example of ranking data
l = [10, 9, 5, 7, 5]
print 'Raw data: ', l
print 'Ranking: ', list(stats.rankdata(l, method='average'))
Explanation: Measuring monotonic relationships
By Evgenia "Jenny" Nitishinskaya and Delaney Granizo-Mackenzie with example algorithms by David Edwards
Reference: DeFusco, Richard A. "Tests Concerning Correlation: The Spearman Rank Correlation Coefficient." Quantitative Investment Analysis. Hoboken, NJ: Wiley, 2007
Part of the Quantopian Lecture Series:
www.quantopian.com/lectures
github.com/quantopian/research_public
Notebook released under the Creative Commons Attribution 4.0 License. Please do not remove this attribution.
The Spearman Rank Correlation Coefficient allows us to determine whether or not two data series move together; that is, when one increases (decreases) the other also increases (decreases). This is more general than a linear relationship; for instance, $y = e^x$ is a monotonic function, but not a linear one. Therefore, in computing it we compare not the raw data but the ranks of the data.
This is useful when your data sets may be in different units, and therefore not linearly related (for example, the price of a square plot of land and its side length, since the price is more likely to be linear in the area). It's also suitable for data sets which do not satisfy the assumptions that other tests require, such as the observations being normally distributed as would be necessary for a t-test.
End of explanation
## Let's see an example of this
n = 100
def compare_correlation_and_spearman_rank(n, noise):
X = np.random.poisson(size=n)
Y = np.exp(X) + noise * np.random.normal(size=n)
Xrank = stats.rankdata(X, method='average')
# Rank both series; tied values get the average of the ranks they would occupy
Yrank = stats.rankdata(Y, method='average')
diffs = Xrank - Yrank # order doesn't matter since we'll be squaring these values
r_s = 1 - 6*sum(diffs*diffs)/(n*(n**2 - 1))
c_c = np.corrcoef(X, Y)[0,1]
return r_s, c_c
experiments = 1000
spearman_dist = np.ndarray(experiments)
correlation_dist = np.ndarray(experiments)
for i in range(experiments):
r_s, c_c = compare_correlation_and_spearman_rank(n, 1.0)
spearman_dist[i] = r_s
correlation_dist[i] = c_c
print 'Spearman Rank Coefficient: ' + str(np.mean(spearman_dist))
# Compare to the regular correlation coefficient
print 'Correlation coefficient: ' + str(np.mean(correlation_dist))
Explanation: Spearman Rank Correlation
Intuition
The intuition is now that instead of looking at the relationship between the two variables, we look at the relationship between the ranks. This is robust to outliers and the scale of the data.
Definition
The argument method='average' indicates that when we have a tie, we average the ranks that the numbers would occupy. For example, the two 5's above, which would take up ranks 1 and 2, each get assigned a rank of $1.5$.
To compute the Spearman rank correlation for two data sets $X$ and $Y$, each of size $n$, we use the formula
$$r_S = 1 - \frac{6 \sum_{i=1}^n d_i^2}{n(n^2 - 1)}$$
where $d_i$ is the difference between the ranks of the $i$th pair of observations, $X_i - Y_i$.
The result will always be between $-1$ and $1$. A positive value indicates a positive relationship between the variables, while a negative value indicates an inverse relationship. A value of 0 implies the absence of any monotonic relationship. This does not mean that there is no relationship; for instance, if $Y$ is equal to $X$ with a delay of 2, they are related simply and precisely, but their $r_S$ can be close to zero:
Experiment
Let's see what happens if we draw $X$ from a poisson distribution (non-normal), and then set $Y = e^X + \epsilon$ where $\epsilon$ is scaled normal noise (as in the code above). We'll take the Spearman rank and the correlation coefficient on this data and then run the entire experiment many times. Because $e^X$ produces many values that are far away from the rest, we can think of this as modeling 'outliers' in our data. Spearman rank compresses the outliers and does better at measuring correlation. Normal correlation is confused by the outliers and on average will measure less of a relationship than is actually there.
End of explanation
plt.hist(spearman_dist, bins=50, alpha=0.5)
plt.hist(correlation_dist, bins=50, alpha=0.5)
plt.legend(['Spearman Rank', 'Regular Correlation'])
plt.xlabel('Correlation Coefficient')
Explanation: Let's take a look at the distribution of measured correlation coefficients and compare the spearman with the regular metric.
End of explanation
n = 100
noises = np.linspace(0, 3, 30)
experiments = 100
spearman = np.ndarray(len(noises))
correlation = np.ndarray(len(noises))
for i in range(len(noises)):
# Run many experiments for each noise setting
rank_coef = 0.0
corr_coef = 0.0
noise = noises[i]
for j in range(experiments):
r_s, c_c = compare_correlation_and_spearman_rank(n, noise)
rank_coef += r_s
corr_coef += c_c
spearman[i] = rank_coef/experiments
correlation[i] = corr_coef/experiments
plt.scatter(noises, spearman, color='r')
plt.scatter(noises, correlation)
plt.legend(['Spearman Rank', 'Regular Correlation'])
plt.xlabel('Amount of Noise')
plt.ylabel('Average Correlation Coefficient')
Explanation: Now let's see how the Spearman rank and Regular coefficients cope when we add more noise to the situation.
End of explanation
n = 100
X = np.random.rand(n)
Xrank = stats.rankdata(X, method='average')
# n-2 is the second to last element
Yrank = stats.rankdata([1,1] + list(X[:(n-2)]), method='average')
diffs = Xrank - Yrank # order doesn't matter since we'll be squaring these values
r_s = 1 - 6*sum(diffs*diffs)/(n*(n**2 - 1))
print r_s
Explanation: We can see that the Spearman rank correlation copes with the non-linear relationship much better at most levels of noise. Interestingly, at very high levels, it seems to do worse than regular correlation.
Delay in correlation
Or you might have the case that one process affects another, but only after a time lag. Now let's see what happens if we add the delay.
End of explanation
# Generate two random data sets
np.random.seed(161)
X = np.random.rand(10)
Y = np.random.rand(10)
r_s = stats.spearmanr(X, Y)
print 'Spearman Rank Coefficient: ', r_s[0]
print 'p-value: ', r_s[1]
Explanation: Sure enough, the relationship is not detected. It is important when using both regular and spearman correlation to check for lagged relationships by offsetting your data and testing for different offset values.
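One rough way to do that scan is to recompute the coefficient for several candidate offsets; a small sketch (the lag range is arbitrary):
for lag in range(1, 6):
    r, p = stats.spearmanr(X[lag:], X[:-lag])   # compare the series with a copy shifted by lag
    # keep the lag with the largest |r| and a sufficiently small p-value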
Built-In Function
We can also use the spearmanr function in the scipy.stats library:
End of explanation
# Copying data for DXHLX, UGPIX, ALQIX, IIRFX, TMSCX, FHKCX, MCDFX, NRICX, PINHX, ANGLX
expense = [1.35, 1.79, 1.45, 1.88, 1., 1.01, 1.19, 1.92, 0.51, 1.24]
sharpe = [0.9, 1., 1.11, 0.1, 1.01, 1.69, 1.62, 1.83, 1.94, 2.16]
plt.scatter(expense, sharpe)
plt.xlabel('Expense Ratio')
plt.ylabel('Sharpe Ratio')
r_S = stats.spearmanr(expense, sharpe)
print 'Spearman Rank Coefficient: ', r_S[0]
print 'p-value: ', r_S[1]
Explanation: We now have ourselves an $r_S$, but how do we interpret it? It's positive, so we know that the variables are not anticorrelated. It's not very large, so we know they aren't perfectly positively correlated, but it's hard to say from a glance just how significant the correlation is. Luckily, spearmanr also computes the p-value for this coefficient and sample size for us. We can see that the p-value here is above 0.05; therefore, we cannot claim that $X$ and $Y$ are correlated.
Real World Example: Mutual Fund Expense Ratio
Now that we've seen how Spearman rank correlation works, we'll quickly go through the process again with some real data. For instance, we may wonder whether the expense ratio of a mutual fund is indicative of its three-year Sharpe ratio. That is, does spending more money on administration, management, etc. lower the risk or increase the returns? Quantopian does not currently support mutual funds, so we will pull the data from Yahoo Finance.
End of explanation
symbol_list = ['A', 'AA', 'AAC', 'AAL', 'AAMC', 'AAME', 'AAN', 'AAOI', 'AAON', 'AAP', 'AAPL', 'AAT', 'AAU', 'AAV', 'AAVL', 'AAWW', 'AB', 'ABAC', 'ABAX', 'ABB', 'ABBV', 'ABC', 'ABCB', 'ABCD', 'ABCO', 'ABCW', 'ABDC', 'ABEV', 'ABG', 'ABGB']
# Get the returns over the lookback window
start = '2014-12-01'
end = '2015-01-01'
historical_returns = get_pricing(symbol_list, fields='price', start_date=start, end_date=end).pct_change()[1:]
# Compute our stock score
scores = np.mean(historical_returns)
print 'Our Scores\n'
print scores
print '\n'
start = '2015-01-01'
end = '2015-02-01'
walk_forward_returns = get_pricing(symbol_list, fields='price', start_date=start, end_date=end).pct_change()[1:]
walk_forward_returns = np.mean(walk_forward_returns)
print 'The Walk Forward Returns\n'
print walk_forward_returns
print '\n'
plt.scatter(scores, walk_forward_returns)
plt.xlabel('Scores')
plt.ylabel('Walk Forward Returns')
r_s = stats.spearmanr(scores, walk_forward_returns)
print 'Correlation Coefficient: ' + str(r_s[0])
print 'p-value: ' + str(r_s[1])
Explanation: It looks as though the expense ratio and Sharpe ratio may in fact be anticorrelated. But, we don't consider any of this meaningful as the p-value is above 0.05.
Real World Use Case: Evaluating a Ranking Model
Let's say that we have some way of ranking securities and that we'd like to test how well our ranking performs in practice. In this case our model just takes the mean daily return for the last month and ranks the stocks by that metric.
We hypothesize that this will be predictive of the mean returns over the next month. To test this we score the stocks based on a lookback window, then take the spearman rank correlation of the score and the mean returns over the walk forward month.
End of explanation |
372 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I have written a custom model where I have defined a custom optimizer. I would like to update the learning rate of the optimizer when loss on training set increases. | Problem:
import numpy as np
import pandas as pd
import torch
optim = load_data()
for param_group in optim.param_groups:
param_group['lr'] = 0.001 |
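A possible sketch of the conditional update the problem statement actually asks for; last_loss and current_loss are hypothetical values tracked by your training loop:
def maybe_decay_lr(optimizer, last_loss, current_loss, factor=0.1):
    # Shrink the learning rate only when the training loss has gone up
    if current_loss > last_loss:
        for param_group in optimizer.param_groups:
            param_group['lr'] *= factor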
373 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The following PNCollection objects will contain all the terms in the different parts of the binding energy.
Step1: Individual energy terms
In this notebook, every term will be multiplied by the following coefficient.
Step2: Note that fractions need to be entered as, e.g., frac(3,4) so that they are not converted to finite-precision decimals.
The nonspinning orbital binding energy is known through 4pN. The expressions through 3.5pN here come from Eq. (194) of Blanchet (2006).
The 4pN term from Eq. (5.2d) of Jaranowski and Schäfer is known exactly, now that the $\nu$-linear piece is given as Eq. (32) of Bini and Damour (2013a). The remaining terms are not known exactly, but Bini and Damour (2103b) have derived some terms, though there is incomplete information, which are noted as the constants in the following cell. Note that, though the notation is confusing, Bini and Damour claim they did not calculate the coefficient they call $a_6^{\text{ln 1}}$; but it seems to be given in their Eq. (64).
Step3: (Is the following true? What about this paper?)
The spin-squared terms (by which I mean both spin-spin and spin-orbit squared terms) in the energy are known only at 2pN order (from Kidder (1995) and Will and Wiseman (1996)). They are most conveniently given in Eq. (C4) of Arun et al.
Step4: The spin-orbit terms in the energy are now complete to 4.0pN (the last term is zero). These terms come from Eq. (4.6) of Bohé et al. (2012)
Step5: The tidal-coupling terms come in to the energy at relative 5pN order, and are known to 6pN order.
These terms come from Eq. (2.11) of Vines et al. (2011). Note their unusual convention for mass ratios, where $\chi_1 = m_1/m$ in their notation; in particular, $\chi$ is not a spin parameter. Also note that $\hat{\lambda} = \lambda_2 v^{10}/(m_1+m_2)^5$, and we need to add the coupling terms again with $1 \leftrightarrow 2$. Finally, note the normalization difference, where the overall factor is different by $-2$.
Step6: Collected energy terms | Python Code:
BindingEnergy_NoSpin = PNCollection()
BindingEnergy_Spin = PNCollection()
BindingEnergy_NSTides = PNCollection()
Explanation: The following PNCollection objects will contain all the terms in the different parts of the binding energy.
End of explanation
BindingEnergy_NoSpin.AddDerivedVariable('E_coeff', -(M*nu*v**2)/2)
Explanation: Individual energy terms
In this notebook, every term will be multiplied by the following coefficient.
End of explanation
# Equation numbers below refer to v1 of Bini and Damour (2013b)
a_6__c1 = 0 # not yet known
a_6__ln1 = -frac(144,5) # coefficient of nu in Eq. (64)
a_65__c1 = 0 # not yet known
a_65__ln1 = 0 # not yet known
a_7__c1 = 0 # not yet known
a_7__ln1 = 0 # not yet known
BindingEnergy_NoSpin.AddDerivedConstant('E_0', 1)
# E_1 is 0
BindingEnergy_NoSpin.AddDerivedConstant('E_2', -frac(3,4) - frac(1,12)*nu)
# E_3 is 0
BindingEnergy_NoSpin.AddDerivedConstant('E_4', -frac(27,8) + frac(19,8)*nu - frac(1,24)*nu**2)
# E_5 is 0
BindingEnergy_NoSpin.AddDerivedConstant('E_6', -frac(675,64) + (frac(34445,576) - frac(205,96)*pi**2)*nu - frac(155,96)*nu**2
- frac(35,5184)*nu**3)
# E_7 is 0
BindingEnergy_NoSpin.AddDerivedConstant('E_8',
-frac(3969,128) + (-frac(123671,5760)+frac(9037,1536)*pi**2+frac(1792,15)*ln(2)+frac(896,15)*EulerGamma)*nu
+ (-frac(498449,3456) + frac(3157,576)*pi**2)*nu**2 + frac(301,1728)*nu**3 + frac(77,31104)*nu**4)
BindingEnergy_NoSpin.AddDerivedConstant('E_lnv_8', frac(896,15)*nu)
# E_9 is 0
# Below are the incomplete terms
BindingEnergy_NoSpin.AddDerivedConstant('E_10', -frac(45927,512)
+ (-frac(228916843,115200) - frac(9976,35)*EulerGamma + frac(729,7)*ln(3)
- frac(23672,35)*ln(2) + frac(126779,512)*pi**2)*nu
+ (-frac(21337,1024)*pi**2 + 3*a_6__c1 - frac(896,5)*ln(2)
- frac(448,5)*EulerGamma + frac(189745,576) + frac(2,3)*a_6__ln1)*nu**2
+ (-frac(1353,256)*pi**2 + frac(69423,512))*nu**3
+ frac(55,512)*nu**4
+ frac(1,512)*nu**5)
BindingEnergy_NoSpin.AddDerivedConstant('E_lnv_10', -frac(9976,35)*nu + (- frac(448,5) + 6*a_6__ln1)*nu**2)
BindingEnergy_NoSpin.AddDerivedConstant('E_11', frac(10,3)*nu * (frac(13696,525)*pi + nu*a_65__c1))
BindingEnergy_NoSpin.AddDerivedConstant('E_12',
- frac(264627,1024)+frac(2717,6718464)*nu**6+frac(5159,248832)*nu**5
+ (frac(272855,124416)*pi**2-frac(20543435,373248))*nu**4
+ (frac(1232,27)*EulerGamma+frac(6634243,110592)*pi**2-frac(11,2)*a_6__c1
-frac(75018547,51840)+frac(2464,27)*ln(2) -frac(20,9)*a_6__ln1)*nu**3
+ (frac(113594718743,14515200)+frac(18491,2304)*pi**4
+frac(246004,105)*ln(2)+frac(112772,105)*EulerGamma+frac(11,2)*a_6__c1+a_6__ln1+frac(2,3)*a_7__ln1
+ frac(11,3)*a_7__c1-frac(86017789,110592)*pi**2-frac(2673,14)*ln(3))*nu**2
+ (- frac(389727504721,43545600)+frac(74888,243)*ln(2) - frac(7128,7)*ln(3)-frac(30809603,786432)*pi**4
-frac(3934568,8505)*EulerGamma +frac(9118627045,5308416)*pi**2)*nu )
BindingEnergy_NoSpin.AddDerivedConstant('E_lnv_12',
frac(22,3)*a_7__ln1 - 2*frac(1967284,8505)*nu + 2*(frac(56386,105)+frac(11,2)*a_6__ln1)*nu**2
+ 2*(frac(616,27)-frac(11,2)*a_6__ln1)*nu**3)
Explanation: Note that fractions need to be entered as, e.g., frac(3,4) so that they are not converted to finite-precision decimals.
The nonspinning orbital binding energy is known through 4pN. The expressions through 3.5pN here come from Eq. (194) of Blanchet (2006).
The 4pN term from Eq. (5.2d) of Jaranowski and Schäfer is known exactly, now that the $\nu$-linear piece is given as Eq. (32) of Bini and Damour (2013a). The remaining terms are not known exactly, but Bini and Damour (2103b) have derived some terms, though there is incomplete information, which are noted as the constants in the following cell. Note that, though the notation is confusing, Bini and Damour claim they did not calculate the coefficient they call $a_6^{\text{ln 1}}$; but it seems to be given in their Eq. (64).
End of explanation
# Lower-order terms are 0
BindingEnergy_Spin.AddDerivedVariable('E_SQ_4',
(1+delta-2*nu)*(chi1chi1+chi2chi2)/4 - 3*(chi_a_ell**2+chi_s_ell**2)/2
- delta*( chi2chi2/2 + 3*chi_a_ell*chi_s_ell ) + nu*( chi1chi2 + 6*chi_a_ell**2 ))
Explanation: (Is the following true? What about this paper?)
The spin-squared terms (by which I mean both spin-spin and spin-orbit squared terms) in the energy are known only at 2pN order (from Kidder (1995) and Will and Wiseman (1996)). They are most conveniently given in Eq. (C4) of Arun et al.
End of explanation
# Lower-order terms are 0
BindingEnergy_Spin.AddDerivedVariable('E_SO_3', (frac(14,3)*S_ell + 2*delta*Sigma_ell)/M**2)
# E_SO_4 is 0
BindingEnergy_Spin.AddDerivedVariable('E_SO_5', ((11-61*nu/9)*S_ell + (3-10*nu/3)*delta*Sigma_ell)/M**2)
# E_SO_6 is 0
BindingEnergy_Spin.AddDerivedVariable('E_SO_7',
((frac(135,4)-frac(367,4)*nu+frac(29,12)*nu**2)*S_ell + (frac(27,4)-39*nu+frac(5,4)*nu**2)*delta*Sigma_ell)/M**2)
# E_SO_8 is 0
Explanation: The spin-orbit terms in the energy are now complete to 4.0pN (the last term is zero). These terms come from Eq. (4.6) of Bohé et al. (2012):
End of explanation
BindingEnergy_NSTides = PNCollection()
# Lower-order terms are 0
BindingEnergy_NSTides.AddDerivedConstant('E_NSTides_10', (-9*(M1/M2)*lambda2 - 9*(M2/M1)*lambda1)/M**5)
# E_NSTidal_11 is 0
BindingEnergy_NSTides.AddDerivedConstant('E_NSTides_12',
(-frac(11,2)*(M1/M2)*(3+2*M2/M+3*(M2/M)**2)*lambda2 - frac(11,2)*(M2/M1)*(3+2*M1/M+3*(M1/M)**2)*lambda1)/M**5)
Explanation: The tidal-coupling terms come in to the energy at relative 5pN order, and are known to 6pN order.
These terms come from Eq. (2.11) of Vines et al. (2011). Note their unusual convention for mass ratios, where $\chi_1 = m_1/m$ in their notation; in particular, $\chi$ is not a spin parameter. Also note that $\hat{\lambda} = \lambda_2 v^{10}/(m_1+m_2)^5$, and we need to add the coupling terms again with $1 \leftrightarrow 2$. Finally, note the normalization difference, where the overall factor is different by $-2$.
End of explanation
def BindingEnergyExpression(BindingEnergyTerms=[BindingEnergy_NoSpin, BindingEnergy_Spin], PNOrder=frac(7,2)):
# We have to play some tricks with the log terms so that `horner` works
def logterm(key,val):
if 'lnv' in val:
return logv
else:
return 1
return E_coeff*horner(sum([key*(v**n)*logterm(key,val)
for Terms in BindingEnergyTerms
for n in range(2*PNOrder+1)
for key,val in Terms.items()
if val.endswith('_{0}'.format(n))])).subs(logv, ln(v))
def BindingEnergyDerivativeExpression(BindingEnergyTerms=[BindingEnergy_NoSpin, BindingEnergy_Spin], PNOrder=frac(7,2)):
Energy = BindingEnergyExpression(BindingEnergyTerms, PNOrder)
return horner(diff(Energy.subs(E_coeff, E_coeff.substitution), v)
.simplify().subs(log(v),logv)).simplify().subs(logv, ln(v))
# display(BindingEnergyExpression(PNOrder=frac(8,2)))
# display(BindingEnergyDerivativeExpression(PNOrder=frac(8,2)))
Explanation: Collected energy terms
End of explanation |
374 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Deputado Histogramado
expressao.xyz/deputado/
How to process the sessions of the Portuguese parliament
Index
Assembling the dataset
Counting the most common words
Making histograms
Geographic representations
Simplifying the dataset and exporting it to expressao.xyz/deputado/
What happened in the more than 4000 discussion sessions of the Portuguese parliament held since 1976?
In this notebook we will try to visualize what happened in the simplest possible way - by counting words and making plots.
To obtain the texts of all the sessions we will use demo.cratica.org, where we can easily access every parliament session from 1976 to 2015. Then, with a bit of Python, pandas and matplotlib, we will analyse what went on.
To run this notebook you will need to download it and open it with Jupyter Notebooks (the Anaconda distribution makes installing all the required tools easy - https://www.continuum.io/downloads)
Step1: We have ~800 MB of data. The server where the site's backend will run has only 1 GB of memory, which creates a technical challenge. Since the site only needs to count words or expressions that occur more in certain sessions, rather than words that occur in every session ('enfermeiro' vs 'deputado'), we can remove those most common words
Step2: Counting the most frequent words that still remain
Step3: And estimating the size reduction
Step4: 536 MB. Not bad. Thanks to this reduction, a site query now runs in ~4 s instead of 30 s, because the data fits in memory. Note that the order of the words is unchanged, but some problems arise when counting certain expressions ('porto de mar' is now 'porto mar', and counting 'porto mar' also counts occurrences of '(...)Porto. Mar(...)'), because we removed the periods and collapsed consecutive spaces into a single one. Even so, the dataset is perfectly useful for identifying the sessions in which a given subject was discussed.
Let us then export the CSV file that will be used by the site | Python Code:
%matplotlib inline
import pylab
import matplotlib
import pandas
import numpy
dateparse = lambda x: pandas.datetime.strptime(x, '%Y-%m-%d')
sessoes = pandas.read_csv('sessoes_democratica_org.csv',index_col=0,parse_dates=['data'], date_parser=dateparse)
del sessoes['tamanho']
total0 = numpy.sum(sessoes['sessao'].map(len))
print(total0)
Explanation: Deputado Histogramado
expressao.xyz/deputado/
How to process the sessions of the Portuguese parliament
Contents
Gathering the dataset
Counting the most common words
Making histograms
Geographic representations
Simplifying the dataset and exporting it to expressao.xyz/deputado/
What went on in the more than 4000 debate sessions of the Portuguese parliament held since 1976?
In this notebook we will try to visualize what happened in the simplest possible way - by counting words and making plots.
To obtain the texts of all the sessions we will use demo.cratica.org, where every parliamentary session from 1976 to 2015 can easily be accessed. Then, with a little Python, pandas and matplotlib, we will analyse what took place.
To run this notebook you will need to download it and open it with Jupyter Notebooks (the Anaconda distribution makes it easy to install all the required tools - https://www.continuum.io/downloads)
Part 3 - Simplifying the dataset and exporting
Code to load the data from the previous notebook:
End of explanation
def substitui_palavras_comuns(texto):
t = texto.replace('.',' ').replace('\n',' ').replace(',',' ').replace(')',' ').replace('(',' ').replace('!',' ').replace('?',' ').replace(':',' ').replace(';',' ')
t = t.replace(' de ',' ').replace(' que ',' ').replace(' do ',' ').replace(' da ',' ').replace(' sr ',' ').replace(' não ',' ').replace(' em ',' ').replace(' se ','').replace(' para',' ').replace(' os ',' ').replace(' dos ',' ').replace(' uma ',' ').replace(' um ',' ').replace(' as ',' ').replace(' dos ',' ').replace(' no ',' ').replace(' dos ',' ').replace('presidente','').replace(' na ',' ').replace(' por ','').replace('presidente','').replace(' com ',' ').replace(' ao ',' ').replace('deputado','').replace(' das ',' ').replace(' como ','').replace('governo','').replace(' ou ','').replace(' mais ',' ').replace(' assembleia ','').replace(' ser ',' ').replace(' tem ',' ')
t = t.replace(' srs ','').replace(' pelo ','').replace(' mas ','').replace(' foi ','').replace('srs.','').replace('palavra','').replace(' que ','').replace(' sua ','').replace(' artigo ','').replace(' nos ','').replace(' eu ','').replace('muito','').replace('sobre ','').replace('também','').replace('proposta','').replace(' aos ',' ').replace(' esta ',' ').replace(' já ',' ')
t = t.replace(' vamos ',' ').replace(' nesta ',' ').replace(' lhe ',' ').replace(' meu ',' ').replace(' eu ',' ').replace(' vai ',' ')
t = t.replace(' isso ',' ').replace(' dia ',' ').replace(' discussão ',' ').replace(' dizer ',' ').replace(' seus ',' ').replace(' apenas ',' ').replace(' agora ',' ')
t = t.replace(' ª ',' ').replace(' foram ',' ').replace(' pois ',' ').replace(' nem ',' ').replace(' suas ',' ').replace(' deste ',' ').replace(' quer ',' ').replace(' desta ',' ').replace(' qual ',' ')
t = t.replace(' o ',' ').replace(' a ',' ').replace(' e ',' ').replace(' é ',' ').replace(' à ',' ').replace(' s ',' ')
t = t.replace(' - ','').replace(' º ',' ').replace(' n ',' ').replace(' . ',' ').replace(' são ',' ').replace(' está ',' ').replace(' seu ',' ').replace(' há ',' ').replace('orador',' ').replace(' este ',' ').replace(' pela ',' ').replace(' bem ',' ').replace(' nós ',' ').replace('porque','').replace('aqui','').replace(' às ',' ').replace('ainda','').replace('todos','').replace(' só ',' ').replace('fazer',' ').replace(' sem ',' ').replace(' qualquer ',' ').replace(' quanto ',' ').replace(' pode ',' ').replace(' nosso ',' ').replace(' neste ',' ').replace(' ter ',' ').replace(' mesmo ',' ').replace(' essa ',' ').replace(' até ',' ').replace(' me ',' ').replace(' nossa ',' ').replace(' entre ',' ').replace(' nas ',' ').replace(' esse ',' ').replace(' será ',' ').replace(' isto ',' ').replace(' quando ',' ').replace(' seja ',' ').replace(' assim ',' ').replace(' quanto ',' ').replace(' pode ',' ').replace(' é ',' ')
t = t.replace(' ',' ').replace(' ',' ').replace(' ',' ')
return t
sessoes['sessao'] = sessoes['sessao'].map(substitui_palavras_comuns)
Explanation: We have ~800 MB of data. The server that will host the site's backend has only 1 GB of memory, which poses a technical challenge. Since the site only needs to count words or expressions that occur more in some sessions than in others ('enfermeiro' vs 'deputado'), rather than in every session, we can strip out those most common words:
End of explanation
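A side note on the design choice: the same clean-up could also be written as a single compiled regular expression instead of a long chain of .replace() calls. The sketch below is only illustrative and uses a small subset of the stopwords removed above:
import re
# Hypothetical regex-based variant: strip punctuation, drop a stopword set in one
# pass, then collapse repeated spaces.
STOPWORDS = {'de', 'que', 'do', 'da', 'em', 'os', 'uma', 'um', 'as', 'no', 'na'}
_stop_re = re.compile(r'\b(' + '|'.join(sorted(STOPWORDS)) + r')\b', re.IGNORECASE)
def remove_palavras_comuns(texto):
    texto = re.sub(r'[.,;:!?()\n]', ' ', texto)
    texto = _stop_re.sub(' ', texto)
    return re.sub(r' {2,}', ' ', texto)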
import re
from collections import Counter
def agrupa_palavras(texto):
texto = texto.lower() # process everything in lowercase
palavras = re.split(';|,|\n| |\(|\)|\?|\!|:',texto) # split the text into words
palavras = [x.title() for x in palavras if len(x)>0] # title-case the words and drop empty strings
return palavras
def conta_palavras(sessoes):
lista = sessoes['sessao'].map(agrupa_palavras) # build a list of 'lists of words', one element per session
palavras = []
for l in lista:
palavras.extend(l) # merge all the 'lists of words' into a single list
return Counter(palavras).most_common(100) # count the most frequent words
x = conta_palavras(sessoes[1:100])
for (y,z) in x:
print(str(str(z)+' x '+y))
Explanation: Counting the most frequent words that still remain:
End of explanation
total = numpy.sum(sessoes['sessao'].map(len))
print(str(total/total0*100)+' %')
print(total)
Explanation: And estimating the size reduction:
End of explanation
sessoes.to_csv('sessoes_democratica_clipped.csv')
Explanation: 536 MB. Not bad. Thanks to this reduction, a site query now runs in ~4 s instead of 30 s, because the data fits in memory. Note that the order of the words is unchanged, but some issues arise when counting certain expressions ('porto de mar' is now 'porto mar', and counting 'porto mar' also picks up occurrences of '(...)Porto. Mar(...)', since we removed the full stops and collapsed consecutive spaces into a single one). Even so, the dataset remains perfectly usable for identifying the sessions in which a given subject was discussed.
Let us now export the CSV file that will be used on the site:
End of explanation |
375 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The TensorFlow Authors.
Step1: Custom training loop with Keras and MultiWorkerMirroredStrategy
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Before importing TensorFlow, make a few changes to the environment
Step3: Reset the 'TF_CONFIG' environment variable (you'll see more about this later).
Step4: Make sure that the current directory is on Python's path. This allows the notebook to import the files written by %%writefile later.
Step5: Now import TensorFlow.
Step6: Dataset and model definition
Next, create an mnist.py file with a simple model and dataset setup. This Python file will be used by the worker-processes in this tutorial
Step7: Multi-worker configuration
Now let's enter the world of multi-worker training. In TensorFlow, the 'TF_CONFIG' environment variable is required for training on multiple machines. Each machine may have a different role. The 'TF_CONFIG' variable used below is a JSON string that specifies the cluster configuration on each worker that is part of the cluster. This is the default method for specifying a cluster, using cluster_resolver.TFConfigClusterResolver, but there are other options available in the distribute.cluster_resolver module. Learn more about setting up the 'TF_CONFIG' variable in the Distributed training guide.
Describe your cluster
Here is an example configuration
Step8: Note that tf_config is just a local variable in Python. To use it for training configuration, serialize it as a JSON and place it in a 'TF_CONFIG' environment variable. Here is the same 'TF_CONFIG' serialized as a JSON string
Step9: There are two components of 'TF_CONFIG'
Step10: you can then access the environment variable from a subprocess
Step11: In the next section, you'll use this to pass the 'TF_CONFIG' to the worker subprocesses. You would never really launch your jobs this way, but it's sufficient for the purposes of this tutorial
Step12: Note
Step13: Auto-shard your data across workers
In multi-worker training, dataset sharding is needed to ensure convergence and reproducibility. Sharding means handing each worker a subset of the entire dataset—it helps create the experience similar to training on a single worker. In the example below, you're relying on the default autosharding policy of tf.distribute. You can also customize it by setting the tf.data.experimental.AutoShardPolicy of the tf.data.experimental.DistributeOptions. To learn more, refer to the Sharding section of the Distributed input tutorial.
Step14: Define a custom training loop and train the model
Specify an optimizer
Step17: Define a training step with tf.function
Step18: Checkpoint saving and restoring
As you write a custom training loop, you need to handle checkpoint saving manually instead of relying on a Keras callback. Note that for MultiWorkerMirroredStrategy, saving a checkpoint or a complete model requires the participation of all workers, because attempting to save only on the chief worker could lead to a deadlock. Workers also need to write to different paths to avoid overwriting each other. Here's an example of how to configure the directories
Step19: Create one tf.train.Checkpoint that tracks the model, which is managed by a tf.train.CheckpointManager, so that only the latest checkpoints are preserved
Step20: Now, when you need to restore a checkpoint, you can find the latest checkpoint saved using the convenient tf.train.latest_checkpoint function (or by calling tf.train.CheckpointManager.restore_or_initialize).
Step21: After restoring the checkpoint, you can continue with training your custom training loop.
Step24: Complete code at a glance
To sum up all the procedures discussed so far
Step25: The current directory now contains both Python files
Step26: So JSON-serialize the 'TF_CONFIG' and add it to the environment variables
Step27: Now, you can launch a worker process that will run the main.py and use the 'TF_CONFIG'
Step28: There are a few things to note about the above command
Step29: Now, check the output to the worker's log file so far
Step30: The last line of the log file should say
Step31: Now launch the second worker. This will start the training since all the workers are active (so there's no need to background this process)
Step32: If you recheck the logs written by the first worker, notice that it participated in training that model | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
import json
import os
import sys
Explanation: Custom training loop with Keras and MultiWorkerMirroredStrategy
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/distribute/multi_worker_with_ctl"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/distribute/multi_worker_with_ctl.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/distribute/multi_worker_with_ctl.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/distribute/multi_worker_with_ctl.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Overview
This tutorial demonstrates how to perform multi-worker distributed training with a Keras model and with custom training loops using the tf.distribute.Strategy API. The training loop is distributed via tf.distribute.MultiWorkerMirroredStrategy, such that a tf.keras model—designed to run on single-worker—can seamlessly work on multiple workers with minimal code changes. Custom training loops provide flexibility and a greater control on training, while also making it easier to debug the model. Learn more about writing a basic training loop, writing a training loop from scratch and custom training.
If you are looking for how to use MultiWorkerMirroredStrategy with tf.keras.Model.fit, refer to this tutorial instead.
The Distributed training in TensorFlow guide gives an overview of the distribution strategies TensorFlow supports, for those interested in a deeper understanding of the tf.distribute.Strategy APIs.
Setup
First, some necessary imports.
End of explanation
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"
Explanation: Before importing TensorFlow, make a few changes to the environment:
* Disable all GPUs. This prevents errors caused by all workers trying to use the same GPU. In a real-world application, each worker would be on a different machine.
End of explanation
os.environ.pop('TF_CONFIG', None)
Explanation: Reset the 'TF_CONFIG' environment variable (you'll see more about this later).
End of explanation
if '.' not in sys.path:
sys.path.insert(0, '.')
Explanation: Make sure that the current directory is on Python's path. This allows the notebook to import the files written by %%writefile later.
End of explanation
import tensorflow as tf
Explanation: Now import TensorFlow.
End of explanation
%%writefile mnist.py
import os
import tensorflow as tf
import numpy as np
def mnist_dataset(batch_size):
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
# The `x` arrays are in uint8 and have values in the range [0, 255].
# You need to convert them to float32 with values in the range [0, 1]
x_train = x_train / np.float32(255)
y_train = y_train.astype(np.int64)
train_dataset = tf.data.Dataset.from_tensor_slices(
(x_train, y_train)).shuffle(60000)
return train_dataset
def dataset_fn(global_batch_size, input_context):
batch_size = input_context.get_per_replica_batch_size(global_batch_size)
dataset = mnist_dataset(batch_size)
dataset = dataset.shard(input_context.num_input_pipelines,
input_context.input_pipeline_id)
dataset = dataset.batch(batch_size)
return dataset
def build_cnn_model():
return tf.keras.Sequential([
tf.keras.Input(shape=(28, 28)),
tf.keras.layers.Reshape(target_shape=(28, 28, 1)),
tf.keras.layers.Conv2D(32, 3, activation='relu'),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(10)
])
Explanation: Dataset and model definition
Next, create an mnist.py file with a simple model and dataset setup. This Python file will be used by the worker-processes in this tutorial:
End of explanation
tf_config = {
'cluster': {
'worker': ['localhost:12345', 'localhost:23456']
},
'task': {'type': 'worker', 'index': 0}
}
Explanation: Multi-worker configuration
Now let's enter the world of multi-worker training. In TensorFlow, the 'TF_CONFIG' environment variable is required for training on multiple machines. Each machine may have a different role. The 'TF_CONFIG' variable used below is a JSON string that specifies the cluster configuration on each worker that is part of the cluster. This is the default method for specifying a cluster, using cluster_resolver.TFConfigClusterResolver, but there are other options available in the distribute.cluster_resolver module. Learn more about setting up the 'TF_CONFIG' variable in the Distributed training guide.
Describe your cluster
Here is an example configuration:
End of explanation
json.dumps(tf_config)
Explanation: Note that tf_config is just a local variable in Python. To use it for training configuration, serialize it as a JSON and place it in a 'TF_CONFIG' environment variable. Here is the same 'TF_CONFIG' serialized as a JSON string:
End of explanation
os.environ['GREETINGS'] = 'Hello TensorFlow!'
Explanation: There are two components of 'TF_CONFIG': 'cluster' and 'task'.
'cluster' is the same for all workers and provides information about the training cluster, which is a dict consisting of different types of jobs such as 'worker'. In multi-worker training with MultiWorkerMirroredStrategy, there is usually one 'worker' that takes on a little more responsibility like saving checkpoints and writing summary files for TensorBoard in addition to what a regular 'worker' does. Such a worker is referred to as the 'chief' worker, and it is customary that the 'worker' with 'index' 0 is appointed as the chief worker.
'task' provides information of the current task and is different on each worker. It specifies the 'type' and 'index' of that worker.
In this example, you set the task 'type' to 'worker' and the task 'index' to 0. This machine is the first worker and will be appointed as the chief worker and do more work than the others. Note that other machines will need to have the 'TF_CONFIG' environment variable set as well, and it should have the same 'cluster' dict, but different task 'type' or task 'index' depending on what the roles of those machines are.
For illustration purposes, this tutorial shows how one may set a 'TF_CONFIG' with two workers on 'localhost'. In practice, users would create multiple workers on external IP addresses/ports, and set 'TF_CONFIG' on each worker appropriately.
This example uses two workers. The first worker's 'TF_CONFIG' is shown above. For the second worker, set tf_config['task']['index']=1.
Environment variables and subprocesses in notebooks
Subprocesses inherit environment variables from their parent. So if you set an environment variable in this Jupyter Notebook process:
End of explanation
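Purely as an illustration (this helper is not part of the TensorFlow API), the per-worker configurations described above could be generated from a single worker list:
# Hypothetical helper: build the 'TF_CONFIG' dict for each worker index.
def make_tf_config(workers, index):
    return {'cluster': {'worker': workers},
            'task': {'type': 'worker', 'index': index}}

worker_hosts = ['localhost:12345', 'localhost:23456']
per_worker_configs = [make_tf_config(worker_hosts, i) for i in range(len(worker_hosts))]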
%%bash
echo ${GREETINGS}
Explanation: you can then access the environment variable from a subprocess:
End of explanation
strategy = tf.distribute.MultiWorkerMirroredStrategy()
Explanation: In the next section, you'll use this to pass the 'TF_CONFIG' to the worker subprocesses. You would never really launch your jobs this way, but it's sufficient for the purposes of this tutorial: To demonstrate a minimal multi-worker example.
MultiWorkerMirroredStrategy
Before training the model, first create an instance of tf.distribute.MultiWorkerMirroredStrategy:
End of explanation
import mnist
with strategy.scope():
# Model building needs to be within `strategy.scope()`.
multi_worker_model = mnist.build_cnn_model()
Explanation: Note: 'TF_CONFIG' is parsed and TensorFlow's GRPC servers are started at the time you call tf.distribute.MultiWorkerMirroredStrategy. Therefore, you must set the 'TF_CONFIG' environment variable before you instantiate a tf.distribute.Strategy. To save time in this illustrative example, this is not demonstrated in this tutorial, so that servers do not need to start. You can find a full example in the last section of this tutorial.
Use tf.distribute.Strategy.scope to specify that a strategy should be used when building your model. This allows the strategy to control things like variable placement—it will create copies of all variables in the model's layers on each device across all workers.
End of explanation
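As an optional check that is not part of the original tutorial, you can confirm that variables created under the scope are distributed values rather than plain variables:
# Inspect a couple of the model's variables; under MultiWorkerMirroredStrategy
# they are created as mirrored (replicated) variables.
for var in multi_worker_model.variables[:2]:
    print(type(var).__name__, var.shape)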
per_worker_batch_size = 64
num_workers = len(tf_config['cluster']['worker'])
global_batch_size = per_worker_batch_size * num_workers
with strategy.scope():
multi_worker_dataset = strategy.distribute_datasets_from_function(
lambda input_context: mnist.dataset_fn(global_batch_size, input_context))
Explanation: Auto-shard your data across workers
In multi-worker training, dataset sharding is needed to ensure convergence and reproducibility. Sharding means handing each worker a subset of the entire dataset—it helps create the experience similar to training on a single worker. In the example below, you're relying on the default autosharding policy of tf.distribute. You can also customize it by setting the tf.data.experimental.AutoShardPolicy of the tf.data.experimental.DistributeOptions. To learn more, refer to the Sharding section of the Distributed input tutorial.
End of explanation
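If you did want to override the default policy, a minimal sketch could look like the following (DATA sharding is chosen here purely as an example; in practice you would apply the options to the dataset returned by dataset_fn):
# Optional: request DATA autosharding explicitly instead of the default AUTO.
options = tf.data.Options()
options.experimental_distribute.auto_shard_policy = tf.data.experimental.AutoShardPolicy.DATA
dataset_with_explicit_policy = mnist.mnist_dataset(per_worker_batch_size).batch(per_worker_batch_size).with_options(options)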
with strategy.scope():
# The creation of optimizer and train_accuracy needs to be in
# `strategy.scope()` as well, since they create variables.
optimizer = tf.keras.optimizers.RMSprop(learning_rate=0.001)
train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(
name='train_accuracy')
Explanation: Define a custom training loop and train the model
Specify an optimizer:
End of explanation
@tf.function
def train_step(iterator):
"""Training step function."""
def step_fn(inputs):
"""Per-Replica step function."""
x, y = inputs
with tf.GradientTape() as tape:
predictions = multi_worker_model(x, training=True)
per_batch_loss = tf.keras.losses.SparseCategoricalCrossentropy(
from_logits=True,
reduction=tf.keras.losses.Reduction.NONE)(y, predictions)
loss = tf.nn.compute_average_loss(
per_batch_loss, global_batch_size=global_batch_size)
grads = tape.gradient(loss, multi_worker_model.trainable_variables)
optimizer.apply_gradients(
zip(grads, multi_worker_model.trainable_variables))
train_accuracy.update_state(y, predictions)
return loss
per_replica_losses = strategy.run(step_fn, args=(next(iterator),))
return strategy.reduce(
tf.distribute.ReduceOp.SUM, per_replica_losses, axis=None)
Explanation: Define a training step with tf.function:
End of explanation
from multiprocessing import util
checkpoint_dir = os.path.join(util.get_temp_dir(), 'ckpt')
def _is_chief(task_type, task_id, cluster_spec):
return (task_type is None
or task_type == 'chief'
or (task_type == 'worker'
and task_id == 0
and "chief" not in cluster_spec.as_dict()))
def _get_temp_dir(dirpath, task_id):
base_dirpath = 'workertemp_' + str(task_id)
temp_dir = os.path.join(dirpath, base_dirpath)
tf.io.gfile.makedirs(temp_dir)
return temp_dir
def write_filepath(filepath, task_type, task_id, cluster_spec):
dirpath = os.path.dirname(filepath)
base = os.path.basename(filepath)
if not _is_chief(task_type, task_id, cluster_spec):
dirpath = _get_temp_dir(dirpath, task_id)
return os.path.join(dirpath, base)
Explanation: Checkpoint saving and restoring
As you write a custom training loop, you need to handle checkpoint saving manually instead of relying on a Keras callback. Note that for MultiWorkerMirroredStrategy, saving a checkpoint or a complete model requires the participation of all workers, because attempting to save only on the chief worker could lead to a deadlock. Workers also need to write to different paths to avoid overwriting each other. Here's an example of how to configure the directories:
End of explanation
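For example (with illustrative arguments only), the helper resolves paths like this: the chief writes directly under checkpoint_dir, while any other worker is redirected to a per-task temporary subdirectory that is removed again after saving:
# Illustrative call: worker 1 (non-chief) gets a 'workertemp_1' subdirectory,
# whereas the chief would keep the original path unchanged.
example_path = write_filepath(os.path.join(checkpoint_dir, 'ckpt'), 'worker', 1,
                              tf.train.ClusterSpec(tf_config['cluster']))
print(example_path)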
epoch = tf.Variable(
initial_value=tf.constant(0, dtype=tf.dtypes.int64), name='epoch')
step_in_epoch = tf.Variable(
initial_value=tf.constant(0, dtype=tf.dtypes.int64),
name='step_in_epoch')
task_type, task_id = (strategy.cluster_resolver.task_type,
strategy.cluster_resolver.task_id)
# Normally, you don't need to manually instantiate a `ClusterSpec`, but in this
# illustrative example you did not set `'TF_CONFIG'` before initializing the
# strategy. Check out the next section for "real-world" usage.
cluster_spec = tf.train.ClusterSpec(tf_config['cluster'])
checkpoint = tf.train.Checkpoint(
model=multi_worker_model, epoch=epoch, step_in_epoch=step_in_epoch)
write_checkpoint_dir = write_filepath(checkpoint_dir, task_type, task_id,
cluster_spec)
checkpoint_manager = tf.train.CheckpointManager(
checkpoint, directory=write_checkpoint_dir, max_to_keep=1)
Explanation: Create one tf.train.Checkpoint that tracks the model, which is managed by a tf.train.CheckpointManager, so that only the latest checkpoints are preserved:
End of explanation
latest_checkpoint = tf.train.latest_checkpoint(checkpoint_dir)
if latest_checkpoint:
checkpoint.restore(latest_checkpoint)
Explanation: Now, when you need to restore a checkpoint, you can find the latest checkpoint saved using the convenient tf.train.latest_checkpoint function (or by calling tf.train.CheckpointManager.restore_or_initialize).
End of explanation
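Equivalently, the convenience call mentioned above can be used on the manager itself; note that it restores from the manager's own directory. A sketch:
# Alternative: let the CheckpointManager restore its newest checkpoint, or
# initialize from scratch when none is present (returns the restored path or None).
restored_path = checkpoint_manager.restore_or_initialize()
print('Restored from:', restored_path)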
num_epochs = 3
num_steps_per_epoch = 70
while epoch.numpy() < num_epochs:
iterator = iter(multi_worker_dataset)
total_loss = 0.0
num_batches = 0
while step_in_epoch.numpy() < num_steps_per_epoch:
total_loss += train_step(iterator)
num_batches += 1
step_in_epoch.assign_add(1)
train_loss = total_loss / num_batches
print('Epoch: %d, accuracy: %f, train_loss: %f.'
%(epoch.numpy(), train_accuracy.result(), train_loss))
train_accuracy.reset_states()
# Once the `CheckpointManager` is set up, you're now ready to save, and remove
# the checkpoints non-chief workers saved.
checkpoint_manager.save()
if not _is_chief(task_type, task_id, cluster_spec):
tf.io.gfile.rmtree(write_checkpoint_dir)
epoch.assign_add(1)
step_in_epoch.assign(0)
Explanation: After restoring the checkpoint, you can continue with training your custom training loop.
End of explanation
%%writefile main.py
#@title File: `main.py`
import os
import json
import tensorflow as tf
import mnist
from multiprocessing import util
per_worker_batch_size = 64
tf_config = json.loads(os.environ['TF_CONFIG'])
num_workers = len(tf_config['cluster']['worker'])
global_batch_size = per_worker_batch_size * num_workers
num_epochs = 3
num_steps_per_epoch=70
# Checkpoint saving and restoring
def _is_chief(task_type, task_id, cluster_spec):
return (task_type is None
or task_type == 'chief'
or (task_type == 'worker'
and task_id == 0
and 'chief' not in cluster_spec.as_dict()))
def _get_temp_dir(dirpath, task_id):
base_dirpath = 'workertemp_' + str(task_id)
temp_dir = os.path.join(dirpath, base_dirpath)
tf.io.gfile.makedirs(temp_dir)
return temp_dir
def write_filepath(filepath, task_type, task_id, cluster_spec):
dirpath = os.path.dirname(filepath)
base = os.path.basename(filepath)
if not _is_chief(task_type, task_id, cluster_spec):
dirpath = _get_temp_dir(dirpath, task_id)
return os.path.join(dirpath, base)
checkpoint_dir = os.path.join(util.get_temp_dir(), 'ckpt')
# Define Strategy
strategy = tf.distribute.MultiWorkerMirroredStrategy()
with strategy.scope():
# Model building/compiling needs to be within `tf.distribute.Strategy.scope`.
multi_worker_model = mnist.build_cnn_model()
multi_worker_dataset = strategy.distribute_datasets_from_function(
lambda input_context: mnist.dataset_fn(global_batch_size, input_context))
optimizer = tf.keras.optimizers.RMSprop(learning_rate=0.001)
train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(
name='train_accuracy')
@tf.function
def train_step(iterator):
"""Training step function."""
def step_fn(inputs):
"""Per-Replica step function."""
x, y = inputs
with tf.GradientTape() as tape:
predictions = multi_worker_model(x, training=True)
per_batch_loss = tf.keras.losses.SparseCategoricalCrossentropy(
from_logits=True,
reduction=tf.keras.losses.Reduction.NONE)(y, predictions)
loss = tf.nn.compute_average_loss(
per_batch_loss, global_batch_size=global_batch_size)
grads = tape.gradient(loss, multi_worker_model.trainable_variables)
optimizer.apply_gradients(
zip(grads, multi_worker_model.trainable_variables))
train_accuracy.update_state(y, predictions)
return loss
per_replica_losses = strategy.run(step_fn, args=(next(iterator),))
return strategy.reduce(
tf.distribute.ReduceOp.SUM, per_replica_losses, axis=None)
epoch = tf.Variable(
initial_value=tf.constant(0, dtype=tf.dtypes.int64), name='epoch')
step_in_epoch = tf.Variable(
initial_value=tf.constant(0, dtype=tf.dtypes.int64),
name='step_in_epoch')
task_type, task_id, cluster_spec = (strategy.cluster_resolver.task_type,
strategy.cluster_resolver.task_id,
strategy.cluster_resolver.cluster_spec())
checkpoint = tf.train.Checkpoint(
model=multi_worker_model, epoch=epoch, step_in_epoch=step_in_epoch)
write_checkpoint_dir = write_filepath(checkpoint_dir, task_type, task_id,
cluster_spec)
checkpoint_manager = tf.train.CheckpointManager(
checkpoint, directory=write_checkpoint_dir, max_to_keep=1)
# Restoring the checkpoint
latest_checkpoint = tf.train.latest_checkpoint(checkpoint_dir)
if latest_checkpoint:
checkpoint.restore(latest_checkpoint)
# Resume our CTL training
while epoch.numpy() < num_epochs:
iterator = iter(multi_worker_dataset)
total_loss = 0.0
num_batches = 0
while step_in_epoch.numpy() < num_steps_per_epoch:
total_loss += train_step(iterator)
num_batches += 1
step_in_epoch.assign_add(1)
train_loss = total_loss / num_batches
print('Epoch: %d, accuracy: %f, train_loss: %f.'
%(epoch.numpy(), train_accuracy.result(), train_loss))
train_accuracy.reset_states()
checkpoint_manager.save()
if not _is_chief(task_type, task_id, cluster_spec):
tf.io.gfile.rmtree(write_checkpoint_dir)
epoch.assign_add(1)
step_in_epoch.assign(0)
Explanation: Complete code at a glance
To sum up all the procedures discussed so far:
You create worker processes.
Pass 'TF_CONFIG's to the worker processes.
Let each work process run the script below that contains the training code.
End of explanation
%%bash
ls *.py
Explanation: The current directory now contains both Python files:
End of explanation
os.environ['TF_CONFIG'] = json.dumps(tf_config)
Explanation: So JSON-serialize the 'TF_CONFIG' and add it to the environment variables:
End of explanation
# first kill any previous runs
%killbgscripts
%%bash --bg
python main.py &> job_0.log
Explanation: Now, you can launch a worker process that will run the main.py and use the 'TF_CONFIG':
End of explanation
import time
time.sleep(20)
Explanation: There are a few things to note about the above command:
It uses %%bash, which is a notebook "magic" for running bash commands.
It uses the --bg flag to run the bash process in the background, because this worker will not terminate. It waits for all the workers before it starts.
The backgrounded worker process won't print the output to this notebook. The &> redirects its output to a file, so that you can inspect what happened.
Wait a few seconds for the process to start up:
End of explanation
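If you prefer to stay in Python, roughly the same launch could be done with subprocess instead of the %%bash magic. This is only a sketch and writes to a separate, hypothetical log file:
# Hypothetical pure-Python alternative to the %%bash --bg cell above.
import subprocess
worker_proc = subprocess.Popen(
    [sys.executable, 'main.py'],
    stdout=open('job_0_alt.log', 'w'), stderr=subprocess.STDOUT,
    env={**os.environ, 'TF_CONFIG': json.dumps(tf_config)})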
%%bash
cat job_0.log
Explanation: Now, check the output to the worker's log file so far:
End of explanation
tf_config['task']['index'] = 1
os.environ['TF_CONFIG'] = json.dumps(tf_config)
Explanation: The last line of the log file should say: Started server with target: grpc://localhost:12345. The first worker is now ready, and is waiting for all the other worker(s) to be ready to proceed.
Update the tf_config for the second worker's process to pick up:
End of explanation
%%bash
python main.py > /dev/null 2>&1
Explanation: Now launch the second worker. This will start the training since all the workers are active (so there's no need to background this process):
End of explanation
%%bash
cat job_0.log
# Delete the `'TF_CONFIG'`, and kill any background tasks so they don't affect the next section.
os.environ.pop('TF_CONFIG', None)
%killbgscripts
Explanation: If you recheck the logs written by the first worker, notice that it participated in training that model:
End of explanation |
376 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
numpy scipy pandas matplotlib scikit-learn
NumPy
Step1: Unlike Python lists (which are limited to one dimension), NumPy arrays can be multi-dimensional. For example, here we will reshape our x array into a 3x3 array
Step2: A two-dimensional array is one representation of a matrix, and NumPy knows how to efficiently do typical matrix operations. For example, you can compute the transpose using .T
Step3: Pandas | Python Code:
import numpy as np
x = np.arange(1, 10)
x
x ** 2
Explanation: numpy scipy pandas matplotlib scikit-learn
NumPy: Numerical Python
NumPy provides an efficient way to store and manipulate multi-dimensional dense arrays in Python. The important features of NumPy are:
It provides an ndarray structure, which allows efficient storage and manipulation of vectors, matrices, and higher-dimensional datasets.
It provides a readable and efficient syntax for operating on this data, from simple element-wise arithmetic to more complicated linear algebraic operations.
End of explanation
M = x.reshape((3, 3))
M
Explanation: Unlike Python lists (which are limited to one dimension), NumPy arrays can be multi-dimensional. For example, here we will reshape our x array into a 3x3 array:
End of explanation
M.T
Explanation: A two-dimensional array is one representation of a matrix, and NumPy knows how to efficiently do typical matrix operations. For example, you can compute the transpose using .T:
End of explanation
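Other linear-algebra operations follow the same pattern; for example, a matrix product (a small illustrative addition, not part of the original text):
# Matrix product of M with its transpose, using the @ operator.
M @ M.T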
import pandas as pd
df = pd.DataFrame({'label': ['A', 'B', 'C', 'A', 'B', 'C'],
'value': [1, 2, 3, 4, 5, 6]})
df
df['label']
df['value'].sum()
df.groupby('label').sum()
Explanation: Pandas: Labeled Column-oriented Data
Pandas is a much newer package than NumPy, and is in fact built on top of it. What Pandas provides is a labeled interface to multi-dimensional data, in the form of a DataFrame object that will feel very familiar to users of R and related languages. DataFrames in Pandas look something like this:
End of explanation |
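A couple of further illustrative operations on the same DataFrame (again not part of the original text) show the label-based style:
# Per-label mean of the 'value' column, and boolean-mask row selection.
df.groupby('label')['value'].mean()
df[df['value'] > 3]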
377 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Toplevel
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required
Step7: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required
Step8: 3.2. CMIP3 Parent
Is Required
Step9: 3.3. CMIP5 Parent
Is Required
Step10: 3.4. Previous Name
Is Required
Step11: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required
Step12: 4.2. Code Version
Is Required
Step13: 4.3. Code Languages
Is Required
Step14: 4.4. Components Structure
Is Required
Step15: 4.5. Coupler
Is Required
Step16: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required
Step17: 5.2. Atmosphere Double Flux
Is Required
Step18: 5.3. Atmosphere Fluxes Calculation Grid
Is Required
Step19: 5.4. Atmosphere Relative Winds
Is Required
Step20: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required
Step21: 6.2. Global Mean Metrics Used
Is Required
Step22: 6.3. Regional Metrics Used
Is Required
Step23: 6.4. Trend Metrics Used
Is Required
Step24: 6.5. Energy Balance
Is Required
Step25: 6.6. Fresh Water Balance
Is Required
Step26: 7. Key Properties --> Conservation --> Heat
Global heat convervation properties of the model
7.1. Global
Is Required
Step27: 7.2. Atmos Ocean Interface
Is Required
Step28: 7.3. Atmos Land Interface
Is Required
Step29: 7.4. Atmos Sea-ice Interface
Is Required
Step30: 7.5. Ocean Seaice Interface
Is Required
Step31: 7.6. Land Ocean Interface
Is Required
Step32: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water convervation properties of the model
8.1. Global
Is Required
Step33: 8.2. Atmos Ocean Interface
Is Required
Step34: 8.3. Atmos Land Interface
Is Required
Step35: 8.4. Atmos Sea-ice Interface
Is Required
Step36: 8.5. Ocean Seaice Interface
Is Required
Step37: 8.6. Runoff
Is Required
Step38: 8.7. Iceberg Calving
Is Required
Step39: 8.8. Endoreic Basins
Is Required
Step40: 8.9. Snow Accumulation
Is Required
Step41: 9. Key Properties --> Conservation --> Salt
Global salt convervation properties of the model
9.1. Ocean Seaice Interface
Is Required
Step42: 10. Key Properties --> Conservation --> Momentum
Global momentum convervation properties of the model
10.1. Details
Is Required
Step43: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required
Step44: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required
Step45: 12.2. Additional Information
Is Required
Step46: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required
Step47: 13.2. Additional Information
Is Required
Step48: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required
Step49: 14.2. Additional Information
Is Required
Step50: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Troposheric ozone forcing
15.1. Provision
Is Required
Step51: 15.2. Additional Information
Is Required
Step52: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required
Step53: 16.2. Additional Information
Is Required
Step54: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required
Step55: 17.2. Equivalence Concentration
Is Required
Step56: 17.3. Additional Information
Is Required
Step57: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required
Step58: 18.2. Additional Information
Is Required
Step59: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required
Step60: 19.2. Additional Information
Is Required
Step61: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required
Step62: 20.2. Additional Information
Is Required
Step63: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required
Step64: 21.2. Additional Information
Is Required
Step65: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required
Step66: 22.2. Aerosol Effect On Ice Clouds
Is Required
Step67: 22.3. Additional Information
Is Required
Step68: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required
Step69: 23.2. Aerosol Effect On Ice Clouds
Is Required
Step70: 23.3. RFaci From Sulfate Only
Is Required
Step71: 23.4. Additional Information
Is Required
Step72: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required
Step73: 24.2. Additional Information
Is Required
Step74: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required
Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step76: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step77: 25.4. Additional Information
Is Required
Step78: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required
Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step80: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step81: 26.4. Additional Information
Is Required
Step82: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required
Step83: 27.2. Additional Information
Is Required
Step84: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required
Step85: 28.2. Crop Change Only
Is Required
Step86: 28.3. Additional Information
Is Required
Step87: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required
Step88: 29.2. Additional Information
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'messy-consortium', 'sandbox-1', 'toplevel')
Explanation: ES-DOC CMIP6 Model Properties - Toplevel
MIP Era: CMIP6
Institute: MESSY-CONSORTIUM
Source ID: SANDBOX-1
Sub-Topics: Radiative Forcings.
Properties: 85 (42 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:10
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
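For example, with placeholder values only (replace these with the real author details):
# Illustrative placeholder values - not real author information.
DOC.set_author("Jane Doe", "jane.doe@example.org")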
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Top level overview of coupled model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of coupled model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how flux corrections are applied in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required: TRUE Type: STRING Cardinality: 1.1
Year the model was released
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. CMIP3 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP3 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. CMIP5 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP5 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Previous Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Previously known as
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.4. Components Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.5. Coupler
Is Required: FALSE Type: ENUM Cardinality: 0.1
Overarching coupling framework for model.
End of explanation
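For instance, if the model were coupled with OASIS3-MCT (an arbitrary pick from the valid choices listed above), the call would be:
# Example only - choose the value that actually applies to this model.
DOC.set_value("OASIS3-MCT")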
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of coupling in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.2. Atmosphere Double Flux
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.3. Atmosphere Fluxes Calculation Grid
Is Required: FALSE Type: ENUM Cardinality: 0.1
Where are the air-sea fluxes calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Atmosphere Relative Winds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics/diagnostics of the global mean state used in tuning model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics/diagnostics used in tuning model/component (such as 20th century)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.5. Energy Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. Fresh Water Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how fresh_water balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Conservation --> Heat
Global heat convervation properties of the model
7.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.6. Land Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the land/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Runoff
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how runoff is distributed and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Iceberg Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how iceberg calving is modeled and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Endoreic Basins
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how endoreic basins (no ocean access) are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Snow Accumulation
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how snow accumulation over land and over sea-ice is treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how salt is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how momentum is conserved in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative forcings (GHG and aerosols) implementation in model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
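For illustration only (the real entry depends on how this particular model treats CO2), a group whose CO2 is concentration-driven might record one of the codes listed above like this:
# hypothetical example only - uncomment and use the code(s) that actually apply to your model
# DOC.set_value("C")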
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Equivalence Concentration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Details of any equivalence concentrations used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.3. RFaci From Sulfate Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative forcing from aerosol cloud interactions from sulfate aerosol only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28.2. Crop Change Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Land use change represented via crop change only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How solar forcing is provided
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation |
378 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<i class="fa fa-book"></i> Primero librerias
Step1: <i class="fa fa-database"></i> Vamos a crear datos de jugete
Crea varios "blobs"
recuerda la funcion de scikit-learn datasets.make_blobs()
Tambien prueba
python
centers = [[1, 1], [-1, -1], [1, -1]]
X,Y = datasets.make_blobs(n_samples=10000, centers=centers, cluster_std=0.6)
Step2: <i class="fa fa-tree"></i> Ahora vamos a crear un modelo de arbol
podemos usar DecisionTreeClassifier como clasificador
Step3: <i class="fa fa-question-circle"></i> Que parametros y funciones tiene el classificador?
Hint
Step4: vamos a ajustar nuestro modelo con fit y sacar su puntaje con score
Step5: <i class="fa fa-question-circle"></i>
Por que no queremos 100%?
Este problema se llama "Overfitting"
<i class="fa fa-list"></i> Pasos para un tipico algoritmo ML
Step6: cuales son los tamanios de estos nuevos datos?
Step7: y ahora entrenamos nuestro modelo y checamos el error
Step8: <i class="fa fa-question-circle"></i>
Como se ve nuestro modelo?
Que fue mas importante para hacer una decision?
Como podemos mejorar y controlar como dividimos nuestros datos?
Step9: Validación cruzada y
K-fold
Y lo mejor es que podemos hacer todo de usa sola patada con sci-kit!
Hay que usar cross_val_score
Step10: <i class="fa fa-question-circle"></i>
Y como podemos mejorar un arbol de decision?
RandomForestClassifier(n_estimators=n_estimators) Al rescate!
Step11: a probarlo!
Step12: mejoro?
Step13: Pero ahora tenemos un parametro nuevo, cuantos arboles queremos usar?
<i class="fa fa-tree"></i>,<i class="fa fa-tree"></i>,<i class="fa fa-tree"></i> ...
Que tal si probamos con un for loop!? Y checamos el error conforme al numero de arboles?
Actividad!
Hay que
Step14: <i class="fa fa-pagelines"></i> El conjunto de datos Iris
Un modelo multi-dimensional
Step15: Actividad | Python Code:
import numpy as np
import sklearn as sk
import matplotlib.pyplot as plt
import sklearn.datasets as datasets
import seaborn as sns
%matplotlib inline
Explanation: <i class="fa fa-book"></i> First, the libraries
End of explanation
centers = [[1, 1], [-1, -1], [1, -1]]
X,Y = datasets.make_blobs(n_samples=1000, centers=centers, cluster_std=0.6)
plt.scatter(X[:,0],X[:,1],c=Y)
plt.jet()
Explanation: <i class="fa fa-database"></i> Let's create some toy data
Create several "blobs"
remember the scikit-learn function datasets.make_blobs()
Also try
python
centers = [[1, 1], [-1, -1], [1, -1]]
X,Y = datasets.make_blobs(n_samples=10000, centers=centers, cluster_std=0.6)
End of explanation
from sklearn.tree import DecisionTreeClassifier
clf = DecisionTreeClassifier()
Explanation: <i class="fa fa-tree"></i> Now let's create a tree model
we can use DecisionTreeClassifier as the classifier
End of explanation
help(clf)
Explanation: <i class="fa fa-question-circle"></i> What parameters and functions does the classifier have?
Hint: use help(thing)!
End of explanation
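If only the hyperparameters are of interest, a shorter alternative to help() is scikit-learn's standard get_params() method (this line is an addition, not part of the original notebook):
print(clf.get_params())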
clf.fit(X,Y)
clf.score(X,Y)
clf = DecisionTreeClassifier()
clf.fit(X[:1000],Y[:1000])
clf.score(X,Y)*100
Explanation: let's fit our model with fit and get its score with score
End of explanation
from sklearn.cross_validation import train_test_split
X_train,X_test, Y_train, Y_test= train_test_split(X,Y,test_size=0.90)
Explanation: <i class="fa fa-question-circle"></i>
Why don't we want 100%?
This problem is called "Overfitting"
<i class="fa fa-list"></i> Steps of a typical ML algorithm:
Create a model
Split your data into different pieces (10% train and 90% test)
Train your model on each piece of the data
Pick the best model or the average of the models
Predict!
First let's split the data using
End of explanation
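Note: sklearn.cross_validation was deprecated and later removed from scikit-learn; in current releases the same utilities (train_test_split here, and cross_val_score used further down) live in sklearn.model_selection:
from sklearn.model_selection import train_test_split, cross_val_score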
plt.scatter(X_test[:,0],X_test[:,1],c=Y_test)
Explanation: what are the sizes of these new datasets?
End of explanation
clf = DecisionTreeClassifier()
clf.fit(X_train,Y_train)
clf.score(X_test,Y_test)*100
Explanation: and now we train our model and check the error
End of explanation
clf.feature_importances_
Explanation: <i class="fa fa-question-circle"></i>
What does our model look like?
What was most important for making a decision?
How can we improve and control how we split our data?
End of explanation
from sklearn.cross_validation import cross_val_score
resultados = cross_val_score(clf,X,Y, cv=10)
np.mean(resultados)
Explanation: Cross-validation and
K-fold
And best of all, we can do everything in one shot with scikit-learn!
We need to use cross_val_score
End of explanation
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier()
print(clf)
Explanation: <i class="fa fa-question-circle"></i>
And how can we improve on a decision tree?
RandomForestClassifier(n_estimators=n_estimators) to the rescue!
End of explanation
resultados = cross_val_score(clf,X,Y, cv=10)
Explanation: let's try it!
End of explanation
resultados.mean()
Explanation: did it improve?
End of explanation
ks=[2,3,5,8,10,12,15,18,20,25,30,35,40,45,50]
scores=[]
for i in ks:
    clf = RandomForestClassifier(n_estimators=i)
    resultados = cross_val_score(clf,X,Y, cv=10)
    scores.append( np.mean(resultados) )
plt.plot(ks,scores)
Explanation: But now we have a new parameter: how many trees do we want to use?
<i class="fa fa-tree"></i>,<i class="fa fa-tree"></i>,<i class="fa fa-tree"></i> ...
What if we try a for loop!? And check the error as a function of the number of trees?
Activity!
We need to:
Define the range of trees to try in an array
do a for loop over this array
For each element, train a forest and get the score
Save the score in a list
plot it!
End of explanation
g = sns.PairGrid(iris, hue="species")
g = g.map(plt.scatter)
g = g.add_legend()
Explanation: <i class="fa fa-pagelines"></i> The Iris dataset
A multi-dimensional model
End of explanation
iris = datasets.load_iris()
X = iris.data
Y = iris.target
Explanation: Activity:
Objective: Train a tree to predict the species of the plant
Check the plots: which variables might be most important?
Grab the data: what are its dimensions?
Split it into pieces and train your models
What scores do you get? What turned out to be important?
End of explanation |
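A possible solution sketch for the activity above (illustrative only - it reuses X and Y from the previous cell; the split size and number of trees are arbitrary choices, not taken from the original):
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
# hold out 30% of the iris data, train a forest, then check accuracy and feature importances
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.3)
clf = RandomForestClassifier(n_estimators=30)
clf.fit(X_train, Y_train)
print(clf.score(X_test, Y_test))
print(clf.feature_importances_)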
379 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Parameter selection, Validation, and Testing
Most models have parameters that influence how complex a model they can learn. Remember using KNeighborsRegressor.
If we change the number of neighbors we consider, we get a smoother and smoother prediction
Step1: There is a function in scikit-learn, called validation_plot to reproduce the cartoon figure above. It plots one parameter, such as the number of neighbors, against training and validation error (using cross-validation)
Step2: Note that many neighbors mean a "smooth" or "simple" model, so the plot is the mirror image of the diagram above.
If multiple parameters are important, like the parameters C and gamma in an SVM (more about that later), all possible combinations are tried
Step3: As this is such a very common pattern, there is a built-in class for this in scikit-learn, GridSearchCV. GridSearchCV takes a dictionary that describes the parameters that should be tried and a model to train.
The grid of parameters is defined as a dictionary, where the keys are the parameters and the values are the settings to be tested.
Step4: One of the great things about GridSearchCV is that it is a meta-estimator. It takes an estimator like SVR above, and creates a new estimator, that behaves exactly the same - in this case, like a regressor.
So we can call fit on it, to train it
Step5: What fit does is a bit more involved then what we did above. First, it runs the same loop with cross-validation, to find the best parameter combination.
Once it has the best combination, it runs fit again on all data passed to fit (without cross-validation), to built a single new model using the best parameter setting.
Then, as with all models, we can use predict or score
Step6: You can inspect the best parameters found by GridSearchCV in the best_params_ attribute, and the best score in the best_score_ attribute
Step7: There is a problem with using this score for evaluation, however. You might be making what is called a multiple hypothesis testing error. If you try very many parameter settings, some of them will work better just by chance, and the score that you obtained might not reflect how your model would perform on new unseen data.
Therefore, it is good to split off a separate test-set before performing grid-search. This pattern can be seen as a training-validation-test split, and is common in machine learning
Step8: We can also look at the parameters that were selected
Step9: Some practitioners go for an easier scheme, splitting the data simply into three parts, training, validation and testing. This is a possible alternative if your training set is very large, or it is infeasible to train many models using cross-validation because training a model takes very long.
You can do this with scikit-learn for example by splitting off a test set and then applying GridSearchCV with ShuffleSplit cross-validation with a single iteration
Step10: This is much faster, but might result in worse hyperparameters and therefore worse results. | Python Code:
from sklearn.model_selection import cross_val_score, KFold
from sklearn.neighbors import KNeighborsRegressor
# generate toy dataset:
x = np.linspace(-3, 3, 100)
rng = np.random.RandomState(42)
y = np.sin(4 * x) + x + rng.normal(size=len(x))
X = x[:, np.newaxis]
cv = KFold(shuffle=True)
# for each parameter setting do cross_validation:
for n_neighbors in [1, 3, 5, 10, 20]:
    scores = cross_val_score(KNeighborsRegressor(n_neighbors=n_neighbors), X, y, cv=cv)
    print("n_neighbors: %d, average score: %f" % (n_neighbors, np.mean(scores)))
Explanation: Parameter selection, Validation, and Testing
Most models have parameters that influence how complex a model they can learn. Remember using KNeighborsRegressor.
If we change the number of neighbors we consider, we get a smoother and smoother prediction:
<img src="figures/plot_kneigbors_regularization.png" width="100%">
In the above figure, we see fits for three different values of n_neighbors.
For n_neighbors=2, the data is overfit, the model is too flexible and can adjust too much to the noise in the training data. For n_neighbors=20, the model is not flexible enough, and can not model the variation in the data appropriately.
In the middle, for n_neighbors = 5, we have found a good mid-point. It fits
the data fairly well, and does not suffer from the overfit or underfit
problems seen in the figures on either side. What we would like is a
way to quantitatively identify overfit and underfit, and optimize the
hyperparameters (in this case, the polynomial degree d) in order to
determine the best algorithm.
We trade off remembering too much about the particularities and noise of the training data vs. not modeling enough of the variability. This is a trade-off that needs to be made in basically every machine learning application and is a central concept, called bias-variance-tradeoff or "overfitting vs underfitting".
<img src="figures/overfitting_underfitting_cartoon.svg" width="100%">
Hyperparameters, Over-fitting, and Under-fitting
Unfortunately, there is no general rule how to find the sweet spot, and so machine learning practitioners have to find the best trade-off of model-complexity and generalization by trying several hyperparameter settings. Hyperparameters are the internal knobs or tuning parameters of a machine learning algorithm (in contrast to model parameters that the algorithm learns from the training data -- for example, the weight coefficients of a linear regression model); the number of k in K-nearest neighbors is such a hyperparameter.
Most commonly this "hyperparameter tuning" is done using a brute force search, for example over multiple values of n_neighbors:
End of explanation
from sklearn.model_selection import validation_curve
n_neighbors = [1, 3, 5, 10, 20, 50]
train_errors, test_errors = validation_curve(KNeighborsRegressor(), X, y, param_name="n_neighbors",
param_range=n_neighbors, cv=cv)
plt.plot(n_neighbors, train_errors.mean(axis=1), label="train error")
plt.plot(n_neighbors, test_errors.mean(axis=1), label="test error")
plt.legend(loc="best")
Explanation: There is a function in scikit-learn, called validation_curve, to reproduce the cartoon figure above. It plots one parameter, such as the number of neighbors, against training and validation error (using cross-validation):
End of explanation
from sklearn.model_selection import cross_val_score, KFold
from sklearn.svm import SVR
# for each parameter setting do cross_validation:
for C in [0.001, 0.01, 0.1, 1, 10]:
    for gamma in [0.001, 0.01, 0.1, 1]:
        scores = cross_val_score(SVR(C=C, gamma=gamma), X, y, cv=cv)
        print("C: %f, gamma: %f, average score: %f" % (C, gamma, np.mean(scores)))
Explanation: Note that many neighbors mean a "smooth" or "simple" model, so the plot is the mirror image of the diagram above.
If multiple parameters are important, like the parameters C and gamma in an SVM (more about that later), all possible combinations are tried:
End of explanation
from sklearn.model_selection import GridSearchCV
param_grid = {'C': [0.001, 0.01, 0.1, 1, 10], 'gamma': [0.001, 0.01, 0.1, 1]}
grid = GridSearchCV(SVR(), param_grid=param_grid, cv=cv, verbose=3)
Explanation: As this is such a very common pattern, there is a built-in class for this in scikit-learn, GridSearchCV. GridSearchCV takes a dictionary that describes the parameters that should be tried and a model to train.
The grid of parameters is defined as a dictionary, where the keys are the parameters and the values are the settings to be tested.
End of explanation
grid.fit(X, y)
Explanation: One of the great things about GridSearchCV is that it is a meta-estimator. It takes an estimator like SVR above, and creates a new estimator, that behaves exactly the same - in this case, like a regressor.
So we can call fit on it, to train it:
End of explanation
grid.predict(X)
Explanation: What fit does is a bit more involved than what we did above. First, it runs the same loop with cross-validation, to find the best parameter combination.
Once it has the best combination, it runs fit again on all data passed to fit (without cross-validation), to build a single new model using the best parameter setting.
Then, as with all models, we can use predict or score:
End of explanation
print(grid.best_score_)
print(grid.best_params_)
Explanation: You can inspect the best parameters found by GridSearchCV in the best_params_ attribute, and the best score in the best_score_ attribute:
End of explanation
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
param_grid = {'C': [0.001, 0.01, 0.1, 1, 10], 'gamma': [0.001, 0.01, 0.1, 1]}
cv = KFold(n_splits=10, shuffle=True)
grid = GridSearchCV(SVR(), param_grid=param_grid, cv=cv)
grid.fit(X_train, y_train)
grid.score(X_test, y_test)
Explanation: There is a problem with using this score for evaluation, however. You might be making what is called a multiple hypothesis testing error. If you try very many parameter settings, some of them will work better just by chance, and the score that you obtained might not reflect how your model would perform on new unseen data.
Therefore, it is good to split off a separate test-set before performing grid-search. This pattern can be seen as a training-validation-test split, and is common in machine learning:
<img src="figures/grid_search_cross_validation.svg" width="100%">
We can do this very easily by splitting off some test data using train_test_split, training GridSearchCV on the training set, and applying the score method to the test set:
End of explanation
grid.best_params_
Explanation: We can also look at the parameters that were selected:
End of explanation
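The full grid of results is also exposed through the standard cv_results_ attribute, if a per-combination view helps (this snippet is an addition to the original notebook):
import pandas as pd
# one row per (C, gamma) combination with the mean cross-validated score
pd.DataFrame(grid.cv_results_)[['param_C', 'param_gamma', 'mean_test_score']]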
from sklearn.model_selection import train_test_split, ShuffleSplit
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
param_grid = {'C': [0.001, 0.01, 0.1, 1, 10], 'gamma': [0.001, 0.01, 0.1, 1]}
single_split_cv = ShuffleSplit(n_splits=1)
grid = GridSearchCV(SVR(), param_grid=param_grid, cv=single_split_cv, verbose=3)
grid.fit(X_train, y_train)
grid.score(X_test, y_test)
Explanation: Some practitioners go for an easier scheme, splitting the data simply into three parts, training, validation and testing. This is a possible alternative if your training set is very large, or it is infeasible to train many models using cross-validation because training a model takes very long.
You can do this with scikit-learn for example by splitting off a test set and then applying GridSearchCV with ShuffleSplit cross-validation with a single iteration:
<img src="figures/train_validation_test2.svg" width="100%">
End of explanation
clf = GridSearchCV(SVR(), param_grid=param_grid)
clf.fit(X_train, y_train)
clf.score(X_test, y_test)
Explanation: This is much faster, but might result in worse hyperparameters and therefore worse results.
End of explanation |
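A minimal sketch of the plain training/validation/test split described above, using two train_test_split calls (the default proportions here are arbitrary assumptions, not taken from the original):
from sklearn.model_selection import train_test_split
X_trainval, X_test, y_trainval, y_test = train_test_split(X, y, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X_trainval, y_trainval, random_state=0)
# fit candidate models on (X_train, y_train), choose hyperparameters on (X_val, y_val),
# and report the final score exactly once on (X_test, y_test)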
380 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Make Hazard Curves and Maps
This notebook illustrates how to make hazard curves and hazard maps by combining results from several events.
First set up some things needed in notebook....
Step1: Read in the topography data and define a function to make a contour plot
Step2: Read in image of Crescent City as background for plots
Step3: Set the exceedance values
This should be a list or array of values $\zeta$ (zeta) representing depth of flooding on shore, or elevation above sea level offshore (in meters). The hazard curves will be computed by determining the annual probability that the maximum $\zeta$ observed at each spatial point is above $\zeta_k$, for each value $\zeta_k$ in this list.
Step4: Set the desired annual probability for each event
Note that we are only using 14 events for this workshop. The probabilities have been adjusted accordingly.
event_prob is a Python dictionary. It is initialized to an empty dictionary and then we set event_prob[key] = value where the keys are the names of the hypothetical events and the associated value is the annual probability.
Step6: Define a function to combine two events
Step7: Specify the set of events to include in computing hazard curves
Step8: Compute the combined probability of exceeding each exceedance value
Step9: Plot hazard curves
The array exceed_prob[i,j,
Step10: Plot the hazard curve for one location
Step11: Interactive viewer to move the point around
Step12: Hazard Maps
If we fix k then exceed_prob[
Step13: Plot a sample probability map for one exceendance value
Step14: Interactive viewer of all hazard maps
Step15: Inundation maps for given probability
Step16: Plot a sample map
Step17: Interactive viewer for a range of probabilities | Python Code:
%pylab inline
from __future__ import print_function
from ptha_paths import data_dir, events_dir
import sys, os
from ipywidgets import interact
from IPython.display import Image, display
Explanation: Make Hazard Curves and Maps
This notebook illustrates how to make hazard curves and hazard maps by combining results from several events.
First set up some things needed in notebook....
End of explanation
# Read in topography data:
fixed_grid_file = os.path.join(data_dir, 'MapsTopo', 'fixedgrid_xyB_small.npy')
d=load(fixed_grid_file)
x=d[:,0]
y=d[:,1]
B=d[:,2]
topo = reshape(B, (250,250), order='F')
X = reshape(x, (250,250), order='F')
Y = reshape(y, (250,250), order='F')
def plot_topo():
    fig = figure(figsize=(6,6))
    ax = axes()
    topo_clines = arange(0,20,2)
    contour(X,Y,topo,topo_clines,colors='k')
    CClatitude = 41.75  # to rescale longitude
    ax.set_aspect(1. / cos(pi*CClatitude/180.))
    ax.ticklabel_format(format='plain',useOffset=False)
    return fig
Explanation: Read in the topography data and define a function to make a contour plot:
End of explanation
CCmap = imread('%s/MapsTopo/CCimage.png' % data_dir)
extent = (235.79781, 235.82087, 41.739671,41.762726) #small region
def plot_CCmap():
    fig = figure(figsize=(6,6))
    ax = axes()
    imshow(CCmap,extent=extent)
    CClatitude = 41.75  # to rescale longitude
    ax.set_aspect(1. / cos(pi*CClatitude/180.))
    ax.ticklabel_format(format='plain',useOffset=False)
    axis(extent)
    return fig
Explanation: Read in image of Crescent City as background for plots
End of explanation
# these levels were used in original study:
#zeta = hstack((arange(0,2.,.1), arange(2.0,12.5,.5)))
# you get nicer looking curves by using a denser set of exceedance values:
zeta = linspace(0,12,121)
nzeta = len(zeta)
print('%i exceedance values, \nzeta = %s' % (nzeta,zeta))
Explanation: Set the exceedance values
This should be a list or array of values $\zeta$ (zeta) representing depth of flooding on shore, or elevation above sea level offshore (in meters). The hazard curves will be computed by determining the annual probability that the maximum $\zeta$ observed at each spatial point is above $\zeta_k$, for each value $\zeta_k$ in this list.
End of explanation
all_events = ['AASZa', 'AASZb', 'AASZc', 'AASZd', 'CSZa', 'CSZb', 'CSZc', 'CSZd', 'CSZe', \
'CSZf', 'KmSZa', 'KrSZa', 'SChSZa', 'TOHa']
event_prob = {}
event_prob['AASZa'] = 1./394.
event_prob['AASZb'] = 1./750.
event_prob['AASZc'] = 1./563.
event_prob['AASZd'] = 1./324.
event_prob['CSZa'] = 1./250. * .0125
event_prob['CSZb'] = 1./250. * .0125
event_prob['CSZc'] = 1./250. * .0750
event_prob['CSZd'] = 1./250. * .5000
event_prob['CSZe'] = 1./250. * .1750
event_prob['CSZf'] = 1./250. * .2250
event_prob['KmSZa'] = 1./50.
event_prob['KrSZa'] = 1./167.
event_prob['SChSZa'] = 1./300.
event_prob['TOHa'] = 1./103.
print("Annual probability of each event is set to:")
print(event_prob)
Explanation: Set the desired annual probability for each event
Note that we are only using 14 events for this workshop. The probabilities have been adjusted accordingly.
event_prob is a Python dictionary. It is initialized to an empty dictionary and then we set event_prob[key] = value where the keys are the names of the hypothetical events and the associated value is the annual probability.
End of explanation
def combine_prob(p1,p2):
    """Returns the probability that event 1 or 2 happens"""
    return 1. - (1-p1)*(1-p2)
Explanation: Define a function to combine two events
End of explanation
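The formula in combine_prob is just $P(A \cup B) = 1 - (1-P(A))(1-P(B))$, the probability that at least one of two independent events occurs; the combination therefore treats the individual source events as independent.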
events = all_events
# Instead, to use a subset of the events, specify a list such as:
#events = ['AASZa', 'AASZb', 'AASZc']
Explanation: Specify the set of events to include in computing hazard curves:
End of explanation
nx, ny = X.shape # note that X is a 2d array of longitude values at each point
exceed_prob = zeros((nx,ny,nzeta)) # initialize to zero
# loop over all events and update exceed_prob at each grid point by combining
# current value with the probability Pk of this event:
for event in events:
    event_dir = os.path.join(events_dir, event)
    hmax_file = os.path.join(event_dir, 'h_eta_small.npy')
    hmax = load(hmax_file)
    Hmax = hmax.reshape((nx,ny),order='F')
    for k in range(nzeta):
        Pk = exceed_prob[:,:,k]  # probabilities at all points for one exceedance value zeta_k
        exceed_prob[:,:,k] = where(Hmax > zeta[k], combine_prob(event_prob[event],Pk), Pk)
print("Computed exceedance probabilities. \nMaximum over all grid points is %g" % exceed_prob.max())
Explanation: Compute the combined probability of exceeding each exceedance value:
exceed_prob is computed as an array of shape (nx, ny, nzeta): exceed_prob[i,j,k] is the annual probability, combined over all selected events, that the maximum value of zeta at grid point (i,j) exceeds zeta[k].
End of explanation
dx = X[1,0] - X[0,0]
dy = Y[0,1] - Y[0,0]
nx, ny = X.shape
xmin = X.min(); xmax = X.max()
ymin = Y.min(); ymax = Y.max()
def plot_hcurve(longitude, latitude):
    i = int(round((longitude - X[0,0]) / dx))
    j = int(round((latitude - Y[0,0]) / dy))
    if (i<0) or (i>=nx) or (j<0) or (j>=ny):
        print("out of domain")
        return
    fig = figure(figsize=(12,5))
    subplot(1,2,1)
    p = maximum(exceed_prob[i,j,:], 1e-10)
    semilogy(zeta, p, 'b')
    ylim(1e-5,1)
    xlabel('zeta in meters')
    ylabel('annual probability')
    title('Hazard Curve')
    # Also plot the CC image with a red dot showing the location:
    ax = subplot(1,2,2)
    imshow(CCmap,extent=extent)
    CClatitude = 41.75  # to rescale longitude
    ax.set_aspect(1. / cos(pi*CClatitude/180.))
    ax.ticklabel_format(format='plain',useOffset=False)
    plot([longitude], [latitude], 'ro')
    xlim(xmin,xmax)
    ylim(ymin,ymax)
    title('Location')
    show()
Explanation: Plot hazard curves
The array exceed_prob[i,j,:] (i.e. fixing i,j and letting the last index k vary from 0 to nzeta - 1) gives the probability of exceedance at the (i,j) grid point as we vary the exceedance value zeta[k]. Plotting this gives exactly the hazard curve at the (i,j) point.
The function plot_hcurve defined below plots this for a given (longitude, latitude) by first figuring out the index (i,j) for the nearest point on the grid covering Crescent City.
End of explanation
fig = plot_hcurve(235.805, 41.75)
Explanation: Plot the hazard curve for one location:
End of explanation
interact(plot_hcurve, longitude=(xmin,xmax,.001),latitude=(ymin,ymax,0.001))
Explanation: Interactive viewer to move the point around:
End of explanation
prob_clines = [1e-5, 1e-4, 1e-3, 2e-3, 1e-2, 2e-2, 1e-1]
nlines = len(prob_clines)
n1 = int(floor((nlines-1)/2.))
n2 = nlines - 1 - n1
Green = hstack([linspace(1,1,n1),linspace(1,0,n2)])
Red = hstack([linspace(0,0.8,n1), ones(n2)])
Blue = hstack([linspace(1,0.2,n1), zeros(n2)])
prob_colors = list(zip(Red,Green,Blue))
color_offscale = (.5,0,0) # color to use if above maximum
prob_colors.append(color_offscale)
# Choose the background for plots by uncommenting one line:
background = plot_CCmap
#background = plot_topo
def plot_pmap(k):
    fig = background()
    contourf(X,Y,exceed_prob[:,:,k], prob_clines, colors=prob_colors,alpha = 0.6, extend='max')
    title("Annual probability of flooding above %g meters" % zeta[k])
    colorbar()
    show()
Explanation: Hazard Maps
If we fix k then exceed_prob[:,:,k] is a two dimensional array giving the probability of exceedance at all points on the grid for a fixed exceedance level zeta[k]. We can plot this to obtain a hazard map showing probabilities for a given exceedance value.
Define contours and colors and a function to plot probability maps
prob_clines will be the probability levels to use in contour maps
prob_colors will define the color map to use. This is a list of tuples (R,G,B) of red,green,blue values, chosen to go from light blue to red.
Note: The function plot_pmap defined in the cell below uses the exceedance probabilities exceed_prob computed above. If you recompute these (e.g. by changing the set of events to include, or the probabilities of individual events), you must re-execute this cell to redefine plot_pmap before re-making the plots in later cells!
End of explanation
k = 13
print('This should plot a probability map for exceedance value zeta[%i] = %g m' % (k,zeta[k]))
fig = plot_pmap(k)
Explanation: Plot a sample probability map for one exceedance value:
End of explanation
interact(plot_pmap, k=(0,nzeta-1,1));
Explanation: Interactive viewer of all hazard maps:
End of explanation
def compute_zeta(p):
    # create boolean array K with K[i,j,k] == True only where exceed_prob[i,j,k] > p:
    K = exceed_prob > p
    K[:,:,0] = True
    zeta_p = zeros(X.shape)
    for i in range(nx):
        for j in range(ny):
            zeta_p[i,j] = zeta[K[i,j,:]][-1]
    return zeta_p
# Set contour lines and colors for plotting zeta = inundation depth
zeta_clines = [1e-3] + list(linspace(0.5,4.5,9))
nlines = len(zeta_clines)
n1 = int(floor((nlines-1)/2.))
n2 = nlines - 1 - n1
Green = hstack([linspace(1,1,n1),linspace(1,0,n2)])
Red = hstack([linspace(0,0.8,n1), ones(n2)])
Blue = hstack([linspace(1,0.2,n1), zeros(n2)])
zeta_colors = list(zip(Red,Green,Blue))
color_offscale = (.5,0,0) # color to use if above maximum
zeta_colors.append(color_offscale)
# Choose the background for plots by uncommenting one line:
background = plot_CCmap
#background = plot_topo
def plot_inundation_map(p):
    zeta_p = compute_zeta(p)
    fig = background()
    contourf(X,Y,zeta_p,zeta_clines, colors=zeta_colors, alpha = 0.6, extend='max')
    title("Depth of flooding for annual probability %g\nReturn time %5.0f years" % (p, (1./p)))
    colorbar()
    show();
Explanation: Inundation maps for given probability:
A more commonly used map is obtained by fixing a probability (e.g. $p = 0.01$ for a "100-year" flood map) and plotting the maximum depth expected with this annual probability.
This requires determining, for each grid point (i,j), the largest value of k for which exceed_prob[i,j,k] $\geq p$. Then the value zeta[k] is the largest exceedance value for which the probability is at least $p$.
Recall that zeta is defined to be maximum depth of inundation on shore, or maximum height above MHW offshore.
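As a small usage sketch (using only the functions defined in this notebook), the classical "100-year" flood map corresponds to $p = 0.01$:
p = 0.01                     # annual probability of 0.01, i.e. a 100-year return time
zeta_100 = compute_zeta(p)
print('Largest 100-year flood depth on the grid: %.2f m' % zeta_100.max())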
Note: The functions compute_zeta and plot_inundation_map defined in the cell below uses the exceedance probabilities exceed_prob computed above. If you recompute these (e.g. by changing the set of events to include, or the probabilities of individual events), you must re-execute this cell to redefine the functions before re-making the plots in later cells!
End of explanation
fig = plot_inundation_map(0.002)
Explanation: Plot a sample map:
End of explanation
interact(plot_inundation_map, p=(0.00025,0.01,0.00025));
Explanation: Interactive viewer for a range of probabilities:
End of explanation |
381 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Training a Sentiment Analysis LSTM Using Noisy Crowd Labels
In this tutorial, we'll provide a simple walkthrough of how to use Snorkel to resolve conflicts in a noisy crowdsourced dataset for a sentiment analysis task, and then use these denoised labels to train an LSTM sentiment analysis model which can be applied to new, unseen data to automatically make predictions!
Specifically, we'll look at
Step1: Step 1
Step2: We can now load the raw data for our crowdsourcing task (stored in a local csv file) into a dataframe.
Step3: As mentioned above, contributors can provide conflicting labels for the same tweet
Step4: Step 2
Step5: Contexts
All Candidate objects point to one or more Context objects, which represent the raw data that they are rooted in. In this case, our candidates will each point to a single Context object representing the raw text of the tweet.
Once we have defined the Context for each Candidate, we can commit them to the database. Note that we also split into two sets while doing this
Step7: Labels
Next, we'll store the labels for each of the training candidates in a sparse matrix (which will also automatically be saved to the Snorkel database), with one row for each candidate and one column for each crowd worker
Step8: Finally, we load the ground truth ("gold") labels for both the training and test sets, and store them as numpy arrays.
Step9: Step 3
Step10: Inferring the MAP assignment for each task
Each task corresponds to an independent random variable. Thus, we can simply associate each task with the most probable label based on the estimated marginal distribution and get an accuracy score
Step11: Majority vote
It seems like we did well- but how well? Given that this is a fairly simple task--we have 20 contributors per tweet (and most of them are far better than random)--we expect majority voting to perform extremely well, so we can check against majority vote
Step12: We see that while majority vote makes 9 errors, the Snorkel model makes only 2! What about an average crowd worker?
Average human accuracy
We see that the average accuracy of a single crowd worker is in fact much lower
Step13: Step 4
Step14: Next, we'll train a simple LSTM | Python Code:
%load_ext autoreload
%autoreload 2
%matplotlib inline
import os
import numpy as np
from snorkel import SnorkelSession
session = SnorkelSession()
Explanation: Training a Sentiment Analysis LSTM Using Noisy Crowd Labels
In this tutorial, we'll provide a simple walkthrough of how to use Snorkel to resolve conflicts in a noisy crowdsourced dataset for a sentiment analysis task, and then use these denoised labels to train an LSTM sentiment analysis model which can be applied to new, unseen data to automatically make predictions!
Specifically, we'll look at:
1. Loading data via SparkSQL
2. Creating basic Snorkel objects: Candidates, Contexts, and Labels
3. Training the GenerativeModel to resolve labeling conflicts
4. Training a simple LSTM sentiment analysis model, which can then be used on new, unseen data!
Note that this is a simple tutorial meant to give an overview of the mechanics of using Snorkel-- we'll note places where more careful fine-tuning could be done!
Installing PySpark
Please see the official instructions!
Task Detail: Weather Sentiments in Tweets
In this tutorial we focus on the Weather sentiment task from Crowdflower.
In this task, contributors were asked to grade the sentiment of a particular tweet relating to the weather. Contributors could choose among the following categories:
1. Positive
2. Negative
3. I can't tell
4. Neutral / author is just sharing information
5. Tweet not related to weather condition
The catch is that 20 contributors graded each tweet. Thus, in many cases contributors assigned conflicting sentiment labels to the same tweet.
The task comes with two data files (to be found in the data directory of the tutorial:
1. weather-non-agg-DFE.csv contains the raw contributor answers for each of the 1,000 tweets.
2. weather-evaluated-agg-DFE.csv contains gold sentiment labels by trusted workers for each of the 1,000 tweets.
End of explanation
# Initialize Spark Environment and Spark SQL
from pyspark.sql import SparkSession
from pyspark.sql.functions import *
from pyspark.sql.types import *
from pyspark import SparkContext, SparkConf
spark = SparkSession \
.builder \
.master("local") \
.appName("Snorkel Crowdsourcing Demo") \
.config("spark.some.config.option", "some-value") \
.getOrCreate()
Explanation: Step 1: Preprocessing - Data Loading with Spark SQL and Dataframes
First, we initialize a SparkSession, which manages a connection to a local Spark master which allows us to preprocess the raw data and prepare convert them to the necessary Snorkel format:
End of explanation
# Load Raw Crowdsourcing Data
raw_crowd_answers = spark.read.format("csv").option("header", "true").csv("data/weather-non-agg-DFE.csv")
raw_crowd_answers.printSchema()
# Load Groundtruth Crowdsourcing Data
gold_crowd_answers = spark.read.format("csv").option("header", "true").csv("data/weather-evaluated-agg-DFE.csv")
gold_crowd_answers.createOrReplaceTempView("gold_crowd_answers")
# Filter out low-confidence answers
gold_answers = spark.sql("SELECT tweet_id, sentiment, tweet_body FROM gold_crowd_answers WHERE correct_category ='Yes' and correct_category_conf = 1").orderBy("tweet_id")
# Keep Only the Tweets with Available Groundtruth
candidate_labeled_tweets = raw_crowd_answers.join(gold_answers, raw_crowd_answers.tweet_id == gold_answers.tweet_id).select(raw_crowd_answers.tweet_id,raw_crowd_answers.tweet_body,raw_crowd_answers.worker_id,raw_crowd_answers.emotion)
Explanation: We can now load the raw data for our crowdsourcing task (stored in a local csv file) into a dataframe.
End of explanation
candidate_labeled_tweets.select("worker_id", "emotion", "tweet_body").orderBy("tweet_id").show()
Explanation: As mentioned above, contributors can provide conflicting labels for the same tweet:
End of explanation
from snorkel.models import candidate_subclass
values = list(map(
lambda r: r.emotion,
candidate_labeled_tweets.select("emotion").distinct().collect()
))
Tweet = candidate_subclass('Tweet', ['tweet'], values=values)
Explanation: Step 2: Generating Snorkel Objects
Candidates
Candidates are the core objects in Snorkel representing objects to be classified. We'll use a helper function to create a custom Candidate sub-class, Tweet, with values representing the possible labels that it can be classified with:
End of explanation
from snorkel.models import Context, Candidate
from snorkel.contrib.models.text import RawText
# Make sure DB is cleared
session.query(Context).delete()
session.query(Candidate).delete()
# Now we create the candidates with a simple loop
tweet_bodies = candidate_labeled_tweets \
.select("tweet_id", "tweet_body") \
.orderBy("tweet_id") \
.distinct()
# Generate and store the tweet candidates to be classified
# Note: We split the tweets in two sets: one for which the crowd
# labels are not available to Snorkel (test, 10%) and one for which we assume
# crowd labels are obtained (to be used for training, 90%)
total_tweets = tweet_bodies.count()
test_split = total_tweets*0.1
for i, t in enumerate(tweet_bodies.collect()):
split = 1 if i <= test_split else 0
raw_text = RawText(stable_id=t.tweet_id, name=t.tweet_id, text=t.tweet_body)
tweet = Tweet(tweet=raw_text, split=split)
session.add(tweet)
session.commit()
Explanation: Contexts
All Candidate objects point to one or more Context objects, which represent the raw data that they are rooted in. In this case, our candidates will each point to a single Context object representing the raw text of the tweet.
Once we have defined the Context for each Candidate, we can commit them to the database. Note that we also split into two sets while doing this:
Training set (split=0): The tweets for which we have noisy, conflicting crowd labels; we will resolve these conflicts using the GenerativeModel and then use them as training data for the LSTM
Test set (split=1): We will pretend that we do not have any crowd labels for this split of the data, and use these to test the LSTM's performance on unseen data
End of explanation
from snorkel.annotations import LabelAnnotator
from collections import defaultdict
# Extract worker votes
# Cache locally to speed up for this small set
worker_labels = candidate_labeled_tweets.select("tweet_id", "worker_id", "emotion").collect()
wls = defaultdict(list)
for row in worker_labels:
wls[row.tweet_id].append((row.worker_id, row.emotion))
# Create a label generator
def worker_label_generator(t):
A generator over the different (worker_id, label_id) pairs for a Tweet.
for worker_id, label in wls[t.tweet.name]:
yield worker_id, label
labeler = LabelAnnotator(label_generator=worker_label_generator)
%time L_train = labeler.apply(split=0)
L_train
Explanation: Labels
Next, we'll store the labels for each of the training candidates in a sparse matrix (which will also automatically be saved to the Snorkel database), with one row for each candidate and one column for each crowd worker:
End of explanation
gold_labels = defaultdict(list)
# Get gold labels in verbose form
verbose_labels = dict([(t.tweet_id, t.sentiment)
for t in gold_answers.select("tweet_id", "sentiment").collect()])
# Iterate over splits, align with Candidate ordering
for split in range(2):
cands = session.query(Tweet).filter(Tweet.split == split).order_by(Tweet.id).all()
for c in cands:
gold_labels[split].append(values.index(verbose_labels[c.tweet.name]) + 1)
train_cand_labels = np.array(gold_labels[0])
test_cand_labels = np.array(gold_labels[1])
Explanation: Finally, we load the ground truth ("gold") labels for both the training and test sets, and store them as numpy arrays:
End of explanation
# Imports
from snorkel.learning.gen_learning import GenerativeModel
# Initialize Snorkel's generative model for
# learning the different worker accuracies.
gen_model = GenerativeModel(lf_propensity=True)
# Train the generative model
gen_model.train(
L_train,
reg_type=2,
reg_param=0.1,
epochs=30
)
Explanation: Step 3: Resolving Crowd Conflicts with the Generative Model
Until now we have converted the raw crowdsourced data into a labeling matrix that can be provided as input to Snorkel. We will now show how to:
Use Snorkel's generative model to learn the accuracy of each crowd contributor.
Use the learned model to estimate a marginal distribution over the domain of possible labels for each task.
Use the estimated marginal distribution to obtain the maximum a posteriori probability estimate for the label that each task takes.
End of explanation
accuracy = gen_model.score(L_train, train_cand_labels)
print("Accuracy: {:.10f}".format(accuracy))
Explanation: Inferring the MAP assignment for each task
Each task corresponds to an independent random variable. Thus, we can simply associate each task with the most probable label based on the estimated marginal distribution and get an accuracy score:
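For reference, the same MAP assignment can also be read off explicitly from the estimated marginals (a sketch, not part of the original notebook; gen_model.marginals is also used later in this tutorial):
marginals = gen_model.marginals(L_train)     # one row per tweet, one column per candidate label
map_labels = marginals.argmax(axis=1) + 1    # +1 because the candidate labels are 1-indexed here
print("MAP accuracy: {:.10f}".format(np.mean(map_labels == train_cand_labels)))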
End of explanation
from collections import Counter
# Collect the majority vote answer for each tweet
mv = []
for i in range(L_train.shape[0]):
c = Counter([L_train[i,j] for j in L_train[i].nonzero()[1]])
mv.append(c.most_common(1)[0][0])
mv = np.array(mv)
# Count the number correct by majority vote
n_correct = np.sum([1 for i in range(L_train.shape[0]) if mv[i] == train_cand_labels[i]])
print("Accuracy: {:.10f}".format(n_correct / float(L_train.shape[0])))
print("Number incorrect: {:.0f}".format(L_train.shape[0] - n_correct))
Explanation: Majority vote
It seems like we did well, but how well? Given that this is a fairly simple task (we have 20 contributors per tweet, and most of them are far better than random), we expect majority voting to perform extremely well, so we can check against majority vote:
End of explanation
accs = []
for j in range(L_train.shape[1]):
n_correct = np.sum([1 for i in range(L_train.shape[0]) if L_train[i,j] == train_cand_labels[i]])
acc = n_correct / float(L_train[:,j].nnz)
accs.append(acc)
print("Mean Accuracy: {:.10f}".format(np.mean(accs)))
Explanation: We see that while majority vote makes 9 errors, the Snorkel model makes only 2! What about an average crowd worker?
Average human accuracy
We see that the average accuracy of a single crowd worker is in fact much lower:
End of explanation
train_marginals = gen_model.marginals(L_train)
from snorkel.annotations import save_marginals
save_marginals(session, L_train, train_marginals)
Explanation: Step 4: Training an ML Model with Snorkel for Sentiment Analysis over Unseen Tweets
In the previous step, we saw that Snorkel's generative model can help to denoise crowd labels automatically. However, what happens when we don't have noisy crowd labels for a tweet?
In this step, we'll use the estimates of the generative model as probabilistic training labels to train a simple LSTM sentiment analysis model, which takes as input a tweet for which no crowd labels are available and predicts its sentiment.
First, we get the probabilistic training labels (training marginals) which are just the marginal estimates of the generative model:
End of explanation
# from snorkel.learning import TextRNN - v0.6.3
from snorkel.learning.tensorflow import TextRNN # v0.7-beta
train_kwargs = {
'lr': 0.01,
'dim': 100,
'n_epochs': 200,
'dropout': 0.2,
'print_freq': 5
}
lstm = TextRNN(seed=1701, cardinality=Tweet.cardinality)
train_cands = session.query(Tweet).filter(Tweet.split == 0).order_by(Tweet.id).all()
lstm.train(train_cands, train_marginals, **train_kwargs)
test_cands = session.query(Tweet).filter(Tweet.split == 1).order_by(Tweet.id).all()
accuracy = lstm.score(test_cands, test_cand_labels)
print("Accuracy: {:.10f}".format(accuracy))
Explanation: Next, we'll train a simple LSTM:
End of explanation |
382 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Coupling GIPL and ECSimpleSnow models
Before you begin, install
Step1: Load ECSimpleSnow module from PyMT
Step2: Load GIPL module from PyMT
Step3: Call the setup method on both ECSimpleSnow and GIPL to get default configuration files and data. | Python Code:
import pymt.models
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import matplotlib.colors as mcolors
from matplotlib.colors import LinearSegmentedColormap
sns.set(style='whitegrid', font_scale= 1.2)
Explanation: Coupling GIPL and ECSimpleSnow models
Before you begin, install:
conda install -c conda-forge pymt pymt_gipl pymt_ecsimplesnow seaborn
End of explanation
ec = pymt.models.ECSimpleSnow()
print(ec.name)
# List input and output variable names.
print(ec.output_var_names)
print(ec.input_var_names)
Explanation: Load ECSimpleSnow module from PyMT
End of explanation
gipl = pymt.models.GIPL()
print(gipl.name)
# List input and output variable names.
print(gipl.output_var_names)
print(gipl.input_var_names)
Explanation: Load GIPL module from PyMT
End of explanation
ec_defaults = ec.setup('.')
print(ec_defaults)
gipl_defaults = gipl.setup('.')
print(gipl_defaults)
ec.initialize('snow_model.cfg')
gipl.initialize('gipl_config.cfg')
# Get soil depth: [unit: m]
depth = gipl.get_grid_z(2)
n_depth = int(len(depth))
# Get the length of forcing data:
ntime = int(gipl.end_time)
# Define a variable to store soil temperature through the time period
tsoil = np.zeros((n_depth, ntime)) * np.nan
print('Final soil temperatures will be ', tsoil.shape)
fig = plt.figure(figsize=[12,6])
ax2 = fig.add_subplot(2,3,1)
ax2.set_title('Air Temperature (Input)')
ax3 = fig.add_subplot(2,3,2)
ax3.set_title('Precipitation (Input)')
ax4 = fig.add_subplot(2,3,4)
ax4.set_title('Snow Depth (EC Output)')
ax5 = fig.add_subplot(2,3,5)
ax5.set_title('Snow Density (EC Output)')
ax1 = fig.add_subplot(2,3,(3,6))
ax1.set_ylim([15,0])
ax1.set_xlim([-20,20])
ax1.set_xlabel('Soil Temperature ($^oC$)')
ax1.set_ylabel('Depth (m)')
ax1.plot([0,0],[15,0],'k--')
for i in np.arange(365):
ec.update() # Update Snow Model Once
# Get output from snow model
tair = ec.get_value('land_surface_air__temperature')
prec = ec.get_value('precipitation_mass_flux')
snd = ec.get_value('snowpack__depth', units='m')
rsn = ec.get_value('snowpack__mass-per-volume_density', units = 'g cm-3')
# Pass value to GIPL model
gipl.set_value('land_surface_air__temperature', tair)
gipl.set_value('snowpack__depth', snd)
gipl.set_value('snow__thermal_conductivity', rsn * rsn * 2.846)
gipl.update() # Update GIPL model Once
tsoil[:,i] = gipl.get_value('soil__temperature') # Save results to a matrix
ax1.plot(tsoil[depth>=0,i], depth[depth>=0],color = [0.7,0.7,0.7], alpha = 0.1)
ax2.scatter(i, tair, c = 'k')
ax3.scatter(i, prec, c = 'k')
ax4.scatter(i, snd , c = 'k')
ax5.scatter(i, rsn , c = 'k')
ax1.plot(tsoil[depth>=0,:].max(axis=1), depth[depth>=0], 'r', linewidth = 2, label = 'Max')
ax1.plot(tsoil[depth>=0,:].min(axis=1), depth[depth>=0], 'b', linewidth = 2, label = 'Min')
ax1.plot(tsoil[depth>=0,:].mean(axis=1), depth[depth>=0], 'k', linewidth = 2, label = 'Mean')
ax1.legend()
ax1.set_title('Ground Temperatures (GIPL output)')
ax2.set_xticks([])
ax3.set_xticks([])
fig = plt.figure(figsize=[9,4])
divnorm = mcolors.TwoSlopeNorm(vmin=-25., vcenter=0., vmax=10)
plt.contourf(np.arange(ntime), depth, tsoil, np.linspace(-25,10,15),
norm = divnorm,
cmap="RdBu_r", extend = 'both')
plt.ylim([5,0])
cb = plt.colorbar()
plt.xlabel('Day')
plt.ylabel('Depth (m)')
cb.ax.set_ylabel('Soil Temperature ($^oC$)')
plt.contour(np.arange(ntime), depth, tsoil, [0]) # ZERO
Explanation: Call the setup method on both ECSimpleSnow and GIPL to get default configuration files and data.
End of explanation |
383 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Connect Four
This notebook defines the game Connect Four.
Connect Four is played on a board of dimension $6 \times 7$, i.e. there are $6$ rows and $7$ columns. Instead of Red and Yellow we call the players X and O. Player X starts. Players X and O take turns choosing columns that are not yet full. When player X chooses column c, the lowest empty field in column c is filled with an 'X'. Likewise, when player O chooses column c, the lowest empty field in column c is filled with an 'O'. Rows are numbered from the bottom up, i.e. the bottom row is row $0$. The goal of the game for player X is to get four consecutive 'X's into a row, column, or diagonal line, while player O needs to get four consecutive 'O's into a row, column, or diagonal line.
Step1: States are represented as tuples of tuples. The game starts with an empty board. An empty field on the board is represented as the string ' '.
Step2: The function to_list transforms a tuple of tuples into a list of lists.
Step3: The function to_tuple transforms a list of lists into a tuple of tuples.
Step4: The function find_empty takes two arguments
Step5: Given a State and the player who is the next player move, the function next_states(State, player) computes the list of states that can be reached from State by a move of player.
Step6: The variable All_Lines collects the coordinates of all groups of four fields that are consecutive horizontally, vertically, or diagonally. For example, the variable All_Lines contains, among others, the following lists
Step7: The cell below should output the number $69$.
Step8: Given a State the function top_line_filled(State) checks whether all marks in the top line of the given board are filled.
Step9: The function utility takes the State as its single argument
Step10: The function heuristic tries to guess the value of a state. As it is never called in terminal states, the given implementation assumes that the game will be drawn and hence returns $0$. If you have solved all of the other exercises, you should try to improve this function.
Step11: finished(State) is True if the game is over.
Step12: The function get_move asks the user to input the column where the next symbol is to be placed; the row is determined automatically.
Step13: This function informs the user about the result of the game once the game is finished.
Step14: Drawing the Board
Step15: This function creates the canvas for the start state. It draws an empty board which is later used for the game.
Step16: The function draw takes three arguments | Python Code:
gPlayers = [ 'X', 'O' ]
Explanation: Connect Four
This notebook defines the game Connect Four.
Connect Four is played on a board of dimension $6 \times 7$, i.e. there are $6$ rows and $7$ columns. Instead of Red and Yellow we call the players X and O. Player X starts. Players X and O take turns choosing columns that are not yet full. When player X chooses column c, the lowest empty field in column c is filled with an 'X'. Likewise, when player O chooses column c, the lowest empty field in column c is filled with an 'O'. Rows are numbered from the bottom up, i.e. the bottom row is row $0$. The goal of the game for player X is to get four consecutive 'X's into a row, column, or diagonal line, while player O needs to get four consecutive 'O's into a row, column, or diagonal line.
End of explanation
gStart = tuple( tuple(' ' for col in range(7)) for row in range(6))
gStart
Explanation: States are represented as tuples of tuples. The game starts with an empty board. An empty field on the board is represented as the string ' '.
End of explanation
to_list = lambda State: [list(row) for row in State]
Explanation: The function to_list transforms a tuple of tuples into a list of lists.
End of explanation
to_tuple = lambda State: tuple(tuple(row) for row in State)
Explanation: The function to_tuple transforms a list of lists into a tuple of tuples.
End of explanation
def find_empty(State, col):
"your code here"
Explanation: The function find_empty takes two arguments:
- State is a description of the board,
- col specifies a column, i.e. it is an integer from the set ${0, \cdots, 6}$.
Given the State the function find_empty(State, col) returns the smallest $\texttt{row} \in {0, \cdots, 5}$ such that
State[row][col] == ' '
holds. If the specified column is already completely filled, then instead None is returned.
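One possible way to solve this exercise (a sketch, not necessarily the intended reference solution):
def find_empty(State, col):
    for row in range(6):               # search the column from the bottom row upwards
        if State[row][col] == ' ':
            return row
    return None                        # the column is completely filled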
End of explanation
def next_states(State, player):
"your code here"
Explanation: Given a State and the player whose turn it is to move, the function next_states(State, player) computes the list of states that can be reached from State by a move of player.
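A possible solution sketch for this exercise (it relies on find_empty, to_list, and to_tuple defined above):
def next_states(State, player):
    Result = []
    for col in range(7):
        row = find_empty(State, col)
        if row is not None:            # this column still has room
            NewState = to_list(State)
            NewState[row][col] = player
            Result.append(to_tuple(NewState))
    return Result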
End of explanation
All_Lines = "your code here"
All_Lines
Explanation: The variable All_Lines collects the coordinates of all groups of four fields that are consecutive horizontally, vertically, or diagonally. For example, the variable All_Lines contains, among others, the following lists:
[(0, 0), (0, 1), (0, 2), (0, 3)]
[(0, 0), (1, 0), (2, 0), (3, 0)]
[(1, 1), (2, 2), (3, 3), (4, 4)]
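One possible definition (a sketch) that enumerates the 24 horizontal, 21 vertical, and 24 diagonal groups of four fields, i.e. 69 lines in total:
All_Lines  = [ [ (row, col+i)     for i in range(4) ] for row in range(6) for col in range(4) ]  # horizontal
All_Lines += [ [ (row+i, col)     for i in range(4) ] for row in range(3) for col in range(7) ]  # vertical
All_Lines += [ [ (row+i, col+i)   for i in range(4) ] for row in range(3) for col in range(4) ]  # rising diagonals
All_Lines += [ [ (row+i, col+3-i) for i in range(4) ] for row in range(3) for col in range(4) ]  # falling diagonals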
End of explanation
len(All_Lines)
Explanation: The cell below should output the number $69$.
End of explanation
def top_line_filled(State):
"your code here"
Explanation: Given a State the function top_line_filled(State) checks whether all marks in the top line of the given board are filled.
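A possible one-line solution (a sketch): since rows are numbered from the bottom, the top line is row 5.
def top_line_filled(State):
    return ' ' not in State[5]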
End of explanation
def utility(State):
"your code here"
Explanation: The function utility takes a single argument:
- State is a tuple of tuples representing the board.
The function returns 1 if player X has won the game, -1 if player O has won the game, 0 if it's a draw, and None if the game has not yet been decided.
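A possible solution sketch (it uses All_Lines and top_line_filled from the previous exercises):
def utility(State):
    for Line in All_Lines:
        Marks = { State[row][col] for (row, col) in Line }
        if len(Marks) == 1 and Marks != { ' ' }:   # four identical, non-blank marks in a line
            return 1 if Marks == { 'X' } else -1
    if top_line_filled(State):                     # board is full and nobody has won
        return 0
    return None                                    # game not yet decided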
End of explanation
def heuristic(State):
"your code here, but you can try returning 0 initially"
return 0
Explanation: The function heuristic tries to guess the value of a state. As it is never called in terminal states, the given implementation assumes that the game will be drawn and hence returns $0$. If you have solved all of the other exercises, you should try to improve this function.
End of explanation
def finished(State):
return utility(State) != None
Explanation: finished(State) is True if the game is over.
End of explanation
def get_move(State):
State = to_list(State)
while True:
col = input("Enter column here: ")
col = int(col)
row = find_empty(State, col)
if row != None:
State[row][col] = 'O'
return to_tuple(State)
else:
print("Don't cheat. Please try again.")
Explanation: The function get_move asks the user to input the column where the next symbol is to be placed; the row is determined automatically via find_empty.
End of explanation
def final_msg(State):
if finished(State):
if utility(State) == -1:
print("You have won!")
elif utility(State) == 1:
print("You have lost!")
else:
print("It's a draw.");
return True
return False
Explanation: This function informs the user about the result of the game once the game is finished.
End of explanation
import ipycanvas as cnv
size = 50
Explanation: Drawing the Board
End of explanation
def create_canvas():
canvas = cnv.Canvas(size=(size * 7, size * 8))
display(canvas)
return canvas
import math
Explanation: This function creates the canvas for the start state. It draws an empty board which is later used for the game.
End of explanation
def draw(State, canvas, value):
canvas.clear()
canvas.font = '36px sans-serif'
canvas.text_align = 'center'
canvas.text_baseline = 'middle'
for row in range(6):
for col in range(7):
x = col * size
y = row * size
canvas.line_width = 3.0
canvas.stroke_rect(x, y, size, size)
symbol = State[5-row][col]
if symbol != ' ':
x += size // 2
y += size // 2
if symbol == 'X':
canvas.fill_style ='red'
else:
canvas.fill_style ='blue'
canvas.fill_arc(x, y, 0.4*size, 0, 2*math.pi)
canvas.font = '20px sans-serif'
canvas.fill_style = 'black'
for i in range(7):
x = (i + 0.5) * size
y = 6.4 * size
canvas.fill_text(str(i), x, y)
x = 3.5 * size
y = 7.4 * size
canvas.fill_text(str(value), x, y)
Explanation: The function draw takes three arguments:
- State is the current state of the game.
- canvas is a canvas used to draw the state.
- value is the value of the game for player X.
The function draws the given State onto canvas. Below that, the value is printed.
End of explanation |
384 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Classical Harmonic Oscillator
Many problems in physics come down to this simple relation
Step2: We notice that after a few oscillations our numerical solution does not agree so well with our analytical result. We can quantify this by looking at the deviation in the energy as a function of time.
The kinetic energy of the system is
$$ T = \frac{1}{2} m \dot{x}^2 $$
and the potential energy is
$$ V = \frac{1}{2} k x^2 $$
There is no external work being done on the spring-mass system, so the total energy of the system is conserved, i.e.
$$
E_{tot} = T + V = \mathrm{const}
$$
Exercise
Step3: Error vs time step study | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
def undamped_oscillator_euler(x0,v0,k,m,tmax,dt):
Numerically integrate the equation of motion for an undamped harmonic oscillator
using a simple euler method.
# calculate the number of time steps
    num_time_steps = int(np.floor(tmax/dt))  # must be an integer for np.linspace and np.zeros
time = np.linspace(0, tmax, num_time_steps)
# define arrays for position and velocity
x = np.zeros(num_time_steps)
v = np.zeros(num_time_steps)
# apply initial conditions
x[0] = x0
v[0] = v0
#define constants
omega = np.sqrt(k/m)
# use F = ma and the euler method to integrate the equation of motion
for i in range(1,len(time)):
a = -k/m * x[i-1]
x[i] = v[i-1]*dt + 0.5 * a * dt**2 + x[i-1]
v[i] = a * dt + v[i-1]
return (x,v,time)
def undamped_oscillator_exact_pos(A,omega,phi,t):
return A*np.cos(omega*t + phi)
def undamped_oscillator_exact_vel(A,omega,phi,t):
return -A*omega*np.sin(omega*t+phi)
# initial conditions for the simulation
x0 = 1
v0 = 0
k = 10
m = 1
tmax = 10
dt = .1
# results we derived from our analytical analysis above
omega = np.sqrt(k/m)
phi = np.arctan(-v0 / (omega*x0))
A = np.sqrt(x0**2+v0**2/omega**2)
# generate numerical trajectory given initial conditions
x,v,t = undamped_oscillator_euler(x0,v0,k,m,tmax,dt)
ax1 = plt.subplot(211)
ax2 = plt.subplot(212)
ax1.plot(t,x)
ax1.plot(t,undamped_oscillator_exact_pos(A,omega,phi,t),linestyle='--')
ax1.set_ylabel('x(t)')
ax2.plot(t,v)
ax2.set_ylabel('v(t)')
ax2.set_xlabel('t')
ax2.plot(t,undamped_oscillator_exact_vel(A,omega,phi,t),linestyle='--')
plt.tight_layout()
Explanation: Classical Harmonic Oscillator
Many problems in physics come down to this simple relation:
$$
\ddot{x} = -\omega^2 x
$$
where $x$ can be any quantity and $\omega$ can be any combination of relevant constants. The resultant motion is known as "simple harmonic motion", i.e.
$$ x(t) = A \cos(\omega t + \phi) $$
Where $A$ is the amplitude of the motion, $\omega$ is the collection of various constants from before, and $\phi$ is a phase that is set by the initial conditions of the problem.
It is traditional to consider a spring-mass system where a spring with rest length $x_0$ and spring constant $k$ is attached to a mass $m$. If we write down our expression for $\ddot{x}$ using Newton's second law we find
$$
m \ddot{x} = -k x
$$
which reduces to
$$
\ddot{x} = -\omega^2 x
$$
where
$$
\omega = \sqrt{\frac{k}{m}}
$$
We can see that the oscillation frequency of the spring mass system is determined by the stiffness, $k$, of the spring and mass, $m$, we have attached to it.
The Analytical Solution
Given a spring-mass system with mass $m$ and spring constant $k$ we can derive the motion of the system analytically and compare our results from a numerical simulation.
We know the solution has form
$$
x(t) = A \cos(\omega t + \phi)\
v(t) = -A \omega \sin(\omega t + \phi)
$$
so we can apply our initial conditions
$$
x(0) = x_0 = A \cos(\phi) \
v(0) = v_0 = -A \omega \sin(\phi)
$$
Dividing one equation by another we get
$$
\tan(\phi) = \frac{ -v_0 }{ \omega x_0 }
$$
or
$$
\phi = \tan^{-1}\left( \frac{-v_0}{\omega x_0} \right)
$$
and we can plug this result back into our initial condition for position and find that
$$
A = \sqrt{ x_0^2 + \left(\frac{v_0}{\omega}\right)^2}
$$
End of explanation
def oscillator_energy(x,v,k,m):
return 0.5*m*v**2 + 0.5*k*x**2
ax = plt.subplot(111)
ax.plot(t, oscillator_energy(x,v,k,m))
ax.plot(t, oscillator_energy(
undamped_oscillator_exact_pos(A,omega,phi,t),
undamped_oscillator_exact_vel(A,omega,phi,t),
k, m
))
Explanation: We notice that after a few oscillations our numerical solution does not agree so well with our analytical result. We can quantify this by looking at the deviation in the energy as a function of time.
The kinetic energy of the system is
$$ T = \frac{1}{2} m \dot{x}^2 $$
and the potential energy is
$$ V = \frac{1}{2} k x^2 $$
There is no external work being done on the spring-mass system, so the total energy of the system is conserved, i.e.
$$
E_{tot} = T + V = \mathrm{const}
$$
Exercise:
Show that $ T + V = \frac{1}{2} k A^2 $
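One way to carry out this derivation (a short sketch using the solution $x(t) = A \cos(\omega t + \phi)$ obtained above):
$$ T + V = \frac{1}{2} m A^2 \omega^2 \sin^2(\omega t + \phi) + \frac{1}{2} k A^2 \cos^2(\omega t + \phi) $$
and, since $\omega^2 = k/m$ implies $m \omega^2 = k$,
$$ T + V = \frac{1}{2} k A^2 \left[ \sin^2(\omega t + \phi) + \cos^2(\omega t + \phi) \right] = \frac{1}{2} k A^2 $$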
End of explanation
# initial conditions for the simulation
x0 = 1
v0 = 0
k = 10
m = 1
tmax = 10
# results we derived from our analytical analysis above
omega = np.sqrt(k/m)
phi = np.arctan(-v0 / (omega*x0))
A = np.sqrt(x0**2+v0**2/omega**2)
exact_Etot = 0.5 * k * A**2
# list of dts to step through
dts = np.logspace(-1,-5,30)
errors = np.zeros(dts.size)
for i,dt in enumerate(dts):
x,v,t = undamped_oscillator_euler(x0,v0,k,m,tmax,dt)
errors[i] = 100 * (oscillator_energy(x[-1],v[-1],k,m) - exact_Etot) / exact_Etot
ax = plt.subplot(111)
plt.loglog(dts,errors,'bo--')
ax.set_xlabel('dt')
ax.set_ylabel('Energy Error (%)')
Explanation: Error vs time step study
End of explanation |
385 |
Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
Morphological transformations other than opening and closing: the morphological operation MORPH_GRADIENT gives the difference between the dilation and the erosion of an image, and the top-hat operation gives the difference between the input image and its opening.
| Python Code::
import cv2
import numpy as np
%matplotlib notebook
%matplotlib inline
from matplotlib import pyplot as plt
img = cv2.imread("HappyFish.jpg",cv2.IMREAD_GRAYSCALE)
_,mask = cv2.threshold(img, 220,255,cv2.THRESH_BINARY_INV)
kernal = np.ones((5,5),np.uint8)
dilation = cv2.dilate(mask,kernal,iterations = 3)
erosion = cv2.erode(mask,kernal,iterations=1)
opening = cv2.morphologyEx(mask,cv2.MORPH_OPEN,kernal)
closing = cv2.morphologyEx(mask,cv2.MORPH_CLOSE,kernal)
MORPH_GRADIENT = cv2.morphologyEx(mask,cv2.MORPH_GRADIENT,kernal)
top_hat = cv2.morphologyEx(mask,cv2.MORPH_TOPHAT,kernal)
titles = ['images',"mask","dilation","erosion","opening",
"closing","MORPH_GRADIENT","top_hat"]
images = [img,mask,dilation,erosion,opening,
closing,MORPH_GRADIENT,top_hat]
for i in range(len(titles)):
plt.subplot(2,4,i+1)
plt.imshow(images[i],"gray")
plt.title(titles[i])
plt.xticks([])
plt.yticks([])
plt.show()
|
386 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
nyc-schools_C
This script averages the ACS variables for the N census tracts closest to each school, and combines these averaged variables with the school outcomes in a single dataframe (saved as a *.csv)
Step1: Function to compute the average value of an ACS variable across several census tracts
Step2: MAIN
Step3: Now loop on the schools, and average ACS variables across census tracts
Step4: Concatenate the tract-averaged data with the school outcome data
Step5: Finally clean up some of column names, and eliminate some that will not be used | Python Code:
import pandas as pd
import numpy as np
import os
bp_data = '/Users/bryanfry/projects/proj_nyc-schools/data_files'
n_tracts = 10 # Average ACS variable from 20 closest tracts to each school.
Explanation: nyc-schools_C
This script averages the ACS variables for the N census tracts closest to each school, and combines these averaged variables with the school outcomes in a single dataframe (saved as a *.csv)
End of explanation
# Compute average value for ACS var, given a list of geoid. Ideally perhaps the tracts should
# be weighted by population rather than using a simple mean, but probably results won't be
# much different since the census tracts are intended to have roughly equal populations.
def calc_multitract_var (df_acs, var, geoid_list, mode = 'sum'):
t = 0 # Total value
#print geoid_list.tolist()
for g in geoid_list:
#print g
try:
t = t + float (df_acs[df_acs.GEOID == g][var])
except: pass
if mode == 'avg':
t = t / len (geoid_list)
return t
Explanation: Function to compute the average value of an ACS variable across several census tracts
End of explanation
# Load school data (with 50 closest census tracts), and ACS variables for each tract
df_sch = pd.read_csv (os.path.join (bp_data, 'df_A_school_info.csv'))
df_acs = pd.read_csv (os.path.join (bp_data, 'df_B_acs_geoid.csv'))
# Drop first column of each imported dataframe (these are just redundent indices)
df_sch = df_sch.drop (df_sch.columns[0], axis = 1)
df_acs = df_acs.drop (df_acs.columns[0], axis = 1)
df_acs.head()
Explanation: MAIN
End of explanation
# Define a dictionary with the census variables to be added to the dataframe
dict_var = {}
acs_col_list = df_acs.columns[2:] # These are the census variables of interest
# Loop on the rows of the school file.
for c in acs_col_list:
dict_var [c] = [] # Make an empty list for each column.
# One element will be added to each list in
# the dictionary for each school# For variables which are either FRACTIONS or MEDIAN VALUES, we take the
# MEAN across the tracts. For other values (corresponging to actual number of
# respondants) we take the SUM.
for i in range (0, len (df_sch)):
geoid_list= df_sch.ix [i][9:9+n_tracts]
for i, c in enumerate (acs_col_list):
if i in [9, 10, 11, 18, 19, 20, 21, 22]: mode = 'avg'
else: mode = 'sum'
dict_var[c].append (calc_multitract_var (df_acs, var = c, geoid_list=geoid_list, mode = mode))
df_tract_avg = pd.DataFrame(data = dict_var)
df_tract_avg.head()
Explanation: Now loop on the schools, and average ACS variables across census tracts
End of explanation
df = pd.concat ([df_sch, df_tract_avg], axis = 1)
df.head()
Explanation: Concatenate the tract-averaged data with the school outcome data
End of explanation
df_c = pd.DataFrame() # c -> 'concise'
# Build list of columns to copy
c_list = ['NAME','DBN','STREET','ZIPCODE','LAT','LON','COUNTY','HOOD','DISPLAY_NAME']
c_list = c_list + ['GEOCODE' + str (i).zfill(2) for i in range (0, n_tracts)]
c_list = c_list + ['2+_RACES','ASIAN','BLACK','DIFFERENT_HOUSE','DIFFERENT_HOUSE_ABROAD',\
'DIFFERENT_HOUSE_DIFFERENT_CITY_SAME_STATE','DIFFERENT_HOUSE_SAME_CITY',\
'DIFFERENT_HOUSE_US_DIFFERENT_STATE','FOREIGN_BORN_INCLUDING_NATURALIZED',\
'MEDIAN_AGE','MEDIAN_INCOME','MEDIAN_MONTHLY_HOUSING_COSTS','NATIVE_AMERICAN',\
'NATIVE_CITIZEN','NON_CITIZEN','SAME_HOUSE','TOTAL_POP?','WHITE','FRAC_MINORITY',\
'RENT_INCOME_RATIO','FRAC_MOVED','FRAC_NONCITIZEN','FRAC_FOREIN_BORN']
for c in c_list: df_c[c] = df[c]
# Copy and rename school outcome data
old_c_list = ['Total Cohort','Total Grads - % of cohort',\
'Total Regents - % of cohort','Total Regents - % of grads','Advanced Regents - % of cohort',\
'Advanced Regents - % of grads','Regents w/o Advanced - % of cohort',\
'Regents w/o Advanced - % of grads','Local - % of cohort','Local - % of grads',\
'Dropped Out - % of cohort','Q_Total Grads - % of cohort','Q_Total Regents - % of cohort',\
'Q_Total Regents - % of grads','Q_Advanced Regents - % of cohort',\
'Q_Advanced Regents - % of grads','Q_Regents w/o Advanced - % of cohort','Q_Local - % of cohort',\
'Q_Local - % of grads','Q_Still Enrolled - % of cohort','Q_Dropped Out - % of cohort']
new_c_list = ['TOTAL_COHORT','GRADS_%','REGENTS_%_COHORT','REGENTS_%_GRADS'\
,'ADV_REGENTS_%_COHORT','ADV_REGENTS_%_GRADS','REG_REGENTS_%_COHORT','REG_REGENTS_%_GRADS'\
,'LOCAL_%_COHORT','LOCAL_%_GRADS','DROPPED_OUT_%','Q_GRADS_%',\
'Q_REGENTS_%_COHORT','Q_REGENTS_%_GRADS','Q_ADV_REGENTS_%_COHORT',\
'Q_ADV_REGENTS_%_GRADS','Q_REG_REGENTS_%_COHORT','Q_LOCAL_%_COHORT',\
'Q_LOCAL_%_GRADS','Q_STILL_ENROLLED_%','Q_DROPPED_OUT_%']
for old_c, new_c in zip (old_c_list, new_c_list):
df_c[new_c] = df[old_c]
#There are some empties -- drop rows with NaN
df_c = df_c.dropna()
# Save the 'concise' dataframe
fp_out = os.path.join (bp_data, 'df_C_sch_acs_NTract=' + str (n_tracts).zfill(2) + '.csv')
df_c.to_csv (fp_out)
Explanation: Finally clean up some of column names, and eliminate some that will not be used
End of explanation |
387 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Linear Autoencoder for PCA - EXERCISE
Follow the bold instructions below to reduce a 30 dimensional data set for classification into a 2-dimensional dataset! Then use the color classes to see if you still kept the same level of class separation in the dimensionality reduction
The Data
Import numpy, matplotlib, and pandas
Step1: Use pandas to read in the csv file called anonymized_data.csv . It contains 500 rows and 30 columns of anonymized data along with 1 last column with a classification label, where the columns have been renamed to 4 letter codes.
Step2: Scale the Data
Use scikit learn to scale the data with a MinMaxScaler. Remember not to scale the Label column, just the data. Save this scaled data as a new variable called scaled_data.
Step3: The Linear Autoencoder
Import tensorflow and import fully_connected layers from tensorflow.contrib.layers.
Step4: Fill out the number of inputs to fit the dimensions of the data set and set the hidden number of units to be 2. Also set the number of outputs to match the number of inputs. Also choose a learning_rate value.
Step5: Placeholder
Create a placeholder for the data called X.
Step6: Layers
Create the hidden layer and the output layers using the fully_connected function. Remember that to perform PCA there is no activation function.
Step7: Loss Function
Create a Mean Squared Error loss function.
Step8: Optimizer
Create an AdamOptimizer designed to minimize the previous loss function.
Step9: Init
Create an instance of a global variables initializer.
Step10: Running the Session
Now create a TensorFlow session that runs the optimizer for at least 1000 steps. (You can also use epochs if you prefer, where 1 epoch is defined by one single run through the entire dataset.)
Step11: Confirm that your output is now 2 dimensional along the previous axis of 30 features.
Step12: Now plot out the reduced dimensional representation of the data. Do you still have clear separation of classes even with the reduction in dimensions? Hint | Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Linear Autoencoder for PCA - EXERCISE
Follow the bold instructions below to reduce a 30 dimensional data set for classification into a 2-dimensional dataset! Then use the color classes to see if you still kept the same level of class separation in the dimensionality reduction
The Data
Import numpy, matplotlib, and pandas
End of explanation
data = pd.read_csv('./data/anonymized_data.csv')
data.head()
data.info()
data.describe()
Explanation: Use pandas to read in the csv file called anonymized_data.csv . It contains 500 rows and 30 columns of anonymized data along with 1 last column with a classification label, where the columns have been renamed to 4 letter codes.
End of explanation
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
X_data = scaler.fit_transform(data.drop('Label', axis = 1))
pd.DataFrame(X_data, columns = data.columns[:-1]).describe()
Explanation: Scale the Data
Use scikit learn to scale the data with a MinMaxScaler. Remember not to scale the Label column, just the data. Save this scaled data as a new variable called scaled_data.
End of explanation
import tensorflow as tf
from tensorflow.contrib.layers import fully_connected
Explanation: The Linear Autoencoder
Import tensorflow and import fully_connected layers from tensorflow.contrib.layers.
End of explanation
num_inputs = 30 # FILL ME IN
num_hidden = 2 # FILL ME IN
num_outputs = num_inputs # Must be true for an autoencoder!
learning_rate = 0.01 #FILL ME IN
Explanation: Fill out the number of inputs to fit the dimensions of the data set and set the hidden number of units to be 2. Also set the number of outputs to match the number of inputs. Also choose a learning_rate value.
End of explanation
X = tf.placeholder(tf.float32, shape = [None, num_inputs])
Explanation: Placeholder
Create a placeholder for the data called X.
End of explanation
hidden_layer = fully_connected(inputs = X,
num_outputs = num_hidden,
activation_fn = None)
outputs = fully_connected(inputs = hidden_layer,
num_outputs = num_outputs,
activation_fn = None)
Explanation: Layers
Create the hidden layer and the output layers using the fully_connected function. Remember that to perform PCA there is no activation function.
End of explanation
loss = tf.reduce_mean(tf.square(outputs - X))
Explanation: Loss Function
Create a Mean Squared Error loss function.
End of explanation
optimizer = tf.train.AdamOptimizer(learning_rate)
train = optimizer.minimize(loss)
Explanation: Optimizer
Create an AdamOptimizer designed to minimize the previous loss function.
End of explanation
init = tf.global_variables_initializer()
Explanation: Init
Create an instance of a global variables initializer.
End of explanation
num_steps = 1000
with tf.Session() as sess:
sess.run(init)
for iteration in range(num_steps):
sess.run(train,
feed_dict = {X: X_data})
# Now ask for the hidden layer output (the 2 dimensional output)
output_2d = hidden_layer.eval(feed_dict = {X: X_data})
Explanation: Running the Session
Now create a TensorFlow session that runs the optimizer for at least 1000 steps. (You can also use epochs if you prefer, where 1 epoch is defined by one single run through the entire dataset.)
End of explanation
output_2d.shape
Explanation: Confirm that your output is now 2 dimensional along the previous axis of 30 features.
End of explanation
plt.scatter(output_2d[:, 0],
output_2d[:, 1],
c = data['Label'])
Explanation: Now plot out the reduced dimensional representation of the data. Do you still have clear separation of classes even with the reduction in dimensions? Hint: You definitely should, the classes should still be clearly separable, even when reduced to 2 dimensions.
End of explanation |
388 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Import Socorro crash data into the Data Platform
We want to be able to store Socorro crash data in Parquet form so that it can be made accessible from re:dash.
Step4: We create the pyspark datatype for representing the crash data in spark. This is a slightly modified version of peterbe/crash-report-struct-code.
Step6: First fetch from the primary source in s3 as per bug 1312006. We fall back to the github location if this is not available.
Step9: Read crash data as json, convert it to parquet | Python Code:
!conda install boto3 --yes
import logging
logging.basicConfig(level=logging.INFO)
log = logging.getLogger(__name__)
Explanation: Import Socorro crash data into the Data Platform
We want to be able to store Socorro crash data in Parquet form so that it can be made accessible from re:dash.
See Bug 1273657 for more details
End of explanation
from pyspark.sql.types import *
def create_struct(schema):
Take a JSON schema and return a pyspark StructType of equivalent structure.
replace_definitions(schema, schema['definitions'])
assert '$ref' not in str(schema), 're-write didnt work'
struct = StructType()
for row in get_rows(schema):
struct.add(row)
return struct
def replace_definitions(schema, definitions):
Replace references in the JSON schema with their definitions.
if 'properties' in schema:
for prop, meta in schema['properties'].items():
replace_definitions(meta, definitions)
elif 'items' in schema:
if '$ref' in schema['items']:
ref = schema['items']['$ref'].split('/')[-1]
schema['items'] = definitions[ref]
replace_definitions(schema['items'], definitions)
else:
replace_definitions(schema['items'], definitions)
elif '$ref' in str(schema):
err_msg = "Reference not found for schema: {}".format(str(schema))
log.error(err_msg)
raise ValueError(err_msg)
def get_rows(schema):
Map the fields in a JSON schema to corresponding data structures in pyspark.
if 'properties' not in schema:
err_msg = "Invalid JSON schema: properties field is missing."
log.error(err_msg)
raise ValueError(err_msg)
for prop in sorted(schema['properties']):
meta = schema['properties'][prop]
if 'string' in meta['type']:
logging.debug("{!r} allows the type to be String AND Integer".format(prop))
yield StructField(prop, StringType(), 'null' in meta['type'])
elif 'integer' in meta['type']:
yield StructField(prop, IntegerType(), 'null' in meta['type'])
elif 'boolean' in meta['type']:
yield StructField(prop, BooleanType(), 'null' in meta['type'])
elif meta['type'] == 'array' and 'items' not in meta:
# Assuming strings in the array
yield StructField(prop, ArrayType(StringType(), False), True)
elif meta['type'] == 'array' and 'items' in meta:
struct = StructType()
for row in get_rows(meta['items']):
struct.add(row)
yield StructField(prop, ArrayType(struct), True)
elif meta['type'] == 'object':
struct = StructType()
for row in get_rows(meta):
struct.add(row)
yield StructField(prop, struct, True)
else:
err_msg = "Invalid JSON schema: {}".format(str(meta)[:100])
log.error(err_msg)
raise ValueError(err_msg)
Explanation: We create the pyspark datatype for representing the crash data in spark. This is a slightly modified version of peterbe/crash-report-struct-code.
End of explanation
import boto3
import botocore
import json
import tempfile
import urllib2
def fetch_schema():
Fetch the crash data schema from an s3 location or github location. This
returns the corresponding JSON schema in a python dictionary.
region = "us-west-2"
bucket = "org-mozilla-telemetry-crashes"
key = "crash_report.json"
fallback_url = "https://raw.githubusercontent.com/mozilla/socorro/master/socorro/schemas/crash_report.json"
try:
log.info("Fetching latest crash data schema from s3://{}/{}".format(bucket, key))
s3 = boto3.client('s3', region_name=region)
# download schema to memory via a file like object
resp = tempfile.TemporaryFile()
s3.download_fileobj(bucket, key, resp)
resp.seek(0)
except botocore.exceptions.ClientError as e:
log.warning(("Could not fetch schema from s3://{}/{}: {}\n"
"Fetching crash data schema from {}")
.format(bucket, key, e, fallback_url))
resp = urllib2.urlopen(fallback_url)
return json.load(resp)
Explanation: First fetch from the primary source in s3 as per bug 1312006. We fall back to the github location if this is not available.
End of explanation
from datetime import datetime as dt, timedelta, date
from pyspark.sql import SQLContext
def daterange(start_date, end_date):
for n in range(int((end_date - start_date).days) + 1):
yield (end_date - timedelta(n)).strftime("%Y%m%d")
def import_day(d, schema, version):
Convert JSON data stored in an S3 bucket into parquet, indexed by crash_date.
source_s3path = "s3://org-mozilla-telemetry-crashes/v1/crash_report"
dest_s3path = "s3://telemetry-parquet/socorro_crash/"
num_partitions = 10
log.info("Processing {}, started at {}".format(d, dt.utcnow()))
cur_source_s3path = "{}/{}".format(source_s3path, d)
cur_dest_s3path = "{}/v{}/crash_date={}".format(dest_s3path, version, d)
df = sqlContext.read.json(cur_source_s3path, schema=schema)
df.repartition(num_partitions).write.parquet(cur_dest_s3path, mode="overwrite")
def backfill(start_date_yyyymmdd, schema, version):
Import data from a start date to yesterday's date.
Example:
backfill("20160902", crash_schema, version)
start_date = dt.strptime(start_date_yyyymmdd, "%Y%m%d")
end_date = dt.utcnow() - timedelta(1) # yesterday
for d in daterange(start_date, end_date):
try:
import_day(d)
except Exception as e:
log.error(e)
from os import environ
# get the relevant date
yesterday = dt.strftime(dt.utcnow() - timedelta(1), "%Y%m%d")
target_date = environ.get('date', yesterday)
# fetch and generate the schema
schema_data = fetch_schema()
crash_schema = create_struct(schema_data)
version = schema_data.get('$target_version', 0) # default to v0
# process the data
import_day(target_date, crash_schema, version)
Explanation: Read crash data as json, convert it to parquet
End of explanation |
389 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Web data extraction (web scraping)
Given the massive amount of data generated on the web, it is important to have tools that allow us to extract data from sources located there. That is what web scraping is about.
In some cases we can get by with the usual text-processing alternatives, although the results are rather unspecific, and this will not always be effective or efficient. For example, we can use wget to download a page and search for html elements in it with regular expressions, but downloading the page implies that the content had to be static. Likewise, regular expressions are not always the best tool, and it is more efficient to use components specifically designed to traverse the html structure, without depending on hand-written matching expressions and instead relying exclusively on the patterns that we already know will be there by default.
Tools (in Python)
For this task we have several tools available, such as:
Step1: When the open function of webbrowser is used, the browser opens (or a new tab opens if the browser was already running). Since we did not specify a browser, this happens with the system's default browser. It can be used to open new tabs and windows, close the browser, and also drive a specific browser.
Step2: However, web extraction depends on obtaining the source code or the elements available on the pages, which is impossible by merely opening the browser. For this purpose it is possible to use urllib or, as we will do in this session, requests.
Step3: If the get call fails to retrieve the code, it is possible to raise a notification with the reason for the failure.
Step4: When we fetch an element from a URL, it arrives as binary, not as plain text. This makes several things easier. It lets us download content that is not only plain text (text files or source code) but also binary files such as images, executables, videos, Word documents and others. It is important to note that if we are going to save a plain-text file, we must write it as a binary file so as not to lose the original encoding of the file.
Step5: In the previous block, the iter_content method generates chunks of the file with the size given in its argument. This is convenient for writing large files.
Step6: We now use bs4 (the import name of Beautiful Soup), which allows us to search for text and specific html structures. This is more convenient than using regular expressions directly on the source code.
When creating the object, we must indicate the text it will act on (it can also be obtained directly from an open file) and the type of parser, in this case lxml.
Step7: Now we will search for all td structures whose class attribute has the value content.
Step8: The result is a list with all the matches found. It is also possible to search one at a time, using find instead of find_all.
Step9: Within the previous filtered result, we will now look for all a tags, which we associate with the presence of the href attribute. In this way we locate the list of files. To get the text inside a tag we use the string property, and the value of an attribute with the get method.
Step10: We had to use encode("utf-8") because the page encoding is utf-8 and not ascii (the default used in Python). We can also query the attributes of a tag, or whether it has a specific attribute, and not only get the value, in the following way.
Step11: Creating the instance of the browser driver depends on the browser of interest. Keep in mind that not all browsers are supported. Support is available for Chrome, Firefox, Opera, IE and PhantomJS. The latter allows the job to be done without opening a browser window (if needed, screenshots can even be generated with the driver's help for validation).
Each browser can have specific requirements. In the case of Firefox, the user profile directory has to be indicated; in the case of Chrome, the path to the driver must be given (it has to be downloaded, since it is not bundled as it is with Firefox or PhantomJS).
It might be possible (I have not verified it) to use other browsers that share the same engine by explicitly indicating the path to the executable. For example, Vivaldi could be controlled by changing the Chrome path (they use the same browser engine).
Step12: Selenium is especially useful not so much for cases that require interaction, but for cases where the contents (including interactive elements) are generated dynamically, or where after an interaction the new link or content appears with a noticeable delay, which would prevent Requests from obtaining the right code. We can extract the source code of the page the browser is currently focused on in the following way. | Python Code:
import webbrowser
Explanation: Extracción de datos web (Web scrapping)
Ante la generación masiva a traves de la red es importante tener herramientas que permitan la extracción de datos a partir de fuentes cuya ubicación es esta. De esto se trata el web scrapping.
Se pueden tener elementos poco especificos mediante las mismas alternativas del procesamiento de texto para algunos casos, sin embargo esto no siempre será efectivo ni eficiente. Por ejemplo, podemos usar wget para descargar una página y hacer la búsqueda de elementos html en ella por medio de expresiones regulares, pero la descarga de la página implica que el contenido debio ser estatico. Igualmente, las expresiones regulares no son la mejor herramienta siempre, y es más eficiente usar elementos especialmente diseñados para recorrer la estructura html sin depender de la generación de expresiones de coincidencia sino obedeciendo exclusivamente a los patrones que ya sabemos que existirán por defecto.
Herramientas (en python)
Para esta labor contamos con algunas herramientas como lo son:
urllib: Modulo incluido en python para la recuperación de contenido de una url.
webbrowser: Modulo incluido en python para la apertura de url's en una instancia del navegador predefinido.
html: Modulo incluido en python para el analisis sintactico html.
Request: Reemplazo externo para urllib con mayores caracteristicas.
Beautiful Soup: Reemplazo externo para html con mayores caracteristicas.
Selenium: Reemplazo externo para webbrowser con mayores caracteristicas.
Wget: Port de wget para python.
Instalar requisitos
Primero que todo, partimos que ya tenemos instalado al menos un navegador (firefox por defecto en la mayor parte de las distribuciones linux). Se puede trabajar con otros navegadores, y es de especial interes PhantomJS, una opción de navegador que no genera interface gráfica, ideal para pruebas o automatización (en caso de ser molesto que el navegador se vea abrir y cerrar, etc...).
pip install selenium beautifulsoup4 Requests
Aplicando
End of explanation
webbrowser.open('http://github.com/')
Explanation: Al usar la función open de webbrowser se abrirá el navegador o una pestaña nueva si el navegador ya estaba abierto. Ya que no indicamos el navegador, esto se realiza con el navegador configurado por defecto en nuestro sistema. Se puede usar para abrir pestañas y ventanas nuevas, cerrarlo, y tambien usar un navegador especifico.
End of explanation
import requests
res = requests.get('http://www.gutenberg.org/files/18251/18251-0.txt')
res.status_code == requests.codes.ok # Validar código 200 (ok)
type(res)
len(res.text)
print(res.text[:250])
Explanation: Sin embargo, la labor de extracción web depende de obtener el código fuente o elementos disponibles en las páginas, lo cual es imposible con solo abrir el navegador. Para este fin es posible usar urllib o como lo haremos en esta sesión, con request.
End of explanation
res = requests.get('http://github.com/yomeinventoesto')
res.raise_for_status()
Explanation: Ante un fallo en el proceso de obtención del código con la función get, es posible generar una notificación del motivo de fallo.
End of explanation
res = requests.get('http://www.programmableweb.com/sites/default/files/github-jupyter.jpg')
archivo_imagen = open('github-jupyter.jpg', 'wb')
for bloques in res.iter_content(100000):
archivo_imagen.write(bloques)
archivo_imagen.close()
Explanation: Cuando obtenemos un elemento de una dirección, este se encuentra como binario y no como texto plano. Esto nos facilita algunas cosas. Nos permite descargar contenido que no se solo texto plano (archivos de texto o código fuente) sino tambien directamente archivos binarios como imagenes, ejecutables, videos, archivos de word y otros. Es importante aclarar, que si vamos a almacenar el archivo de texto plano, debemos hacerlo con creación de archivos binarios para no perder la codificación original que tenga el archivo.
End of explanation
import bs4
Explanation: En el bloque anterior, el método iter_content genera bloques del archivo con el tamaño indicado en su argumento. Esto conviene para la escritura de archivos de gran tamaño.
End of explanation
res = requests.get('https://github.com/cosmoscalibur/herramientas_computacionales')
gh = bs4.BeautifulSoup(res.text, "lxml")
type(gh)
Explanation: Usarmos ahora bs4 (forma de importar Beautiful Soup), lo cual nos permitirá la búsqueda de texto y estructuras html especificas. Este es más conveniente que usar expresiones regulares directamente en el código fuente.
Al crear el objeto, debemos indicar el texto sobre el cual actuará (puede ser obtenido directamente de un archivo abierto tambien) y el tipo de analizador sintactico, en este caso lxml.
End of explanation
tabla_archivos = gh.find_all('td', {'class':'content'})
type(tabla_archivos)
Explanation: Ahora, buscaremos todas las estructuras td que tengan el atributo class con valor content.
End of explanation
len(tabla_archivos)
print(tabla_archivos)
Explanation: El resultado es una lista con todos los resultados obtenidos. Tambien es posible una búsqueda uno a uno, usando find en lugar de find_all.
End of explanation
for content in tabla_archivos:
lineas_a = content('a')
if lineas_a:
texto = "Se encontro el archivo '{}'".format(lineas_a[0].string.encode("utf-8"))
texto += " con enlace '{}'.".format(lineas_a[0].get("href"))
print(texto)
Explanation: En el filtrado anterior, ahora buscaremos todas las etiquetas a las cuales asociamos con la presencia del atributo href. De esta forma localizaremos la lista de archivos. Para obtener el texto al interior de una etiqueta, usamos la propiedad string y el valor de un atributo con el método get.
End of explanation
lineas_a[0].has_attr("href") # Existencia de un atributo
lineas_a[0].attrs # Atributos existentes
from selenium import webdriver
Explanation: Nos vimos en la necesidad de usar encode("utf-8") ya que la codificación de la página es utf-8 y no ascii (el usado por defecto en python). Podemos consultar los atributos de una etiqueta o si posee un atributo especifico, y no solo obtener el valor, de la siguiente forma.
End of explanation
browser = webdriver.Chrome("/home/cosmoscalibur/Downloads/chromedriver")
browser.get('http://github.com')
username = browser.find_element_by_id("user[login]")
username.send_keys("[email protected]")
dar_click = browser.find_element_by_link_text("privacy policy")
dar_click.click()
Explanation: Invocar la instancia del controlador del navegador depende del navegador de interes. Hay que tener encuenta que no todos los navegadores son soportados. Podemos encontrar soporte para Chrome, Firefox, Opera, IE y PhantomJS. Este último permite realizar la labor sin la generación de una ventana para el navegador (en caso de ser necesario, incluso se puede generar capturas de pantalla para su validación con ayuda del controlador).
Acorde a cada navegador, se puede tener requerimientos especificos. En el caso de firefox, se presenta la necesidad de indicar el directorio del perfil de usuario, en el caso de chrome se requiere indicar la ruta del controlador (se descarga ya que no viene incluido como si sucede en firefox o phantomjs).
Podría ser posible (no he verificado) usar otros navegadores si usan el mismo motor de navegación realizando la indicación explicita de la ruta del ejecutable. Por ejemplo, se podría controlar vivaldi realizando el cambio de ruta de chrome (usan el mismo motor de navegación).
End of explanation
codigo = browser.page_source
print(codigo)
Explanation: Resulta bastante útil el uso de selenium no tanto en los casos que requieran de interacción sino en los casos donde los contenidos (incluye elementos de interacción) son de generación dinámica o tras la interacción el nuevo enlace o contenido tiene retrasos apreciables, lo cual evitaría que Request obtenga el código adecuado. Podemos extraer el código fuente de la página en la cual se encuentra el foco del navegador de la siguiente forma.
End of explanation |
390 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Detecção de Outliers nas Cotas Parlamentares
Primeiro, vamos investigar manualmente alguns gastos dos deputados em 2015. Em seguida, usaremos uma técnica simples de Aprendizado de Máquina (Machine Learning) para buscar transações incomuns por todos os dados.
O que é a Cota para Exercício da Atividade Parlamentar (CEAP)?
É um valor mensal recebido, além do salário, para custear os gastos dos deputados na atividade parlamentar. Em 2016, esse valor varia entre R\$ 30.788,66 para deputados do DF e R\$ 44.632,46 para os do Acre. Ainda há um adicional de R\$ 1.353,04 em alguns casos. Mais detalhes na assessoria de imprensa da Câmara dos Deputados.
Conhecendo os dados
Vamos carregar os dados de 2015 e olhar a primeira entrada. Os dados utilizados estão no formato CSV e foram convertidos do original em XML. O significado de cada coluna pode ser conferido no site de transparência da Câmara.
Step1: Análise manual
Por curiosidade, vamos calcular quais foram os 3 parlamentares que mais gastaram em 2015
Step2: O deputado que mais usou a cota parlamentar totalizou R\$ 516.027,24 em 2015, uma média de um pouco mais que R$ 43.000,00 mensais. Vamos verificar seu maior gasto
Step3: Será que um pagamento de R$ 88.500,00 para divulgação da atividade parlamentar é muito alto? Vamos ver os 5 maiores pagamentos desse tipo, entre todos os deputados, ordenado do maior pro menor
Step4: Descobrimos então que outros parlamentares gastaram ainda mais para divulgar suas atividades. Nesse momento, seu foco pode ter mudado dos R\$ 88.500,00 de Jhonatan de Jesus para os R\$ 189.600,00 de Arnaldo Faria de Sá. Comparando os gastos da tabela acima, o primeiro colocado se destoa a ponto de investigarmos melhor esse gasto? Note que começamos com uma ideia
Step5: Observe que os valores de x = 0 são mais frequentes e a frequência diminui para as laterais. Você pode conferir mais detalhes sobre a Distribuição Normal na Wikipedia.
Como é a distribuição dos valores da cota parlamentar?
Step6: Bem diferente da distribuição normal padrão, não é? Embora invisíveis nessa escala, há alguns poucos gastos muito altos à direita. Além disso, notamos muitos gastos próximo do zero e uma diminuição brusca da barra ao lado.
Para que os valores se aproximem da normal, vamos transformá-los aplicando logaritmo, subtraindo a média e dividindo pelo desvio padrão. Vamos ver o resultado e a curva normal sobrepostos
Step7: Agora os gastos estão muito mais próximos da distribuição normal padrão. Segundo Andrew Ng, a distribuição não precisa ser muito igual à normal para obter bons resultados. Podemos seguir para o próximo passo
Step8: Os gastos com divulgação são os primeiros colocados. A tabela acima é a mesma que a última tabela da abordagem manual e sofre do mesmo problema
Step9: NaN significa que não há esse valor nos dados, mas é fácil entender o porquê pelo nome.
Na tabela acima, temos os gastos que mais destoam dentro de suas categorias. Entre os 5 primeiros, 4 são para cobrir alimentação e são cerca de 10 vezes menor que o gasto com serviços postais. Para se ter uma ideia do quanto eles se destacam em suas categorias, vamos ver o valor com alimentação abaixo do qual se encontram 99,865% dos gastos ($3 \sigma$ segundo a tabela que pode ser encontrada na Wikipedia)
Step10: A porcentagem está bem próxima da teórica. Mais de 99% dos gastos com alimentação está abaixo de R\$ 564,93 e não é à toa que os gastos entre 4 e 6 mil estão entre os 5 primeiros da tabela acima. Marllos Sampaio, por exemplo, faz parte dos cerca de 0,3\% que mais gastaram com alimentação.
Por outro lado, pode ser mais interessante investigar o gasto com serviços postais, pois é cerca de 10 vezes maior que os gastos com alimentação. Como podemos destacar os gastos de maior valor aproveitando essa análise dentro de cada categoria? É o que vamos ver a seguir.
Unindo as duas probablilidades
Primeiro, destacamos os maiores valores, mas acabamos priorizando categorias de gastos mais caras. Em seguida, destacamos os maiores valores dentro de cada categoria, mas obtivemos valores relativamente baixos. O ideal seria um balanço entre essas duas abordagens. Lembre-se do ensino médio (colegial, pros mais "experientes" | Python Code:
import pandas as pd
ceap = pd.read_csv('dados/ceap2015.csv.zip')
linhas, colunas = ceap.shape
print('Temos {} entradas com {} colunas cada.'.format(linhas, colunas))
print('Primeira entrada:')
ceap.iloc[0]
Explanation: Detecção de Outliers nas Cotas Parlamentares
Primeiro, vamos investigar manualmente alguns gastos dos deputados em 2015. Em seguida, usaremos uma técnica simples de Aprendizado de Máquina (Machine Learning) para buscar transações incomuns por todos os dados.
O que é a Cota para Exercício da Atividade Parlamentar (CEAP)?
É um valor mensal recebido, além do salário, para custear os gastos dos deputados na atividade parlamentar. Em 2016, esse valor varia entre R\$ 30.788,66 para deputados do DF e R\$ 44.632,46 para os do Acre. Ainda há um adicional de R\$ 1.353,04 em alguns casos. Mais detalhes na assessoria de imprensa da Câmara dos Deputados.
Conhecendo os dados
Vamos carregar os dados de 2015 e olhar a primeira entrada. Os dados utilizados estão no formato CSV e foram convertidos do original em XML. O significado de cada coluna pode ser conferido no site de transparência da Câmara.
End of explanation
colunas = ['txNomeParlamentar', 'sgPartido', 'sgUF', 'vlrLiquido']
grupo = ['txNomeParlamentar', 'sgPartido', 'sgUF']
ceap[colunas].groupby(grupo).sum().sort_values('vlrLiquido', ascending=False).head(3)
Explanation: Análise manual
Por curiosidade, vamos calcular quais foram os 3 parlamentares que mais gastaram em 2015:
End of explanation
nome = "JHONATAN DE JESUS"
ceap[ceap.txNomeParlamentar == nome].sort_values('vlrLiquido', ascending=False).iloc[0]
Explanation: O deputado que mais usou a cota parlamentar totalizou R\$ 516.027,24 em 2015, uma média de um pouco mais que R$ 43.000,00 mensais. Vamos verificar seu maior gasto:
End of explanation
colunas = ['vlrLiquido', 'txNomeParlamentar', 'sgPartido', 'sgUF', 'txtDescricao']
ceap.query('numSubCota == 5')[colunas].sort_values('vlrLiquido', ascending=False).head()
Explanation: Será que um pagamento de R$ 88.500,00 para divulgação da atividade parlamentar é muito alto? Vamos ver os 5 maiores pagamentos desse tipo, entre todos os deputados, ordenado do maior pro menor:
End of explanation
import matplotlib # gráficos
import numpy as np # cálculos
%matplotlib inline
matplotlib.style.use('ggplot')
positivos = ceap[ceap.vlrLiquido > 0].vlrLiquido
aleatorios = pd.Series(np.random.randn(len(positivos)), name='normal')
aleatorios.plot.hist(bins=75, ylim=(0, 35000));
Explanation: Descobrimos então que outros parlamentares gastaram ainda mais para divulgar suas atividades. Nesse momento, seu foco pode ter mudado dos R\$ 88.500,00 de Jhonatan de Jesus para os R\$ 189.600,00 de Arnaldo Faria de Sá. Comparando os gastos da tabela acima, o primeiro colocado se destoa a ponto de investigarmos melhor esse gasto? Note que começamos com uma ideia: o maior gasto do parlamentar que mais gastou no ano e, conforme investigamos, mudamos o rumo para o maior gasto com divulgação entre todos os deputados. Isso pode acontecer repetidas vezes até que de fato escolhamos um gasto para investigar mais a fundo.
Como você já deve ter percebido, a análise manual é muito trabalhosa nas mais de 350 mil entradas que temos. Vejamos agora como processá-las de forma mais objetiva e automatizada.
Aprendizagem de Máquina
Vamos usar uma técnica simples de detecção de outliers lecionada no Coursera por Andrew Ng. Essa técnica diz a probabilidade de um valor específico ocorrer no grupo. Para que ela funcione, os valores devem seguir aproximadamente uma distribuição normal. Não pretendo entrar em detalhes sobre estatística, apenas o suficiente para nos certificarmos de que teremos bons resultados.
Todos os valores
Primeiro, vamos considerar todos os gastos de uma só vez. Será que os valores possuem uma distribuição normal? Veja um exemplo de distribuição normal (padrão):
End of explanation
positivos.plot.hist(bins=75);
Explanation: Observe que os valores de x = 0 são mais frequentes e a frequência diminui para as laterais. Você pode conferir mais detalhes sobre a Distribuição Normal na Wikipedia.
Como é a distribuição dos valores da cota parlamentar?
End of explanation
def log_zscores(valores):
positivos = valores[valores > 0].dropna()
logs = np.log(positivos)
return (logs - logs.mean()) / logs.std()
vlrLiquido_z = log_zscores(ceap.vlrLiquido)
pd.concat([aleatorios, vlrLiquido_z], axis=1).plot.hist(bins=75, alpha=0.6);
Explanation: Bem diferente da distribuição normal padrão, não é? Embora invisíveis nessa escala, há alguns poucos gastos muito altos à direita. Além disso, notamos muitos gastos próximo do zero e uma diminuição brusca da barra ao lado.
Para que os valores se aproximem da normal, vamos transformá-los aplicando logaritmo, subtraindo a média e dividindo pelo desvio padrão. Vamos ver o resultado e a curva normal sobrepostos:
End of explanation
from scipy.stats import norm
def prob(valores):
probs = valores.copy()
probs[probs <= 0] = np.nan
z = log_zscores(probs)
probs[z.index] = norm.sf(z)
return probs
ceap['prob_geral'] = prob(ceap.vlrLiquido)
colunas = ['prob_geral', 'vlrLiquido', 'txNomeParlamentar', 'sgPartido', 'sgUF', 'txtDescricao']
ceap[colunas].sort_values('prob_geral').head()
Explanation: Agora os gastos estão muito mais próximos da distribuição normal padrão. Segundo Andrew Ng, a distribuição não precisa ser muito igual à normal para obter bons resultados. Podemos seguir para o próximo passo: calcular a probabilidade da ocorrência de cada valor. Talvez você já tenha ouvido falar em "6 Sigma" ($6\sigma$) e esse nome vem do fato de que um intervalo entre $-3 \sigma$ e $3\sigma$ abrange quase 100% dos valores de uma distribuição normal. Na distribuição normal padrão, $\sigma = 1$.
Vamos então calcular a probabilidade de cada gasto, supondo que eles sigam uma distribuição normal e, em seguida, mostrar aqueles que possuem menor probabilidade de ocorrência (os 5 primeiros).
End of explanation
colunas = ['numSubCota', 'vlrLiquido']
ceap['prob_grupo'] = ceap[colunas].groupby('numSubCota').transform(prob)
colunas = ['prob_grupo', 'vlrLiquido', 'txNomeParlamentar', 'sgPartido', 'sgUF', 'txtDescricao']
ceap[colunas].sort_values('prob_grupo').head()
Explanation: Os gastos com divulgação são os primeiros colocados. A tabela acima é a mesma que a última tabela da abordagem manual e sofre do mesmo problema: o catagoria com os maiores gastos é penalizada. Vamos corrigir isso a seguir.
Valores por tipo de gasto
Vamos calcular as probabilidades da mesma maneira, mas considerando apenas os valores do mesmo grupo.
End of explanation
alim = ceap.query('numSubCota == 13 and vlrLiquido > 0').vlrLiquido.dropna()
alim_log = np.log(alim)
média_log = alim_log.mean()
sigma_log = alim_log.std()
limite_log = média_log + 3 * sigma_log
limite = np.exp(limite_log)
print('Valor limite = R$ {:.2f}:'.format(limite))
valores_abaixo = len(alim[alim < limite])
valores_totais = len(alim)
print('{} valores abaixo, em um total de {} = {:.3f}%.'.format(
valores_abaixo, valores_totais, 100 * valores_abaixo/valores_totais))
Explanation: NaN significa que não há esse valor nos dados, mas é fácil entender o porquê pelo nome.
Na tabela acima, temos os gastos que mais destoam dentro de suas categorias. Entre os 5 primeiros, 4 são para cobrir alimentação e são cerca de 10 vezes menor que o gasto com serviços postais. Para se ter uma ideia do quanto eles se destacam em suas categorias, vamos ver o valor com alimentação abaixo do qual se encontram 99,865% dos gastos ($3 \sigma$ segundo a tabela que pode ser encontrada na Wikipedia):
End of explanation
ceap['prob_total'] = ceap.prob_geral * ceap.prob_grupo
colunas = ['prob_total', 'vlrLiquido', 'txNomeParlamentar', 'sgPartido', 'sgUF', 'txtDescricao']
ceap[colunas].sort_values('prob_total').head(10)
Explanation: A porcentagem está bem próxima da teórica. Mais de 99% dos gastos com alimentação está abaixo de R\$ 564,93 e não é à toa que os gastos entre 4 e 6 mil estão entre os 5 primeiros da tabela acima. Marllos Sampaio, por exemplo, faz parte dos cerca de 0,3\% que mais gastaram com alimentação.
Por outro lado, pode ser mais interessante investigar o gasto com serviços postais, pois é cerca de 10 vezes maior que os gastos com alimentação. Como podemos destacar os gastos de maior valor aproveitando essa análise dentro de cada categoria? É o que vamos ver a seguir.
Unindo as duas probablilidades
Primeiro, destacamos os maiores valores, mas acabamos priorizando categorias de gastos mais caras. Em seguida, destacamos os maiores valores dentro de cada categoria, mas obtivemos valores relativamente baixos. O ideal seria um balanço entre essas duas abordagens. Lembre-se do ensino médio (colegial, pros mais "experientes" :)) que a probabilidade de ocorrer $x$ e $y$ é igual a $P(x) \times P(y)$.
Vamos multiplicar as probabilidades, ordená-las e listar as primeiras:
End of explanation |
391 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Homework #4
These problem sets focus on list comprehensions, string operations and regular expressions.
Problem set #1
Step1: In the following cell, complete the code with an expression that evaluates to a list of integers derived from the raw numbers in numbers_str, assigning the value of this expression to a variable numbers. If you do everything correctly, executing the cell should produce the output 985 (not '985').
Step2: Great! We'll be using the numbers list you created above in the next few problems.
In the cell below, fill in the square brackets so that the expression evaluates to a list of the ten largest values in numbers. Expected output
Step3: In the cell below, write an expression that evaluates to a list of the integers from numbers that are evenly divisible by three, sorted in numerical order. Expected output
Step4: Okay. You're doing great. Now, in the cell below, write an expression that evaluates to a list of the square roots of all the integers in numbers that are less than 100. In order to do this, you'll need to use the sqrt function from the math module, which I've already imported for you. Expected output
Step5: Problem set #2
Step6: Now, in the cell below, write a list comprehension that evaluates to a list of names of the planets that have a radius greater than four earth radii. Expected output
Step7: In the cell below, write a single expression that evaluates to the sum of the mass of all planets in the solar system. Expected output
Step8: Good work. Last one with the planets. Write an expression that evaluates to the names of the planets that have the word giant anywhere in the value for their type key. Expected output
Step9: EXTREME BONUS ROUND
Step10: Problem set #3
Step11: In the cell above, I defined a variable poem_lines which has a list of lines in the poem, and imported the re library.
In the cell below, write a list comprehension (using re.search()) that evaluates to a list of lines that contain two words next to each other (separated by a space) that have exactly four characters. (Hint
Step12: Good! Now, in the following cell, write a list comprehension that evaluates to a list of lines in the poem that end with a five-letter word, regardless of whether or not there is punctuation following the word at the end of the line. (Hint
Step13: Okay, now a slightly trickier one. In the cell below, I've created a string all_lines which evaluates to the entire text of the poem in one string. Execute this cell.
Step14: Now, write an expression that evaluates to all of the words in the poem that follow the word 'I'. (The strings in the resulting list should not include the I.) Hint
Step15: Finally, something super tricky. Here's a list of strings that contains a restaurant menu. Your job is to wrangle this plain text, slightly-structured data into a list of dictionaries.
Step16: You'll need to pull out the name of the dish and the price of the dish. The v after the hyphen indicates that the dish is vegetarian---you'll need to include that information in your dictionary as well. I've included the basic framework; you just need to fill in the contents of the for loop.
Expected output | Python Code:
numbers_str = '496,258,332,550,506,699,7,985,171,581,436,804,736,528,65,855,68,279,721,120'
Explanation: Homework #4
These problem sets focus on list comprehensions, string operations and regular expressions.
Problem set #1: List slices and list comprehensions
Let's start with some data. The following cell contains a string with comma-separated integers, assigned to a variable called numbers_str:
End of explanation
values = numbers_str.split(",")
numbers = [int(i) for i in values]
# numbers
max(numbers)
Explanation: In the following cell, complete the code with an expression that evaluates to a list of integers derived from the raw numbers in numbers_str, assigning the value of this expression to a variable numbers. If you do everything correctly, executing the cell should produce the output 985 (not '985').
End of explanation
#test
print(sorted(numbers))
sorted(numbers)[10:]
Explanation: Great! We'll be using the numbers list you created above in the next few problems.
In the cell below, fill in the square brackets so that the expression evaluates to a list of the ten largest values in numbers. Expected output:
[506, 528, 550, 581, 699, 721, 736, 804, 855, 985]
(Hint: use a slice.)
End of explanation
[i for i in sorted(numbers) if i%3 == 0]
Explanation: In the cell below, write an expression that evaluates to a list of the integers from numbers that are evenly divisible by three, sorted in numerical order. Expected output:
[120, 171, 258, 279, 528, 699, 804, 855]
End of explanation
import math
from math import sqrt
[math.sqrt(i) for i in sorted(numbers) if i < 100]
Explanation: Okay. You're doing great. Now, in the cell below, write an expression that evaluates to a list of the square roots of all the integers in numbers that are less than 100. In order to do this, you'll need to use the sqrt function from the math module, which I've already imported for you. Expected output:
[2.6457513110645907, 8.06225774829855, 8.246211251235321]
(These outputs might vary slightly depending on your platform.)
End of explanation
planets = [
{'diameter': 0.382,
'mass': 0.06,
'moons': 0,
'name': 'Mercury',
'orbital_period': 0.24,
'rings': 'no',
'type': 'terrestrial'},
{'diameter': 0.949,
'mass': 0.82,
'moons': 0,
'name': 'Venus',
'orbital_period': 0.62,
'rings': 'no',
'type': 'terrestrial'},
{'diameter': 1.00,
'mass': 1.00,
'moons': 1,
'name': 'Earth',
'orbital_period': 1.00,
'rings': 'no',
'type': 'terrestrial'},
{'diameter': 0.532,
'mass': 0.11,
'moons': 2,
'name': 'Mars',
'orbital_period': 1.88,
'rings': 'no',
'type': 'terrestrial'},
{'diameter': 11.209,
'mass': 317.8,
'moons': 67,
'name': 'Jupiter',
'orbital_period': 11.86,
'rings': 'yes',
'type': 'gas giant'},
{'diameter': 9.449,
'mass': 95.2,
'moons': 62,
'name': 'Saturn',
'orbital_period': 29.46,
'rings': 'yes',
'type': 'gas giant'},
{'diameter': 4.007,
'mass': 14.6,
'moons': 27,
'name': 'Uranus',
'orbital_period': 84.01,
'rings': 'yes',
'type': 'ice giant'},
{'diameter': 3.883,
'mass': 17.2,
'moons': 14,
'name': 'Neptune',
'orbital_period': 164.8,
'rings': 'yes',
'type': 'ice giant'}]
Explanation: Problem set #2: Still more list comprehensions
Still looking good. Let's do a few more with some different data. In the cell below, I've defined a data structure and assigned it to a variable planets. It's a list of dictionaries, with each dictionary describing the characteristics of a planet in the solar system. Make sure to run the cell before you proceed.
End of explanation
earth_diameter = planets[2]['diameter']
#earth radius is = half diameter. In a multiplication equation the diameter value can be use as a parameter.
[i['name'] for i in planets if i['diameter'] >= earth_diameter*4]
Explanation: Now, in the cell below, write a list comprehension that evaluates to a list of names of the planets that have a radius greater than four earth radii. Expected output:
['Jupiter', 'Saturn', 'Uranus']
End of explanation
mass_list = []
for planet in planets:
outcome = planet['mass']
mass_list.append(outcome)
total = sum(mass_list)
total
Explanation: In the cell below, write a single expression that evaluates to the sum of the mass of all planets in the solar system. Expected output: 446.79
End of explanation
[i['name'] for i in planets if 'giant' in i['type']]
Explanation: Good work. Last one with the planets. Write an expression that evaluates to the names of the planets that have the word giant anywhere in the value for their type key. Expected output:
['Jupiter', 'Saturn', 'Uranus', 'Neptune']
End of explanation
#Done in class
Explanation: EXTREME BONUS ROUND: Write an expression below that evaluates to a list of the names of the planets in ascending order by their number of moons. (The easiest way to do this involves using the key parameter of the sorted function, which we haven't yet discussed in class! That's why this is an EXTREME BONUS question.) Expected output:
['Mercury', 'Venus', 'Earth', 'Mars', 'Neptune', 'Uranus', 'Saturn', 'Jupiter']
End of explanation
import re
poem_lines = ['Two roads diverged in a yellow wood,',
'And sorry I could not travel both',
'And be one traveler, long I stood',
'And looked down one as far as I could',
'To where it bent in the undergrowth;',
'',
'Then took the other, as just as fair,',
'And having perhaps the better claim,',
'Because it was grassy and wanted wear;',
'Though as for that the passing there',
'Had worn them really about the same,',
'',
'And both that morning equally lay',
'In leaves no step had trodden black.',
'Oh, I kept the first for another day!',
'Yet knowing how way leads on to way,',
'I doubted if I should ever come back.',
'',
'I shall be telling this with a sigh',
'Somewhere ages and ages hence:',
'Two roads diverged in a wood, and I---',
'I took the one less travelled by,',
'And that has made all the difference.']
Explanation: Problem set #3: Regular expressions
In the following section, we're going to do a bit of digital humanities. (I guess this could also be journalism if you were... writing an investigative piece about... early 20th century American poetry?) We'll be working with the following text, Robert Frost's The Road Not Taken. Make sure to run the following cell before you proceed.
End of explanation
[line for line in poem_lines if re.search(r"\b\w{4}\b\s\b\w{4}\b", line)]
Explanation: In the cell above, I defined a variable poem_lines which has a list of lines in the poem, and imported the re library.
In the cell below, write a list comprehension (using re.search()) that evaluates to a list of lines that contain two words next to each other (separated by a space) that have exactly four characters. (Hint: use the \b anchor. Don't overthink the "two words in a row" requirement.)
Expected result:
['Then took the other, as just as fair,',
'Had worn them really about the same,',
'And both that morning equally lay',
'I doubted if I should ever come back.',
'I shall be telling this with a sigh']
End of explanation
[line for line in poem_lines if re.search(r"(?:\s\w{5}\b$|\s\w{5}\b[.:;,]$)", line)]
Explanation: Good! Now, in the following cell, write a list comprehension that evaluates to a list of lines in the poem that end with a five-letter word, regardless of whether or not there is punctuation following the word at the end of the line. (Hint: Try using the ? quantifier. Is there an existing character class, or a way to write a character class, that matches non-alphanumeric characters?) Expected output:
['And be one traveler, long I stood',
'And looked down one as far as I could',
'And having perhaps the better claim,',
'Though as for that the passing there',
'In leaves no step had trodden black.',
'Somewhere ages and ages hence:']
End of explanation
all_lines = " ".join(poem_lines)
Explanation: Okay, now a slightly trickier one. In the cell below, I've created a string all_lines which evaluates to the entire text of the poem in one string. Execute this cell.
End of explanation
[item[2:] for item in (re.findall(r"\bI\b\s\b[a-z]{1,}", all_lines))]
Explanation: Now, write an expression that evaluates to all of the words in the poem that follow the word 'I'. (The strings in the resulting list should not include the I.) Hint: Use re.findall() and grouping! Expected output:
['could', 'stood', 'could', 'kept', 'doubted', 'should', 'shall', 'took']
End of explanation
entrees = [
"Yam, Rosemary and Chicken Bowl with Hot Sauce $10.95",
"Lavender and Pepperoni Sandwich $8.49",
"Water Chestnuts and Peas Power Lunch (with mayonnaise) $12.95 - v",
"Artichoke, Mustard Green and Arugula with Sesame Oil over noodles $9.95 - v",
"Flank Steak with Lentils And Tabasco Pepper With Sweet Chilli Sauce $19.95",
"Rutabaga And Cucumber Wrap $8.49 - v"
]
Explanation: Finally, something super tricky. Here's a list of strings that contains a restaurant menu. Your job is to wrangle this plain text, slightly-structured data into a list of dictionaries.
End of explanation
menu = []
for dish in entrees:
match = re.search(r"^(.*) \$(.*)", dish)
vegetarian = re.search(r"v$", match.group(2))
price = re.search(r"(?:\d\.\d\d|\d\d\.\d\d)", dish)
if vegetarian == None:
vegetarian = False
else:
vegetarian = True
if match:
dish = {
'name': match.group(1), 'price': price.group(), 'vegetarian': vegetarian
}
menu.append(dish)
menu
Explanation: You'll need to pull out the name of the dish and the price of the dish. The v after the hyphen indicates that the dish is vegetarian---you'll need to include that information in your dictionary as well. I've included the basic framework; you just need to fill in the contents of the for loop.
Expected output:
[{'name': 'Yam, Rosemary and Chicken Bowl with Hot Sauce ',
'price': 10.95,
'vegetarian': False},
{'name': 'Lavender and Pepperoni Sandwich ',
'price': 8.49,
'vegetarian': False},
{'name': 'Water Chestnuts and Peas Power Lunch (with mayonnaise) ',
'price': 12.95,
'vegetarian': True},
{'name': 'Artichoke, Mustard Green and Arugula with Sesame Oil over noodles ',
'price': 9.95,
'vegetarian': True},
{'name': 'Flank Steak with Lentils And Tabasco Pepper With Sweet Chilli Sauce ',
'price': 19.95,
'vegetarian': False},
{'name': 'Rutabaga And Cucumber Wrap ', 'price': 8.49, 'vegetarian': True}]
End of explanation |
392 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
You are currently looking at version 1.1 of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the Jupyter Notebook FAQ course resource.
Assignment 1 - Introduction to Machine Learning
For this assignment, you will be using the Breast Cancer Wisconsin (Diagnostic) Database to create a classifier that can help diagnose patients. First, read through the description of the dataset (below).
Step1: The object returned by load_breast_cancer() is a scikit-learn Bunch object, which is similar to a dictionary.
Step2: Question 0 (Example)
How many features does the breast cancer dataset have?
This function should return an integer.
Step3: Question 1
Scikit-learn works with lists, numpy arrays, scipy-sparse matrices, and pandas DataFrames, so converting the dataset to a DataFrame is not necessary for training this model. Using a DataFrame does however help make many things easier such as munging data, so let's practice creating a classifier with a pandas DataFrame.
Convert the sklearn.dataset cancer to a DataFrame.
*This function should return a (569, 31) DataFrame with *
*columns = *
['mean radius', 'mean texture', 'mean perimeter', 'mean area',
'mean smoothness', 'mean compactness', 'mean concavity',
'mean concave points', 'mean symmetry', 'mean fractal dimension',
'radius error', 'texture error', 'perimeter error', 'area error',
'smoothness error', 'compactness error', 'concavity error',
'concave points error', 'symmetry error', 'fractal dimension error',
'worst radius', 'worst texture', 'worst perimeter', 'worst area',
'worst smoothness', 'worst compactness', 'worst concavity',
'worst concave points', 'worst symmetry', 'worst fractal dimension',
'target']
*and index = *
RangeIndex(start=0, stop=569, step=1)
Step4: Question 2
What is the class distribution? (i.e. how many instances of malignant (encoded 0) and how many benign (encoded 1)?)
This function should return a Series named target of length 2 with integer values and index = ['malignant', 'benign']
Step5: Question 3
Split the DataFrame into X (the data) and y (the labels).
This function should return a tuple of length 2
Step6: Question 4
Using train_test_split, split X and y into training and test sets (X_train, X_test, y_train, and y_test).
Set the random number generator state to 0 using random_state=0 to make sure your results match the autograder!
This function should return a tuple of length 4
Step7: Question 5
Using KNeighborsClassifier, fit a k-nearest neighbors (knn) classifier with X_train, y_train and using one nearest neighbor (n_neighbors = 1).
*This function should return a * sklearn.neighbors.classification.KNeighborsClassifier.
Step8: Question 6
Using your knn classifier, predict the class label using the mean value for each feature.
Hint
Step9: Question 7
Using your knn classifier, predict the class labels for the test set X_test.
This function should return a numpy array with shape (143,) and values either 0.0 or 1.0.
Step10: Question 8
Find the score (mean accuracy) of your knn classifier using X_test and y_test.
This function should return a float between 0 and 1
Step11: Optional plot
Try using the plotting function below to visualize the differet predicition scores between training and test sets, as well as malignant and benign cells. | Python Code:
import numpy as np
import pandas as pd
from sklearn.datasets import load_breast_cancer
cancer = load_breast_cancer()
#print(cancer.DESCR) # Print the data set description
Explanation: You are currently looking at version 1.1 of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the Jupyter Notebook FAQ course resource.
Assignment 1 - Introduction to Machine Learning
For this assignment, you will be using the Breast Cancer Wisconsin (Diagnostic) Database to create a classifier that can help diagnose patients. First, read through the description of the dataset (below).
End of explanation
cancer.keys()
Explanation: The object returned by load_breast_cancer() is a scikit-learn Bunch object, which is similar to a dictionary.
End of explanation
# You should write your whole answer within the function provided. The autograder will call
# this function and compare the return value against the correct solution value
def answer_zero():
# This function returns the number of features of the breast cancer dataset, which is an integer.
# The assignment question description will tell you the general format the autograder is expecting
return len(cancer['feature_names'])
# You can examine what your function returns by calling it in the cell. If you have questions
# about the assignment formats, check out the discussion forums for any FAQs
answer_zero()
Explanation: Question 0 (Example)
How many features does the breast cancer dataset have?
This function should return an integer.
End of explanation
def answer_one():
return pd.DataFrame(data= np.c_[cancer['data'], cancer['target']],
columns= np.append(cancer['feature_names'], ['target']))
answer_one()
Explanation: Question 1
Scikit-learn works with lists, numpy arrays, scipy-sparse matrices, and pandas DataFrames, so converting the dataset to a DataFrame is not necessary for training this model. Using a DataFrame does however help make many things easier such as munging data, so let's practice creating a classifier with a pandas DataFrame.
Convert the sklearn.dataset cancer to a DataFrame.
*This function should return a (569, 31) DataFrame with *
*columns = *
['mean radius', 'mean texture', 'mean perimeter', 'mean area',
'mean smoothness', 'mean compactness', 'mean concavity',
'mean concave points', 'mean symmetry', 'mean fractal dimension',
'radius error', 'texture error', 'perimeter error', 'area error',
'smoothness error', 'compactness error', 'concavity error',
'concave points error', 'symmetry error', 'fractal dimension error',
'worst radius', 'worst texture', 'worst perimeter', 'worst area',
'worst smoothness', 'worst compactness', 'worst concavity',
'worst concave points', 'worst symmetry', 'worst fractal dimension',
'target']
*and index = *
RangeIndex(start=0, stop=569, step=1)
End of explanation
def answer_two():
cancerdf = answer_one()
s_count = cancerdf['target'].value_counts()
return s_count.rename({1.0: 'benign', 0.0: 'malignant'})
answer_two()
Explanation: Question 2
What is the class distribution? (i.e. how many instances of malignant (encoded 0) and how many benign (encoded 1)?)
This function should return a Series named target of length 2 with integer values and index = ['malignant', 'benign']
End of explanation
def answer_three():
cancerdf = answer_one()
columns = ['mean radius', 'mean texture', 'mean perimeter', 'mean area',
'mean smoothness', 'mean compactness', 'mean concavity',
'mean concave points', 'mean symmetry', 'mean fractal dimension',
'radius error', 'texture error', 'perimeter error', 'area error',
'smoothness error', 'compactness error', 'concavity error',
'concave points error', 'symmetry error', 'fractal dimension error',
'worst radius', 'worst texture', 'worst perimeter', 'worst area',
'worst smoothness', 'worst compactness', 'worst concavity',
'worst concave points', 'worst symmetry', 'worst fractal dimension']
X = cancerdf[columns]
y = cancerdf['target']
return X, y
Explanation: Question 3
Split the DataFrame into X (the data) and y (the labels).
This function should return a tuple of length 2: (X, y), where
* X has shape (569, 30)
* y has shape (569,).
End of explanation
from sklearn.model_selection import train_test_split
def answer_four():
X, y = answer_three()
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
return X_train, X_test, y_train, y_test
Explanation: Question 4
Using train_test_split, split X and y into training and test sets (X_train, X_test, y_train, and y_test).
Set the random number generator state to 0 using random_state=0 to make sure your results match the autograder!
This function should return a tuple of length 4: (X_train, X_test, y_train, y_test), where
* X_train has shape (426, 30)
* X_test has shape (143, 30)
* y_train has shape (426,)
* y_test has shape (143,)
End of explanation
from sklearn.neighbors import KNeighborsClassifier
def answer_five():
X_train, X_test, y_train, y_test = answer_four()
knn = KNeighborsClassifier(n_neighbors = 1)
knn.fit(X_train, y_train)
return knn
Explanation: Question 5
Using KNeighborsClassifier, fit a k-nearest neighbors (knn) classifier with X_train, y_train and using one nearest neighbor (n_neighbors = 1).
*This function should return a * sklearn.neighbors.classification.KNeighborsClassifier.
End of explanation
def answer_six():
cancerdf = answer_one()
knn = answer_five()
means = cancerdf.mean()[:-1].values.reshape(1, -1)
return knn.predict(means)
Explanation: Question 6
Using your knn classifier, predict the class label using the mean value for each feature.
Hint: You can use cancerdf.mean()[:-1].values.reshape(1, -1) which gets the mean value for each feature, ignores the target column, and reshapes the data from 1 dimension to 2 (necessary for the precict method of KNeighborsClassifier).
This function should return a numpy array either array([ 0.]) or array([ 1.])
End of explanation
def answer_seven():
X_train, X_test, y_train, y_test = answer_four()
knn = answer_five()
return knn.predict(X_test)
Explanation: Question 7
Using your knn classifier, predict the class labels for the test set X_test.
This function should return a numpy array with shape (143,) and values either 0.0 or 1.0.
End of explanation
def answer_eight():
X_train, X_test, y_train, y_test = answer_four()
knn = answer_five()
# Your code here
return knn.score(X_test, y_test)
Explanation: Question 8
Find the score (mean accuracy) of your knn classifier using X_test and y_test.
This function should return a float between 0 and 1
End of explanation
def accuracy_plot():
import matplotlib.pyplot as plt
%matplotlib notebook
X_train, X_test, y_train, y_test = answer_four()
# Find the training and testing accuracies by target value (i.e. malignant, benign)
mal_train_X = X_train[y_train==0]
mal_train_y = y_train[y_train==0]
ben_train_X = X_train[y_train==1]
ben_train_y = y_train[y_train==1]
mal_test_X = X_test[y_test==0]
mal_test_y = y_test[y_test==0]
ben_test_X = X_test[y_test==1]
ben_test_y = y_test[y_test==1]
knn = answer_five()
scores = [knn.score(mal_train_X, mal_train_y), knn.score(ben_train_X, ben_train_y),
knn.score(mal_test_X, mal_test_y), knn.score(ben_test_X, ben_test_y)]
plt.figure()
# Plot the scores as a bar chart
bars = plt.bar(np.arange(4), scores, color=['#4c72b0','#4c72b0','#55a868','#55a868'])
# directly label the score onto the bars
for bar in bars:
height = bar.get_height()
plt.gca().text(bar.get_x() + bar.get_width()/2, height*.90, '{0:.{1}f}'.format(height, 2),
ha='center', color='w', fontsize=11)
# remove all the ticks (both axes), and tick labels on the Y axis
plt.tick_params(top='off', bottom='off', left='off', right='off', labelleft='off', labelbottom='on')
# remove the frame of the chart
for spine in plt.gca().spines.values():
spine.set_visible(False)
plt.xticks([0,1,2,3], ['Malignant\nTraining', 'Benign\nTraining', 'Malignant\nTest', 'Benign\nTest'], alpha=0.8);
plt.title('Training and Test Accuracies for Malignant and Benign Cells', alpha=0.8)
# Uncomment the plotting function to see the visualization,
# Comment out the plotting function when submitting your notebook for grading
# accuracy_plot()
Explanation: Optional plot
Try using the plotting function below to visualize the differet predicition scores between training and test sets, as well as malignant and benign cells.
End of explanation |
393 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: <h2>Textbook example
Step2: To complete the model we need to define some parameter values.
Step3: <h2>Solving the model with pyCollocation</h2>
<h3>Defining a `pycollocation.TwoPointBVP` instance</h3>
Step4: Finding a good initial guess for $k(t)$
Theory tells us that, starting from some initial condition $k_0$, the solution to the Solow model converges monotonically toward its long run equilibrium value $k^*$. Our initial guess for the solution should preserve this property...
Step5: Solving the model
Step6: <h3> Polynomial basis functions </h3>
Step7: <h3> B-spline basis functions </h3>
Step8: <h1> Generic Ramsey-Cass-Koopmans model</h1>
Can we refactor the above code so that we can solve a Ramsey-Cass-Koopmans model for arbitrary $f$ and $u$? Yes!
Step10: Example usage... | Python Code:
from scipy import optimize
def nominal_interest_rate(X, pi, i_star, phi_X, phi_pi):
Nominal interest rate follows a Taylor rule.
return i_star + phi_X * np.log(X) + phi_pi * pi
def output_gap(X, pi, g, i_star, phi_X, phi_pi, rho):
i = nominal_interest_rate(X, pi, i_star, phi_X, phi_pi)
return (i - pi - rho - g) * X
def inflation(X, pi, epsilon, psi, rho, theta):
return rho * pi - ((epsilon - 1) / theta) * (X**(1 + psi) - 1)
def basic_nk_model(t, X, pi, epsilon, g, i_star, phi_X, phi_pi, psi, rho, theta, **params):
out = [output_gap(X, pi, g, i_star, phi_X, phi_pi, rho),
inflation(X, pi, epsilon, psi, rho, theta)]
return out
def terminal_condition(t, X, pi, initial_guess, **params):
X_star, pi_star = find_equilibrium(initial_guess, params)
out = [X - X_star, pi - pi_star]
return out
def _equilibrium_system(vec, params):
X, pi = vec
return basic_nk_model(0.0, X, pi, **params)
def find_equilibrium(initial_guess, params):
result = optimize.root(_equilibrium_system,
x0=initial_guess,
args=(params))
if result.success:
return result.x
else:
return result
basic_nk_model(0.0, 1.0, 0.0, 0.1, 0.05, 0.1, 1.0, 0.5, 1.5, 0.05, 1.0)
Explanation: <h2>Textbook example: Basic New Keynesian Model</h2>
<h2> Households </h2>
Suppose that representative household wishes to maximize...
$$\int_{t=0}^{\infty} e^{-\rho t}u(C(t), N(t))L(t)dt$$
...where the flow utility function $u(C(t), N(t))$ is assumed to be additively separable in its two arguments consumption, $C(t)$, and labor supply, $N(t)$ (both measured per member of the household). Note that $L(t)$, the size of the representative household, is assumed to grow at a constant and exogenous rate $n$.
The representative household faces the following intertemporal budget constraint...
$$\dot{B}(t) = i(t)B(t) + W(t)N(t)L(t) - P(t)C(t)L(t)$$
...where $i(t)$ is the nominal interest rate, $B(t)$ is the quantity of bonds held by the representative household, $W(t)$ is the nominal wage paid for labor, and $P(t)$ is the price of consumption goods.
<h3> Solution to the household problem </h3>
Form the Hamiltonian...
$$ H(t, B, C, N, \lambda) \equiv e^{-\rho t}u(C(t), N(t))L(t) + \lambda(t)\bigg[i(t)B(t) + W(t)N(t)L(t) - P(t)C(t)L(t)\bigg] $$
...differentiate with respect to control variables $C$ and $N$ and the state variable $B$...
\begin{align}
\frac{\partial H}{\partial C} \equiv& e^{-\rho t}\frac{\partial u}{\partial C}L(t) - P(t)L(t)\lambda(t) \
\frac{\partial H}{\partial N} \equiv& e^{-\rho t}\frac{\partial u}{\partial N}L(t) - W(t)L(t)\lambda(t)\
\frac{\partial H}{\partial B} \equiv& i(t)\lambda(t)
\end{align}
...the state and costate equations are...
\begin{align}
\dot{B}(t) = \frac{\partial H}{\partial \lambda} =& i(t)B(t) + W(t)N(t)L(t) - P(t)C(t)L(t) \
\dot{\lambda} = -\frac{\partial H}{\partial B} =& -i(t)\lambda(t)\
\end{align}
After a bit of algebra (TODO: Add algebra!), we find that the behavior of the representative household is described by the consumption Euler equation...
$$ \frac{\dot{C}}{C} = \frac{1}{R(C)}\bigg[(i - \pi) - \rho\bigg] $$
...where...
$$ R(C) = -\frac{C\frac{\partial^2 u}{\partial C^2}}{\frac{\partial u}{\partial C}}$$
...is the <a href="https://en.wikipedia.org/wiki/Risk_aversion">Pratt-Arrow measure of relative risk aversion</a>. Consumption Euler equation says that consumption growth is proportional to the gap between the real interest rate $i - \pi$ and the discount rate $\rho$; and inversely proportional to risk preferences.
a first-order condition describing the trade-off between consumption and labor supply...
$$ \frac{W}{P}\frac{\partial u}{\partial C} = -\frac{\partial u}{\partial N} $$
...and the budget constraint...
$$\dot{B}(t) = i(t)B(t) + W(t)N(t)L(t) - P(t)C(t)L(t).$$
<h2> Final goods producers </h2>
Competitive final goods firm produces consumption goods using a continuum of intermediate inputs...
$$ Y = \Bigg[\int_0^1 y_j^{\frac{\epsilon - 1}{\epsilon}}dj\Bigg]^{\frac{\epsilon}{\epsilon - 1}} $$
...final goods firm solves static cost minimization problem...
$$\min_{y_j} \int_0^1 p_jy_jdj$$
...subject to feasibility constraint...
$$ Y = \Bigg[\int_0^1 y_j^{\frac{\epsilon - 1}{\epsilon}}dj\Bigg]^{\frac{\epsilon}{\epsilon - 1}}. $$
<h3> Solution to the firms problem </h3>
Form the Lagrangian...
$$ \mathcal{L} \equiv \int_0^1 p_jy_jdj + \lambda\Bigg(Y - \Bigg[\int_0^1 y_j^{\frac{\epsilon - 1}{\epsilon}}dj\Bigg]^{\frac{\epsilon}{\epsilon - 1}}\Bigg)$$
First-order conditions are...
$$ p_j - \lambda\frac{y_j^{-\frac{1}{\epsilon}}}{\int_0^1 y_j^{\frac{\epsilon - 1}{\epsilon}}dj}Y = 0\ \forall j$$
After quite a bit of algebra you can derive the firm's demand function for intermediate input $j$ as a function of its own price $p_j$ and the aggregate price level $P$ and output $Y$...
\begin{align}
%\frac{p_i}{p_j} =& \frac{y_i^{-\frac{1}{\epsilon}}}{y_j^{-\frac{1}{\epsilon}}}
%\frac{y_i}{y_j} =& \bigg(\frac{p_i}{p_j}\bigg)^{-\epsilon}
%y_i =& \bigg(\frac{p_i}{p_j}\bigg)^{-\epsilon}y_j
%p_iy_i =& p_i\bigg(\frac{p_i}{p_j}\bigg)^{-\epsilon}y_j
%\int_0^1p_iy_idi =& \int_0^1p_i\bigg(\frac{p_i}{p_j}\bigg)^{-\epsilon}y_j di
%\int_0^1p_iy_idi =& y_j\bigg(\frac{1}{p_j}\bigg)^{-\epsilon}\int_0^1p_i^{1-\epsilon} di
%PY =& y_j\bigg(\frac{1}{p_j}\bigg)^{-\epsilon}\int_0^1p_i^{1-\epsilon} di
y_j(p_j) =& \bigg(\frac{p_j}{P}\bigg)^{-\epsilon}Y
\end{align}
where
$$ P = \bigg[\int_0^1p_i^{1-\epsilon}\bigg]^{\frac{1}{1 - \epsilon}}. $$
$$0 = i^* + \phi_X X + \phi_{\pi} \pi - \pi - \rho - g$$
$$ 0 = \rho\pi - \frac{\epsilon - 1}{\theta} \bigg(X^{1 + \psi} - 1\bigg)$$
End of explanation
params = {'epsilon': 0.02, 'g': 0.05, 'i_star': 0.05, 'phi_X': 1.0, 'phi_pi': 0.5,
'psi': 1.5, 'rho': 0.05, 'theta': 1.0, 'initial_guess': np.array([0.5, 0.5])}
find_equilibrium(np.array([0.5, 0.5]), params)
Explanation: To complete the model we need to define some parameter values.
End of explanation
pycollocation.problems.TwoPointBVP?
basic_nk_bvp = pycollocation.problems.TwoPointBVP(bcs_lower=None,
bcs_upper=terminal_condition,
number_bcs_lower=0,
number_odes=2,
params=params,
rhs=basic_nk_model,
)
Explanation: <h2>Solving the model with pyCollocation</h2>
<h3>Defining a `pycollocation.TwoPointBVP` instance</h3>
End of explanation
def initial_mesh(t, T, num, problem):
# compute equilibrium values
X_star, pi_star = find_equilibrium(initial_guess, problem.params)
ts = np.linspace(t, T, num)
Xs = X_star - (X_star - problem.params['k0']) * np.exp(-ts)
pis = pi_star - (pi_star - problem.params['k0']) * np.exp(-ts)
return ts, Xs, pis
Explanation: Finding a good initial guess for $k(t)$
Theory tells us that, starting from some initial condition $k_0$, the solution to the Solow model converges monotonically toward its long run equilibrium value $k^*$. Our initial guess for the solution should preserve this property...
End of explanation
pycollocation.solvers.Solver?
Explanation: Solving the model
End of explanation
polynomial_basis = pycollocation.basis_functions.PolynomialBasis()
solver = pycollocation.solvers.Solver(polynomial_basis)
boundary_points = (0, 100)
ts, ks, cs = initial_mesh(*boundary_points, num=1000, problem=standard_ramsey_bvp)
basis_kwargs = {'kind': 'Chebyshev', 'domain': boundary_points, 'degree': 15}
k_poly = polynomial_basis.fit(ts, ks, **basis_kwargs)
c_poly = polynomial_basis.fit(ts, cs, **basis_kwargs)
initial_coefs = np.hstack([k_poly.coef, c_poly.coef])
nodes = polynomial_basis.roots(**basis_kwargs)
solution = solver.solve(basis_kwargs, boundary_points, initial_coefs,
nodes, standard_ramsey_bvp)
ts, _, _ = initial_mesh(basis_kwargs['domain'], 1000, standard_ramsey_bvp)
k_soln, c_soln = solution.evaluate_solution(ts)
plt.plot(ts, k_soln)
plt.plot(ts, c_soln)
plt.show()
k_resids, c_resids = solution.evaluate_residual(ts)
plt.plot(ts, k_resids)
plt.plot(ts, c_resids)
plt.show()
k_normalized_resids, c_normalized_resids = solution.normalize_residuals(ts)
plt.plot(ts, np.abs(k_normalized_resids))
plt.plot(ts, np.abs(c_normalized_resids))
plt.yscale('log')
plt.show()
Explanation: <h3> Polynomial basis functions </h3>
End of explanation
bspline_basis = pycollocation.basis_functions.BSplineBasis()
solver = pycollocation.solvers.Solver(bspline_basis)
boundary_points = (0, 100)
ts, ks, cs = initial_mesh(*boundary_points, num=250, problem=standard_ramsey_bvp)
tck, u = bspline_basis.fit([ks, cs], u=ts, k=5, s=0)
knots, coefs, k = tck
initial_coefs = np.hstack(coefs)
basis_kwargs = {'knots': knots, 'degree': k, 'ext': 2}
nodes = np.linspace(*boundary_points, num=249)
solution = solver.solve(basis_kwargs, boundary_points, initial_coefs,
nodes, standard_ramsey_bvp)
ts, _, _ = initial_mesh(*boundary_points, num=1000, problem=standard_ramsey_bvp)
k_soln, c_soln = solution.evaluate_solution(ts)
plt.plot(ts, k_soln)
plt.plot(ts, c_soln)
plt.show()
k_resids, c_resids = solution.evaluate_residual(ts)
plt.plot(ts, k_resids)
plt.plot(ts, c_resids)
plt.show()
k_normalized_resids, c_normalized_resids = solution.normalize_residuals(ts)
plt.plot(ts, np.abs(k_normalized_resids))
plt.plot(ts, np.abs(c_normalized_resids))
plt.yscale('log')
plt.show()
Explanation: <h3> B-spline basis functions </h3>
End of explanation
from pycollocation.tests import models
Explanation: <h1> Generic Ramsey-Cass-Koopmans model</h1>
Can we refactor the above code so that we can solve a Ramsey-Cass-Koopmans model for arbitrary $f$ and $u$? Yes!
End of explanation
def ces_output(k, alpha, sigma, **params):
gamma = (sigma - 1) / sigma
if gamma == 0:
y = k**alpha
else:
y = (alpha * k**gamma + (1 - alpha))**(1 / gamma)
return y
def ces_mpk(k, alpha, sigma, **params):
y = ces_output(k, alpha, sigma)
gamma = (sigma - 1) / sigma
if gamma == 0:
mpk = alpha * (y / k)
else:
mpk = alpha * k**(gamma - 1) * (y / (alpha * k**gamma + (1 - alpha)))
return mpk
def crra_risk_aversion(t, c, theta, **params):
return theta
def ces_equilibrium_capital(alpha, delta, g, n, rho, sigma, theta, **params):
Steady state value for capital stock (per unit effective labor).
gamma = (sigma - 1) / sigma
if gamma == 0:  # Cobb-Douglas limit (sigma = 1), consistent with ces_output and ces_mpk above
kss = (alpha / (delta + rho + theta * g))**(1 / (1 - alpha))
else:
kss = ((1 / (1 - alpha)) * (((delta + rho + theta * g) / alpha)**(gamma / (1 - gamma)) - alpha))**(-1 / gamma)
return kss
ces_params = {'g': 0.02, 'theta': 1.0, 'n': 0.02, 'alpha': 0.15, 'delta': 0.04,
'sigma': 0.5, 'rho': 0.02, 'k0': 1.0}
generic_ramsey_bvp = models.RamseyCassKoopmansModel(crra_risk_aversion,
ces_output,
ces_equilibrium_capital,
ces_mpk,
ces_params)
polynomial_basis = pycollocation.basis_functions.PolynomialBasis()
solver = pycollocation.solvers.Solver(polynomial_basis)
boundary_points = (0, 100)
basis_kwargs = {'kind': 'Chebyshev', 'domain': boundary_points, 'degree': 15}
ts, ks, cs = initial_mesh(*boundary_points, num=1000, problem=standard_ramsey_bvp)
k_poly = polynomial_basis.fit(ts, ks, **basis_kwargs)
c_poly = polynomial_basis.fit(ts, cs, **basis_kwargs)
initial_coefs = np.hstack([k_poly.coef, c_poly.coef])
nodes = polynomial_basis.roots(**basis_kwargs)
solution = solver.solve(basis_kwargs, boundary_points, initial_coefs,
nodes, generic_ramsey_bvp)
k_soln, c_soln = solution.evaluate_solution(ts)
plt.plot(ts, k_soln)
plt.plot(ts, c_soln)
plt.show()
k_normalized_resids, c_normalized_resids = solution.normalize_residuals(ts)
plt.plot(ts, np.abs(k_normalized_resids))
plt.plot(ts, np.abs(c_normalized_resids))
plt.yscale('log')
plt.show()
Explanation: Example usage...
End of explanation |
394 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<div align="Right">
https
Step1: Print versions
Step2: Defaults
Set date and base paths
Step3: Set log level
Step4: Set URLs and file paths
Inbound URLs
Step5: Prefetched
Step6: Plot and display options
Step7: Functions | Python Code:
import os
import re
import sys
import time
import socket
import platform
import itertools
import requests as req
import logging
from imp import reload
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib as mpl
import matplotlib.cm as cm
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: <div align="Right">
https://github.com/mrola/blacklist_overlap_test<br>
</div>
Preliminaries
The purpose of this Jupyter notebook is to provide a measure on the degree to which blacklists overlap in terms of IPv4 entries. The notebook extract IPv4 addresses from public and private blacklists and then provides a barchart showing sizes per blacklist and a heatmap showing the degree of overlap between the blacklists.
The URLs of the public blacklists as well as the path to the local blacklist files are set within the notebook.
The data frame that that contains all ip addresses entries for every blacklist/source may be saved in a csv format.
This project is inspired by the combine and tiq-test project (written in R) by @alexpc, @kylemaxwell and others. While the tiq-test project provides additional quality metrics and also supports enrichment, this notebook is pure Python and at this point only provides the overlap test. That said, if overlap test is the goal then just running this Jupyter notebook will do the fetch data, extract IP addresses, calculate overlap between each pair, build heatmap steps and save data for further research. The notebook is designed to be standalone.
References:
* https://github.com/mlsecproject/combine
* https://github.com/mlsecproject/tiq-test
* http://rpubs.com/alexcpsec/tiq-test-Winter2015
Requirements
Python >= 2.7 <small>(tested with 2.7 and 3.4)</small>
Pandas > 0.16 <small>(tested with 0.16 and 0.18)</small>
http://pandas.pydata.org/pandas-docs/stable/index.html
https://github.com/pydata/pandas
Seaborn >= 0.6.0 <small>(tested with 0.6.0 and 0.7.0)</small>
http://stanford.edu/~mwaskom/software/seaborn/
https://github.com/mwaskom/seaborn
Requests > 2.5 <small>(tested with 2.5.3 and 2.9.0)</small>
http://docs.python-requests.org/en/latest/
https://github.com/kennethreitz/requests
Jupyter Notebook > 4.0: <small>(only tested with 4.0.6 but may work with older versions)</small>
https://github.com/jupyter/notebook
If the above requirements are not met then installing miniconda is probably the simplest and fastest way to get ahead. Conda will allow you to create a local "environment" that contains all the necessary packages with all dependencies met without interfering with any global Python installation
Conda is a cross-platform and Python-agnostic package manager and environment manager program that quickly installs, runs and updates packages and their dependencies and easily creates, saves, loads and switches between environments on your local computer. Conda is included in all versions of Anaconda, Miniconda and Anaconda Server.
Reference: http://conda.pydata.org/miniconda.html
Import libraries
End of explanation
print('\nPython version: %s' % platform.python_version())
print('Pandas version: %s' % pd.__version__)
print('Matplotlib version: %s' % mpl.__version__)
print('Seaborn version: %s' % sns.__version__)
print('Requests version: %s' % req.__version__)
Explanation: Print versions
End of explanation
# Today is used as date unless DATE is set here, format"YYYY-MM-DD"
DATE = None
# If True, save data to DIR_OUTPOUT_* as defined below
SAVE = True
# GET_URLS=True means that ioc data is fetched from public/internet sources, see inbound URLs section
GET_URLS = True
# READ_PREFETCH=True means that ioc data is fetched from private/local files, see Prefetched section
READ_PREFETCH = False
# TIMEOUT - number of seconds Requests will wait for a response from the server (both connect and between reads)
TIMEOUT = 4
# ANNOTATE - Set to True if actual value should be written in each heatmap cell
ANNOTATE=True
# Paths
datadir = os.getcwd() + '/../data/'
DIR_OUTPUT_URL = datadir + 'public_inbound/output/'
DIR_OUTPUT_PREFETCHED = datadir + 'private_inbound/output/'
DIR_INPUT_PREFETCHED = datadir + 'private_inbound/input/'
Explanation: Defaults
Set date and base paths
End of explanation
# Set level to one of DEBUG, INFO, WARNING, ERROR
reload(logging)
logging.basicConfig(level=logging.INFO, format='%(asctime)s %(name)-4s: %(levelname)-2s %(message)s', \
datefmt='%Y-%m-%d %H:%M:%S', stream=sys.stdout)
Explanation: Set log level
End of explanation
# Key: Description of source.
# Value: URL to be fetched
inbound_urls = {
'badips.http': 'https://www.badips.com/get/list/http/3?age=2w',
'badips.postfix': 'https://www.badips.com/get/list/postfix/3?age=2w',
'badips.ssh': 'https://www.badips.com/get/list/ssh/3?age=2w',
'openbl.base': 'http://www.openbl.org/lists/base.txt',
'malwaredomainlist.hostslist': 'http://www.malwaredomainlist.com/hostslist/ip.txt',
'malc0de.ip_blacklist': 'http://malc0de.com/bl/IP_Blacklist.txt',
'blocklist.all': 'http://lists.blocklist.de/lists/all.txt',
'spamhaus.drop': 'http://www.spamhaus.org/drop/drop.txt',
'spamhaus.edrop': 'https://www.spamhaus.org/drop/edrop.txt',
'emergingthreats.compromised': 'http://rules.emergingthreats.net/blockrules/compromised-ips.txt',
'emergingthreats.emerging': 'http://rules.emergingthreats.net/fwrules/emerging-Block-IPs.txt',
'palevotracker.ipblocklist': 'https://palevotracker.abuse.ch/blocklists.php?download=ipblocklist',
'feodotracker.ipblocklist': 'https://feodotracker.abuse.ch/blocklist/?download=ipblocklist',
'feodotracker.badips': 'https://feodotracker.abuse.ch/blocklist/?download=badips',
'blutmagie.tor.exit': 'http://torstatus.blutmagie.de/ip_list_exit.php/Tor_ip_list_EXIT.csv',
'blutmagie.tor.all': 'http://torstatus.blutmagie.de/ip_list_all.php/Tor_ip_list_ALL.csv',
'dan.me.torlist': 'https://www.dan.me.uk/torlist/',
'malcode.database': 'http://malc0de.com/database/',
'autoshun.shunlist': 'http://www.autoshun.org/files/shunlist.csv',
'rulez.blist': 'http://danger.rulez.sk/projects/bruteforceblocker/blist.php',
'dragonresearch.vnc': 'https://www.dragonresearchgroup.org/insight/vncprobe.txt',
'dragonresearhc.http': 'https://www.dragonresearchgroup.org/insight/http-report.txt',
'dragonresearch.ssh': 'https://www.dragonresearchgroup.org/insight/sshpwauth.txt',
'alienvault.generic': 'https://reputation.alienvault.com/reputation.generic',
'sslbl.sslipblacklist': 'https://sslbl.abuse.ch/blacklist/sslipblacklist.csv',
'zeustracker.badips': 'https://zeustracker.abuse.ch/blocklist.php?download=badips'
}
inbound_urls_test = {
'badips.ssh': 'https://www.badips.com/get/list/ssh/3?age=2w',
'rulez.blist': 'http://danger.rulez.sk/projects/bruteforceblocker/blist.php',
'malwaredomainlist_ip.txt': 'http://www.malwaredomainlist.com/hostslist/ip.txt',
'malcode.database': 'http://malc0de.com/database/'
}
Explanation: Set URLs and file paths
Inbound URLs
End of explanation
# Key: Local file to be read
# Value: Description of source.
#
# Example entries, zeus_ipblocklist.ioc:
# 101.0.89.3,zeustracker
# 101.200.81.187,zeustracker
inbound_prefetched = {
DIR_INPUT_PREFETCHED + 'compromised-ips.ioc': 'compromised-ips.ioc',
DIR_INPUT_PREFETCHED + 'ips.ioc': 'ips.ioc',
DIR_INPUT_PREFETCHED + 'zeus_ipblocklist.ioc': 'zeus_ipblocklist.ioc'
}
inbound_prefetched_test = {
DIR_INPUT_PREFETCHED + 'compromised-ips.ioc': 'compromised-ips.ioc',
DIR_INPUT_PREFETCHED + 'compromised-ips_test.ioc': 'compromised-ips_test.ioc',
DIR_INPUT_PREFETCHED + 'zeus_ipblocklist.ioc': 'zeus_ipblocklist.ioc',
DIR_INPUT_PREFETCHED + 'zeus_ipblocklist_test.ioc': 'zeus_ipblocklist_test.ioc'
}
Explanation: Prefetched
End of explanation
# Pandas - global display options
pd.set_option('display.width', 120)
pd.set_option('max_colwidth', 0)
# Seaborn - plot options
sns.set()
sns.set(style="whitegrid", rc={"figure.figsize": (14, 8)})
sns.set_palette("bone")
sns.palplot(sns.color_palette())
Explanation: Plot and display options
End of explanation
def set_date():
if DATE:
return DATE
else:
return(time.strftime("%Y-%m-%d"))
# do_pandas()
# Takes a list of IPv4 addresses as input and stores those in a Pandas DataFrame
#
# DataFrame columns: "entity","type","direction","source","notes","date"
# "1.234.27.146","IPv4","inbound","http://malc0de.com/bl/IP_Blacklist.txt","","2016-01-27
#
# DATE is set to today, override this in Defaults section above if needed
def do_pandas(df, ips, name):
df_ips = pd.DataFrame()
date = set_date()
tup = (ips, 'IPv4', 'inbound',name, "", date)
(df_ips['entity'], df_ips['type'], df_ips['direction'], \
df_ips['source'], df_ips['notes'], df_ips['date']) = tup
df = df.append(df_ips, ignore_index=True)
return df
# valid_ip()
# Checks if an IPv4 address is valid
def valid_ip(address):
logger = logging.getLogger('valid_ip')
try:
socket.inet_aton(address)
except:
logger.warning("Invalid address: %s" % address)
return False
return True
# parse_content()
# Extract IPv4 address from a list of rows
ipv4 = re.compile(r"\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}")
def parse_content(source):
logger = logging.getLogger('parse_content')
ips = []
for line in source:
try:
m = re.search(ipv4, line.decode('utf-8'))
if m:
address = m.group(0)
if valid_ip(address):
ips.append(address)
except UnicodeDecodeError as e:
logger.warning("utf-8 decode failure. Skipping line...")
pass
except Exception as e:
logger.error("Unexpected exception. Skipping line. %s" % e)
pass
if ips:
return ips
else:
return False
# get_url()
# Uses the request library to GET URLs defined in the "inbound_urls" dictionary.
def get_url(urls, df):
logger = logging.getLogger('get_url')
fail_count = 0
for name, url in iter(urls.items()):
logger.info('Fetching: %s' % url)
try:
r = req.get(url, timeout=TIMEOUT, headers={'User-Agent': 'Mozilla/5.0'})
if r.status_code == 200:
logger.debug('Got status 200 back...')
ips = parse_content(r.content.splitlines())
if ips:
df = do_pandas(df, ips, name)
elif ips is False:
logger.warning('Found no valid ipv4 addresses.')
else:
logger.warning ('Got status %d' % r.status_code)
except req.ConnectionError as e:
logger.error('Failed to fetch url due connectivity issues.')
logger.error('%s' % e)
fail_count += 1
if fail_count > 2:
logger.error('\nConnectivity issues assumed to be permanent. Will abort.')
break
except Exception as e:
logger.error('Failed to fetch url.\nError msg: %s' % e)
return df
# get_prefetched()
# Read files defined in the "inbound_prefetched" dictionary.
def get_prefetched(files, df):
logger = logging.getLogger('get_prefetched')
dflist = []
logger.info('Reading data...')
for filen, description in iter(files.items()):
if not os.path.exists(filen):
logger.warning('Failed to read data from:\n\t%s...' % os.path.relpath(filen))
else:
try:
logger.info('%s' % os.path.relpath(filen))
with open(filen, 'rb') as f:
ips = parse_content(f.readlines())
if ips:
df = do_pandas(df, ips, description)
else:
logger.warning('Failed to find valid entries.')
except Exception as e:
logger.error('Caught exception: %s\nAbort...' % e)
break
return df
# fill_heatmap()
# Calculate proportion of items in intersection between two blacklists to each blacklist per se.
# dfp: contains data for calculations.
# df_heat: put results in this frame.
# cols: pair of columns (blacklists) used as input to calculations.
def fill_heatmap(cols, dfp, df_heat):
s = dfp.eq(dfp[cols[0]], axis='index')[cols].all(1)
common = s[s.values == True].count()
col0_sum = dfp[cols[0]].sum()
col1_sum = dfp[cols[1]].sum()
df_heat[cols[0]].loc[cols[1]] = common/col0_sum
df_heat[cols[1]].loc[cols[0]] = common/col1_sum
# do_heatframes()
# Create frames used in calculation of overlap.
# dfp: DataFrame with ipv4 as index and blacklist as columns. Used to find entries in common
# df_heat: DataFrame that will contain the actual overlap values
# colpairs: list of 2-tuples where each tuple contains a unique pair of blacklists
def do_heatframes(df):
df['one'] = 1
dfp = pd.pivot_table(df, values='one', index=['entity'], columns=['source'])
df_heat = pd.DataFrame({'contains': pd.unique(df.source), 'is contained': pd.unique(df.source)})
df_heat['diag'] = 1
df_heat = df_heat.pivot('contains','is contained', 'diag')
colpairs = itertools.combinations(pd.unique(df.source), 2)
for colpair in colpairs:
fill_heatmap(list(colpair), dfp, df_heat)
return df_heat
# plot_counts()
# Barchart showing size of each blacklist feed
def plot_counts(df):
gby = df.groupby(["source"])
s = gby.size().sort_values(ascending=False)
sns.set(style="whitegrid", font_scale=1.0, rc={"figure.figsize": (14, 4)})
ax = sns.barplot(orient='h', x=s, y=s.index, palette="bone")
# ax = sns.countplot(y="source", data=df.sort_index(axis=1, ascending=False), palette="bone");
ax.set(title="Barplot showing the count of entries per source - %s\n" % (set_date()));
plt.show()
# plot_heat()
# Heatmap showing the overlap between blacklist feeds
def plot_heat(df):
df_heat = do_heatframes(df)
sns.set(style="whitegrid", font_scale=1.0, rc={"figure.figsize": (14, 4)})
asize = None
if df_heat.shape[0] > 10:
asize = {'size': 7}
ax = sns.heatmap(df_heat, linewidths=.5, annot_kws=asize, annot=ANNOTATE, cmap="bone");
ax.set(title="Overlap test - heatmap showing overlap between blacklists - %s\n" % (set_date()))
plt.xticks(rotation=40, horizontalalignment='right');
plt.show()
# show_info()
# Print some info to verify result.
def show_info(df):
logger = logging.getLogger('show_info')
logger.info('>>General info to verify everything is ok <<')
logger.info('\n\nVerify we got all sources:\n%s\n' % pd.Series(pd.unique(df.source)))
logger.info('First few frame rows:\n%s\n' % df.head())
logger.info('Frame contains %d entries.\n\n' % df.shape[0])
# save_frame()
# Write to .csv
def save_frame(df, path):
logger = logging.getLogger('save_frame')
date = set_date()
udate = date.replace('-', '')
savepath = path + udate + '.csv'
if not os.path.exists(path):
logger.warning("Failed to find path: %s" % path)
logger.info("Setting path to '/tmp/'")
savepath = '/tmp/' + udate + '.csv'
logger.info("Attempting to save frame...")
try:
df.to_csv(savepath, index=False)
logger.info("Successfully saved frame to:\n\t%s" % os.path.relpath(savepath))
except Exception as e:
logger.error("%s\n" % e)
# wrapitup()
#
def wrapitup(df, dir_output):
logger = logging.getLogger('wrapitup')
if df.values.size > 0:
show_info(df)
plot_counts(df)
if (len(pd.unique(df.source)) > 1):
print("\n\n")
plot_heat(df)
else:
logger.info("Only got a single blacklist feed. No overlap to display.")
if SAVE:
save_frame(df, dir_output)
else:
logger.warning("Got empty data frame...")
print('\nDone!\n\n')
# main()
#
def main():
logger = logging.getLogger('main')
cols = ["entity","type","direction","source","notes","date"]
if GET_URLS:
print("\n\n>>>> Fetching public inbound blacklisted IPv4 addresses from URLs <<<<\n")
df = pd.DataFrame(columns=cols)
df = get_url(inbound_urls, df)
wrapitup(df, DIR_OUTPUT_URL)
if READ_PREFETCH:
print("\n\n>>>> Fetching private inbound blacklisted IPv4 addresses from disk <<<<\n")
df = pd.DataFrame(columns=cols)
df = get_prefetched(inbound_prefetched, df)
wrapitup(df, DIR_OUTPUT_PREFETCHED)
main()
Explanation: Functions
End of explanation |
395 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exploring the Lorenz System of Differential Equations
In this Notebook we explore the Lorenz system of differential equations
Step2: Computing the trajectories and plotting the result
We define a function that can integrate the differential equations numerically and then plot the solutions. This function has arguments that control the parameters of the differential equation ($\sigma$, $\beta$, $\rho$), the numerical integration (N, max_time) and the visualization (angle).
Step3: Let's call the function once to view the solutions. For this set of parameters, we see the trajectories swirling around two points, called attractors.
Step4: Using IPython's interactive function, we can explore how the trajectories behave as we change the various parameters.
Step5: The object returned by interactive is a Widget object and it has attributes that contain the current result and arguments
Step6: After interacting with the system, we can take the result and perform further computations. In this case, we compute the average positions in $x$, $y$ and $z$.
Step7: Creating histograms of the average positions (across different trajectories) show that on average the trajectories swirl about the attractors. | Python Code:
%matplotlib inline
from ipywidgets import interact, interactive
from IPython.display import clear_output, display, HTML
import numpy as np
from scipy import integrate
from matplotlib import pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from matplotlib.colors import cnames
from matplotlib import animation
Explanation: Exploring the Lorenz System of Differential Equations
In this Notebook we explore the Lorenz system of differential equations:
$$
\begin{aligned}
\dot{x} & = \sigma(y-x) \\
\dot{y} & = \rho x - y - xz \\
\dot{z} & = -\beta z + xy
\end{aligned}
$$
This is one of the classic systems in non-linear differential equations. It exhibits a range of different behaviors as the parameters ($\sigma$, $\beta$, $\rho$) are varied.
Imports
First, we import the needed things from IPython, NumPy, Matplotlib and SciPy.
End of explanation
def solve_lorenz(N=10, angle=0.0, max_time=4.0, sigma=10.0, beta=8./3, rho=28.0):
fig = plt.figure()
ax = fig.add_axes([0, 0, 1, 1], projection='3d')
ax.axis('off')
# prepare the axes limits
ax.set_xlim((-25, 25))
ax.set_ylim((-35, 35))
ax.set_zlim((5, 55))
def lorenz_deriv(x_y_z, t0, sigma=sigma, beta=beta, rho=rho):
Compute the time-derivative of a Lorenz system.
x, y, z = x_y_z
return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]
# Choose random starting points, uniformly distributed from -15 to 15
np.random.seed(1)
x0 = -15 + 30 * np.random.random((N, 3))
# Solve for the trajectories
t = np.linspace(0, max_time, int(250*max_time))
x_t = np.asarray([integrate.odeint(lorenz_deriv, x0i, t)
for x0i in x0])
# choose a different color for each trajectory
colors = plt.cm.viridis(np.linspace(0, 1, N))
for i in range(N):
x, y, z = x_t[i,:,:].T
lines = ax.plot(x, y, z, '-', c=colors[i])
plt.setp(lines, linewidth=2)
ax.view_init(30, angle)
plt.show()
return t, x_t
Explanation: Computing the trajectories and plotting the result
We define a function that can integrate the differential equations numerically and then plot the solutions. This function has arguments that control the parameters of the differential equation ($\sigma$, $\beta$, $\rho$), the numerical integration (N, max_time) and the visualization (angle).
End of explanation
t, x_t = solve_lorenz(angle=0, N=10)
Explanation: Let's call the function once to view the solutions. For this set of parameters, we see the trajectories swirling around two points, called attractors.
End of explanation
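The two points the trajectories swirl around can also be written down in closed form: for $\rho > 1$ the non-trivial fixed points are $C_\pm = (\pm\sqrt{\beta(\rho-1)}, \pm\sqrt{\beta(\rho-1)}, \rho-1)$. A small supplementary check for the default parameters used above:
# Supplementary sketch: closed-form fixed points for the default Lorenz parameters.
import numpy as np
sigma, beta, rho = 10.0, 8.0 / 3.0, 28.0
r = np.sqrt(beta * (rho - 1.0))
print([(r, r, rho - 1.0), (-r, -r, rho - 1.0)])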
w = interactive(solve_lorenz, angle=(0.,360.), max_time=(0.1, 4.0),
N=(0,50), sigma=(0.0,50.0), rho=(0.0,50.0))
display(w)
Explanation: Using IPython's interactive function, we can explore how the trajectories behave as we change the various parameters.
End of explanation
t, x_t = w.result
w.kwargs
Explanation: The object returned by interactive is a Widget object and it has attributes that contain the current result and arguments:
End of explanation
xyz_avg = x_t.mean(axis=1)
xyz_avg.shape
Explanation: After interacting with the system, we can take the result and perform further computations. In this case, we compute the average positions in $x$, $y$ and $z$.
End of explanation
plt.hist(xyz_avg[:,0])
plt.title('Average $x(t)$');
plt.hist(xyz_avg[:,1])
plt.title('Average $y(t)$');
Explanation: Creating histograms of the average positions (across different trajectories) shows that on average the trajectories swirl about the attractors.
End of explanation |
396 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
机器学习工程师纳米学位
监督学习
项目 2
Step1: 练习
Step2: 数据准备
在这个部分中,我们将要为建模、训练和测试准备数据
识别特征和目标列
你获取的数据中通常都会包含一些非数字的特征,这会导致一些问题,因为大多数的机器学习算法都会期望输入数字特征进行计算。
运行下面的代码单元将学生数据分成特征和目标列看一看他们中是否有非数字特征。
Step3: 预处理特征列
正如你所见,我们这里有几个非数值的列需要做一定的转换!它们中很多是简单的yes/no,比如internet。这些可以合理地转化为1/0(二元值,binary)值。
其他的列,如Mjob和Fjob,有两个以上的值,被称为_分类变量(categorical variables)_。处理这样的列的推荐方法是创建和可能值一样多的列(如:Fjob_teacher,Fjob_other,Fjob_services等),然后将其中一个的值设为1另外的设为0。
这些创建的列有时候叫做 虚拟变量(dummy variables),我们将用pandas.get_dummies()函数来完成这个转换。运行下面代码单元的代码来完成这里讨论的预处理步骤。
Step4: 实现
Step5: 训练和评价模型
在这个部分,你将选择3个适合这个问题并且在scikit-learn中已有的监督学习的模型。首先你需要说明你选择这三个模型的原因,包括这些数据集有哪些特点,每个模型的优点和缺点各是什么。然后,你需要将这些模型用不同大小的训练集(100个数据点,200个数据点,300个数据点)进行训练,并用F<sub>1</sub>的值来衡量。你需要制作三个表,每个表要显示训练集大小,训练时间,预测时间,训练集上的F<sub>1</sub>值和测试集上的F<sub>1</sub>值(每个模型一个表)。
这是目前 scikit-learn 里有的监督学习模型,你可以从中选择
Step6: 练习
Step7: 结果表格
编辑下面的表格看看在Markdown中如何设计一个表格。你需要把上面的结果记录在表格中。
分类器 1 - ?
| 训练集大小 | 训练时间 | 预测时间 (测试) | F1值 (训练) | F1值 (测试) |
| | Python Code:
# 载入所需要的库
import numpy as np
import pandas as pd
from time import time
from sklearn.metrics import f1_score
# 载入学生数据集
student_data = pd.read_csv("student-data.csv")
print "Student data read successfully!"
Explanation: 机器学习工程师纳米学位
监督学习
项目 2: 搭建一个学生干预系统
欢迎来到机器学习工程师纳米学位的第二个项目!在此文件中,有些示例代码已经提供给你,但你还需要实现更多的功能让项目成功运行。除非有明确要求,你无须修改任何已给出的代码。以'练习'开始的标题表示接下来的代码部分中有你必须要实现的功能。每一部分都会有详细的指导,需要实现的部分也会在注释中以'TODO'标出。请仔细阅读所有的提示!
除了实现代码外,你还必须回答一些与项目和你的实现有关的问题。每一个需要你回答的问题都会以'问题 X'为标题。请仔细阅读每个问题,并且在问题后的'回答'文字框中写出完整的答案。我们将根据你对问题的回答和撰写代码所实现的功能来对你提交的项目进行评分。
提示:Code 和 Markdown 区域可通过 Shift + Enter 快捷键运行。此外,Markdown可以通过双击进入编辑模式。
问题 1 - 分类 vs. 回归
在这个项目中你的任务是找出那些如果不给予帮助,最终可能无法毕业的学生。你觉得这个问题是哪种类型的监督学习问题,是分类问题还是回归问题?为什么?
答案:
分析数据
运行下面区域的代码以载入学生数据集,以及一些此项目所需的Python库。注意数据集的最后一列'passed'是我们的预测的目标(表示学生是毕业了还是没有毕业),其他的列是每个学生的属性。
End of explanation
# TODO: 计算学生的数量
n_students = None
# TODO: 计算特征数量
n_features = None
# TODO: 计算通过的学生数
n_passed = None
# TODO: 计算未通过的学生数
n_failed = None
# TODO: 计算通过率
grad_rate = None
# 输出结果
print "Total number of students: {}".format(n_students)
print "Number of features: {}".format(n_features)
print "Number of students who passed: {}".format(n_passed)
print "Number of students who failed: {}".format(n_failed)
print "Graduation rate of the class: {:.2f}%".format(grad_rate)
Explanation: 练习: 分析数据
我们首先通过调查数据,以确定有多少学生的信息,并了解这些学生的毕业率。在下面的代码单元中,你需要完成如下的运算:
- 学生的总数, n_students。
- 每个学生的特征总数, n_features。
- 毕业的学生的数量, n_passed。
- 未毕业的学生的数量, n_failed。
- 班级的毕业率, grad_rate, 用百分数表示(%)。
End of explanation
# 提取特征列
feature_cols = list(student_data.columns[:-1])
# 提取目标列 ‘passed’
target_col = student_data.columns[-1]
# 显示列的列表
print "Feature columns:\n{}".format(feature_cols)
print "\nTarget column: {}".format(target_col)
# 将数据分割成特征数据和目标数据(即X_all 和 y_all)
X_all = student_data[feature_cols]
y_all = student_data[target_col]
# 通过打印前5行显示特征信息
print "\nFeature values:"
print X_all.head()
Explanation: 数据准备
在这个部分中,我们将要为建模、训练和测试准备数据
识别特征和目标列
你获取的数据中通常都会包含一些非数字的特征,这会导致一些问题,因为大多数的机器学习算法都会期望输入数字特征进行计算。
运行下面的代码单元将学生数据分成特征和目标列看一看他们中是否有非数字特征。
End of explanation
def preprocess_features(X):
''' 预处理学生数据,将非数字的二元特征转化成二元值(0或1),将分类的变量转换成虚拟变量
'''
# 初始化一个用于输出的DataFrame
output = pd.DataFrame(index = X.index)
# 查看数据的每一个特征列
for col, col_data in X.iteritems():
# 如果数据是非数字类型,将所有的yes/no替换成1/0
if col_data.dtype == object:
col_data = col_data.replace(['yes', 'no'], [1, 0])
# 如果数据类型是类别的(categorical),将它转换成虚拟变量
if col_data.dtype == object:
# 例子: 'school' => 'school_GP' and 'school_MS'
col_data = pd.get_dummies(col_data, prefix = col)
# 收集转换后的列
output = output.join(col_data)
return output
X_all = preprocess_features(X_all)
print "Processed feature columns ({} total features):\n{}".format(len(X_all.columns), list(X_all.columns))
Explanation: Preprocess feature columns
As you can see, there are several non-numeric columns that need to be converted! Many of them are simply yes/no, e.g. internet. These can be reasonably converted into 1/0 (binary) values.
Other columns, like Mjob and Fjob, have more than two values and are known as categorical variables. The recommended way to handle such a column is to create as many columns as possible values (e.g. Fjob_teacher, Fjob_other, Fjob_services, etc.), and assign 1 to one of them and 0 to the others.
These generated columns are sometimes called dummy variables, and we will use the pandas.get_dummies() function to perform this transformation. Run the code cell below to carry out the preprocessing steps discussed here.
End of explanation
# TODO:在这里导入你可能需要使用的另外的功能
# TODO:设置训练集的数量
num_train = None
# TODO:设置测试集的数量
num_test = X_all.shape[0] - num_train
# TODO:把数据集混洗和分割成上面定义的训练集和测试集
X_train = None
X_test = None
y_train = None
y_test = None
# 显示分割的结果
print "Training set has {} samples.".format(X_train.shape[0])
print "Testing set has {} samples.".format(X_test.shape[0])
Explanation: Implementation: Training and Testing Data Split
So far, all categorical features have been converted into numeric values. The next step is to split the data (both features and their corresponding labels) into training and test sets. In the code cell below, you will need to:
- Randomly shuffle and split the data (X_all, y_all) into training and testing subsets.
- Use 300 data points for the training set (approximately 76%) and 95 data points for the test set (approximately 24%).
- Set a random_state for the function(s) you use, if provided.
- Store the results in X_train, X_test, y_train and y_test.
End of explanation
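For readers who want a concrete picture of the completed cell, here is one possible sketch using scikit-learn (an illustrative example, not the project's official solution; the stratify argument is an optional extra):
# One possible way to fill in the TODOs above (illustrative sketch only).
from sklearn.model_selection import train_test_split
num_train = 300
num_test = X_all.shape[0] - num_train
X_train, X_test, y_train, y_test = train_test_split(
    X_all, y_all, train_size=num_train, test_size=num_test,
    stratify=y_all, random_state=42)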
def train_classifier(clf, X_train, y_train):
''' 用训练集训练分类器 '''
# 开始计时,训练分类器,然后停止计时
start = time()
clf.fit(X_train, y_train)
end = time()
# Print the results
print "Trained model in {:.4f} seconds".format(end - start)
def predict_labels(clf, features, target):
''' 用训练好的分类器做预测并输出F1值'''
# 开始计时,作出预测,然后停止计时
start = time()
y_pred = clf.predict(features)
end = time()
# 输出并返回结果
print "Made predictions in {:.4f} seconds.".format(end - start)
return f1_score(target.values, y_pred, pos_label='yes')
def train_predict(clf, X_train, y_train, X_test, y_test):
''' 用一个分类器训练和预测,并输出F1值 '''
# 输出分类器名称和训练集大小
print "Training a {} using a training set size of {}. . .".format(clf.__class__.__name__, len(X_train))
# 训练一个分类器
train_classifier(clf, X_train, y_train)
# 输出训练和测试的预测结果
print "F1 score for training set: {:.4f}.".format(predict_labels(clf, X_train, y_train))
print "F1 score for test set: {:.4f}.".format(predict_labels(clf, X_test, y_test))
Explanation: 训练和评价模型
在这个部分,你将选择3个适合这个问题并且在scikit-learn中已有的监督学习的模型。首先你需要说明你选择这三个模型的原因,包括这些数据集有哪些特点,每个模型的优点和缺点各是什么。然后,你需要将这些模型用不同大小的训练集(100个数据点,200个数据点,300个数据点)进行训练,并用F<sub>1</sub>的值来衡量。你需要制作三个表,每个表要显示训练集大小,训练时间,预测时间,训练集上的F<sub>1</sub>值和测试集上的F<sub>1</sub>值(每个模型一个表)。
这是目前 scikit-learn 里有的监督学习模型,你可以从中选择:
- Gaussian Naive Bayes (GaussianNB) 朴素贝叶斯
- Decision Trees 决策树
- Ensemble Methods (Bagging, AdaBoost, Random Forest, Gradient Boosting)
- K-Nearest Neighbors (KNeighbors)
- Stochastic Gradient Descent (SGDC)
- Support Vector Machines (SVM) 向量模型机
- Logistic Regression 逻辑回归
问题 2 - 应用模型
列出三个适合这个问题的监督学习算法模型。每一个你选择的模型:
描述一个该模型在真实世界的一个应用场景。(你需要为此做点研究,并给出你的引用出处)
这个模型的优势是什么?他什么情况下表现最好?
这个模型的缺点是什么?什么条件下它表现很差?
根据我们当前数据集的特点,为什么这个模型适合这个问题。
回答:
准备
运行下面的代码单元以初始化三个帮助函数,这三个函数将能够帮你训练和测试你上面所选择的三个监督学习算法。这些函数是:
- train_classifier - 输入一个分类器和训练集,用数据来训练这个分类器。
- predict_labels - 输入一个训练好的分类器、特征以及一个目标标签,这个函数将帮你做预测并给出F<sub>1</sub>的值.
- train_predict - 输入一个分类器以及训练集和测试集,它可以运行train_clasifier和predict_labels.
- 这个函数将分别输出训练集的F<sub>1</sub>值和测试集的F<sub>1</sub>值
End of explanation
# TODO:从sklearn中引入三个监督学习模型
# from sklearn import model_A
# from sklearn import model_B
# from skearln import model_C
# TODO:初始化三个模型
clf_A = None
clf_B = None
clf_C = None
# TODO:设置训练集大小
X_train_100 = None
y_train_100 = None
X_train_200 = None
y_train_200 = None
X_train_300 = None
y_train_300 = None
# TODO:对每一个分类器和每一个训练集大小运行'train_predict'
# train_predict(clf, X_train, y_train, X_test, y_test)
Explanation: 练习: 模型评价指标
借助于上面定义的函数,你现在需要导入三个你选择的监督学习模型,然后为每一个模型运行train_predict函数。请记住,对于每一个模型你需要在不同大小的训练集(100,200和300)上进行训练和测试。所以,你在下面应该会有9个不同的输出(每个模型都有训练集大小不同的三个输出)。在接下来的代码单元中,你将需要实现以下功能:
- 引入三个你在上面讨论过的监督式学习算法模型。
- 初始化三个模型并将它们存储在clf_A, clf_B 和 clf_C中。
- 如果可能对每一个模型都设置一个random_state。
- 注意: 这里先使用每一个模型的默认参数,在接下来的部分中你将需要对某一个模型的参数进行调整。
- 创建不同大小的训练集用来训练每一个模型。
- 不要再混洗和再分割数据!新的训练集要取自X_train和y_train.
- 对于每一个模型要用不同大小的训练集来训练它,然后在测试集上做测试(总共需要9次训练测试)
注意: 在下面的代码单元后面我们提供了三个表用来存储你的结果。
End of explanation
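To make the shape of the requested code concrete, here is an illustrative sketch; the three model choices are examples only, not recommendations, and train_predict is the helper defined earlier:
# Illustrative sketch of the exercise above; adapt the model choices to your own analysis.
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
clf_A = GaussianNB()
clf_B = DecisionTreeClassifier(random_state=42)
clf_C = SVC(random_state=42)
for clf in [clf_A, clf_B, clf_C]:
    for size in [100, 200, 300]:
        train_predict(clf, X_train[:size], y_train[:size], X_test, y_test)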
# TODO: 导入 'GridSearchCV' 和 'make_scorer'
# TODO:创建你希望调整的参数列表
parameters = None
# TODO:初始化分类器
clf = None
# TODO:用'make_scorer'创建一个f1评分函数
f1_scorer = None
# TODO:在分类器上使用f1_scorer作为评分函数运行网格搜索
grid_obj = None
# TODO: Fit the grid search object to the training data and find the optimal parameters
# TODO:用训练集训练grid search object来寻找最佳参数
grid_obj = None
# Get the estimator
# 得到预测的结果
clf = grid_obj.best_estimator_
# Report the final F1 score for training and testing after parameter tuning
# 输出经过调参之后的训练集和测试集的F1值
print "Tuned model has a training F1 score of {:.4f}.".format(predict_labels(clf, X_train, y_train))
print "Tuned model has a testing F1 score of {:.4f}.".format(predict_labels(clf, X_test, y_test))
Explanation: 结果表格
编辑下面的表格看看在Markdown中如何设计一个表格。你需要把上面的结果记录在表格中。
分类器 1 - ?
| 训练集大小 | 训练时间 | 预测时间 (测试) | F1值 (训练) | F1值 (测试) |
| :---------------: | :---------------------: | :--------------------: | :--------------: | :-------------: |
| 100 | | | | |
| 200 | EXAMPLE | | | |
| 300 | | | | EXAMPLE |
分类器 2 - ?
| 训练集大小 | 训练时间 | 预测时间 (测试) | F1值 (训练) | F1值 (测试) |
| :---------------: | :---------------------: | :--------------------: | :--------------: | :-------------: |
| 100 | | | | |
| 200 | EXAMPLE | | | |
| 300 | | | | EXAMPLE |
分类器 3 - ?
| 训练集大小 | 训练时间 | 预测时间 (测试) | F1值 (训练) | F1值 (测试) |
| :---------------: | :---------------------: | :--------------------: | :--------------: | :-------------: |
| 100 | | | | |
| 200 | EXAMPLE | | | |
| 300 | | | | EXAMPLE |
选择最佳模型
在最后这一部分中,你将从三个监督学习模型中选择一个用在学生数据上的最佳模型。然后你将在最佳模型上用全部的训练集(X_train和y_train)运行一个网格搜索算法,在这个过程中,你要至少调整一个参数以提高模型的F<sub>1</sub>值(相比于没有调参的模型的分值有所提高)。
问题 3 - 选择最佳模型
给予你上面做的实验,用一到两段话,向(学校)监事会解释你将选择哪个模型作为最佳的模型。哪个模型在现有的数据,有限的资源、开支和模型表现综合来看是最好的选择?
回答:
问题 4 - 用通俗的语言解释模型
用一到两段话,向(学校)监事会用外行也听得懂的话来解释最终模型是如何工作的。你需要解释所选模型的主要特点。例如,这个模型是怎样被训练的,它又是如何做出预测的。避免使用高级的数学或技术术语,不要使用公式或特定的算法名词。
回答:
练习: 模型调参
细调选择的模型的参数。使用网格搜索(GridSearchCV)来至少调整模型的重要参数(至少调整一个),这个参数至少需给出并尝试3个不同的值。你要使用整个训练集来完成这个过程。在接下来的代码单元中,你需要实现以下功能:
- 导入 sklearn.model_selection.GridSearchCV 和 sklearn.metrics.make_scorer.
- 创建一个对于这个模型你希望调整参数的字典。
- 例如: parameters = {'parameter' : [list of values]}。
- 初始化你选择的分类器,并将其存储在clf中。
- 使用make_scorer 创建F<sub>1</sub>评分函数并将其存储在f1_scorer中。
- 需正确设定参数pos_label的值!
- 在分类器clf上用f1_scorer 作为评价函数运行网格搜索,并将结果存储在grid_obj中。
- 用训练集(X_train, y_train)训练grid search object,并将结果存储在grid_obj中。
End of explanation |
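To illustrate the grid-search exercise described above, here is one possible sketch; the classifier and parameter grid are placeholders chosen for illustration, so substitute the model you actually selected:
# Illustrative sketch of the tuning exercise (placeholder model and grid).
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import make_scorer, f1_score
from sklearn.tree import DecisionTreeClassifier
parameters = {'max_depth': [2, 4, 6]}
clf = DecisionTreeClassifier(random_state=42)
f1_scorer = make_scorer(f1_score, pos_label='yes')
grid_obj = GridSearchCV(clf, parameters, scoring=f1_scorer)
grid_obj = grid_obj.fit(X_train, y_train)
clf = grid_obj.best_estimator_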
397 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Feature engineering with pandas and scikit-learn
This notebook demonstrates how to use Notebooks to perform feature engineering on a dataset using Pandas.
For each dataset, you will load the data into a Pandas DataFrame, clean and transform the columns into a usable format, and then restructure the data into feature and target data columns.
Before you jump in, let's cover some of the different tools you'll be using
Step1: Define constants
Define the name of your Google Cloud Storage bucket where the cleaned data is stored.
PROJECT_ID
Step2: List the files
Since the data cleaning job outputted multiple partioned files into the GCS bucket, you will need to loop through each file to access its contents. The following cell will create a list of the files with the BLOB_PREFIX defined above so they can be referenced later when loading the data into a dataframe.
Step3: Load the files into a dataframe
Now, you can load the files into a dataframe.
First, define the schema. From this dataset, you will need 4 columns
Step4: Next, run the following cell to loop through the files in GCS, create a Pandas DataFrame, and view the first ten rows. The columns needed are the 1st, 2nd, 3rd, and 7th columns from left to right (starting with 0 at tripduration) when looking at the table in BigQuery.
Step5: Extract features
Reformat the data
The following cell will modify the dataset in a few ways
Step6: Count trips starting from a station
Next, count the number of trips that have been started from each station per day. The groupby function from Pandas will count the number of unique combinations of the start time and start station ID values. Then, the pivot function from Pandas can be used to convert the station IDs into columns (since they are the target data) and the counts as the values.
Also, use the add_prefix function to rename the columns and distinguish that the values indicate trips that have started from the station.
Step7: Count trips ending at a station
Running the following cell will repeat the same process as above, but will generate values for the number of trips that have ended at the station.
Step8: Putting it together
The following cell will combine both dataframes for trips started and ended at the stations. Then, all NaN values will be replaced with a 0 since this indicates that no trips started or ended at that particular station. Lastly, the dataset will be cleaned up by renaming the columns and converting the values to integers.
Step9: You are done with feature engineering for the Citibike Dataset! Now, you can move on to the external datasets you ingested in BigQuery to obtain more features, starting with Gas Prices.
Gas Prices Dataset
Now, perform feature engineering on the Gas Prices dataset. This includes cleaning the data, normalizing the price values, and transforming the data to match the Citibike dataset.
Load the data
Import libraries
Running the following cell will import the libraries needed to preprocess the external datasets.
Datetime
Step10: Load the data from BigQuery into a dataframe
Run the following cell to load the Gas Prices dataset from BigQuery into a dataframe. You will define a query that selects the columns needed from the gas prices dataset, run the query using the BigQuery client, and then convert it to a Pandas DataFrame.
If you named your dataset something other than new_york_citibike_trips, be sure to update the DATASET_NAME variable.
Step11: Normalize values
The gas price values range from around 2 USD to 5 USD. It is important to normalize these values and scale them to be between 0 and 1 so that all the values within our dataset are weighted consistently. Running the following cell will create a scaler using the MinMaxScaler from scikit-learn and then fit the gas prices to the scaler.
Step12: Copy prices for the week
The Citibike dataset contains pricing values for each day, however, the Gas Prices dataset contains one value per week. To get daily pricing values, you can assign a week's pricing value to each day of that particular week.
First, run the following cell to refactor the date so it matches the format of a datetime object.
Step13: Now, copy the gas price of one day for the whole week by adding new rows to the dataframe.
The following cell does this by applying a function to each row in the dataframe that
Step14: You have now finished transforming the Gas Prices dataset! Now you can move on to the next external dataset
Step15: Transform the holiday column
The purpose of the holiday feature column is to represent a binary value for whether there is a holiday on a specific day or not, rather than the type of holiday. Since this dataset contains only days with holidays, run the following cell to convert the holiday values to 1 (referring to True). Later, when combining the datasets, you will add values of 0 (referring to False) to dates that are present in the other datasets but not this one.
Step16: You have now finished transforming the US Holidays dataset! Now you can move on to the next external dataset
Step17: Normalize Values
Similarly to the gas price values, the precipitation and temperature values must be normalized so all the values within the dataset are weighted consistently.
Step18: Convert column data types
Date column
Run the following cell to change the datatype of the date column from a Datetime object to a string, so that it can be properly combined with the other datasets.
Step19: Impactful column
Run the following cell to encode the True and False column values to 0 or 1 so that they can be correctly interpreted by the machine learning model.
Step20: Combine the datasets
Now that all of the datasets have been transformed, combine them to create one table. The Pandas merge function can only combine two datasets at a time, so each dataset will be merged separetely.
Citibike Trips and Gas Prices
Step21: Improve the date feature
Now that all the datasets have been combined, you can separate the date column into more features such as the year, month, and day. Then, the date column can be dropped.
Step22: The following cell will extract the day of the week from the date information using the Datetime python library.
Step23: Upload the data to a GCS bucket
Now that you have finished feature engineering on all of the datasets, you will need to upload the data to a bucket so that it can be accessed later when training a model. Run the following cell to upload the final dataframe to the GCS bucket you specified earlier.
Step24: You have now finished feature engineering on all of the datasets! Model training is next. | Python Code:
import os
import pandas as pd
from google.cloud import storage
Explanation: Feature engineering with pandas and scikit-learn
This notebook demonstrates how to use Notebooks to perform feature engineering on a dataset using Pandas.
For each dataset, you will load the data into a Pandas DataFrame, clean and transform the columns into a usable format, and then restructure the data into feature and target data columns.
Before you jump in, let's cover some of the different tools you'll be using:
Vertex AI consists of tools that allow machine learning developers and data scientists to run their ML projects quickly and cost-effectively.
Cloud Storage is a unified object storage for developers and enterprises, from live data serving to data analytics/ML to data archiving.
BigQuery is a serverless, highly scalable, and cost-effective multi-cloud data warehouse.
Pandas is a data analysis and manipulation tool built on top of the Python programming language.
Scikit-Learn is a machine learning and data analysis tool for the Python programming language that provides simple and efficient tools to analyze or predict data.
Citibike Dataset
First, you will perform feature engineering on the Citibike dataset. This includes cleaning the data, extracting the necessary features, and transforming the data into feature columns.
Load the data
Import libraries
Running the following cell will import the libraries needed to preprocess the Citibike dataset.
Pandas: to store and manipulate the dataset
Google Cloud Storage: to retrieve the dataset from the GCS bucket where the dataset is stored
os: to retrieve environment variables
End of explanation
PROJECT_ID = os.getenv('PROJECT_ID', '[your-project-id]')
BUCKET_NAME = os.getenv('BUCKET_NAME', '[your-bucket-name]')
BLOB_PREFIX = 'clean_data/'
Explanation: Define constants
Define the name of your Google Cloud Storage bucket where the cleaned data is stored.
PROJECT_ID: unique identifier for your project
BUCKET_NAME: name of the bucket where the cleaned dataset is stored
BLOB_PREFIX: folder where the files are stored
End of explanation
# Create storage client
storage_client = storage.Client()
# List files in the bucket with the specified prefix
blobs = storage_client.list_blobs(BUCKET_NAME, prefix=BLOB_PREFIX)
Explanation: List the files
Since the data cleaning job produced multiple partitioned files in the GCS bucket, you will need to loop through each file to access its contents. The following cell will create a list of the files with the BLOB_PREFIX defined above so they can be referenced later when loading the data into a dataframe.
End of explanation
COLUMNS = (
'starttime',
'stoptime',
'start_station_id',
'end_station_id',
)
Explanation: Load the files into a dataframe
Now, you can load the files into a dataframe.
First, define the schema. From this dataset, you will need 4 columns:
starttime: to extract the day of the week and date of when the trip starts
stoptime: to extract the day of the week and date of when the trip has ended
start_station_id: to find out how many trips started at a station
end_station_id: to find out how many trips ended at a station
End of explanation
# Create empty dataframe
citibike_data = pd.DataFrame()
# For each file: load the contents into a dataframe
# and concatenate the new dataframe with the existing
for blob in blobs:
print("blob" + str(blob.name))
filename = f'gs://{BUCKET_NAME}/{blob.name}'
new_df = pd.read_csv(filename, compression='gzip', usecols=[1, 2, 3, 7], header=None,
names=COLUMNS, low_memory=False)
citibike_data = pd.concat([citibike_data, new_df])
citibike_data.head(10)
Explanation: Next, run the following cell to loop through the files in GCS, create a Pandas DataFrame, and view the first ten rows. The columns needed are the 1st, 2nd, 3rd, and 7th columns from left to right (starting with 0 at tripduration) when looking at the table in BigQuery.
End of explanation
# Drop rows with NaN values
citibike_data = citibike_data.dropna()
# Convert station IDs to integers
citibike_data['start_station_id'] = citibike_data['start_station_id'].astype('int32')
citibike_data['end_station_id'] = citibike_data['end_station_id'].astype('int32')
# Remove time from the time columns
citibike_data['starttime'] = citibike_data['starttime'].apply(lambda t: t.split("T")[0])
citibike_data['stoptime'] = citibike_data['stoptime'].apply(lambda t: t.split("T")[0])
citibike_data.head(10)
Explanation: Extract features
Reformat the data
The following cell will modify the dataset in a few ways:
Any rows with NaN values will be dropped
The station IDs will be converted from floats to integers
The times from the start time column will be removed since they are not needed
End of explanation
# Find unique combinations of start time and start station ID values
trips_started = (citibike_data.groupby(['starttime', 'start_station_id'])
.size().reset_index().rename(columns={0: 'count'}))
# Pivot to make station ID the columns and rename them
trips_started = (trips_started.pivot(index='starttime', columns='start_station_id', values='count')
.add_prefix('started_at_'))
trips_started.head(10)
Explanation: Count trips starting from a station
Next, count the number of trips that have been started from each station per day. The groupby function from Pandas will count the number of unique combinations of the start time and start station ID values. Then, the pivot function from Pandas can be used to convert the station IDs into columns (since they are the target data) and the counts as the values.
Also, use the add_prefix function to rename the columns and distinguish that the values indicate trips that have started from the station.
End of explanation
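The groupby-then-pivot pattern above is easier to see on a tiny made-up frame; the sketch below (invented sample data with station IDs 1 and 2) shows how daily counts per station become one column per station:
# Toy illustration of the groupby + pivot pattern used above (made-up data).
toy = pd.DataFrame({'starttime': ['2016-07-01', '2016-07-01', '2016-07-02'],
                    'start_station_id': [1, 1, 2]})
counts = toy.groupby(['starttime', 'start_station_id']).size().reset_index().rename(columns={0: 'count'})
print(counts.pivot(index='starttime', columns='start_station_id', values='count').add_prefix('started_at_'))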
# Find unique combinations of stop time and end station ID values
trips_ended = (citibike_data.groupby(['stoptime', 'end_station_id'])
.size().reset_index().rename(columns={0: 'count'}))
# Pivot to make station ID the columns and rename them
trips_ended = (trips_ended.pivot(index='stoptime', columns='end_station_id', values='count')
.add_prefix('ending_at_'))
trips_ended.head(10)
Explanation: Count trips ending at a station
Running the following cell will repeat the same process as above, but will generate values for the number of trips that have ended at the station.
End of explanation
# Combine the dataframes
# Set the index as row number instead of start time
# Fill the NaN values with 0's
citibike_df = (pd.concat([trips_started, trips_ended], axis=1)
.reset_index()
.fillna(0))
# Rename the column with start and end dates
citibike_df.rename(columns={'index': 'date'}, inplace=True)
# Convert all values to integers
for col in citibike_df.columns:
if col == 'date':
continue
citibike_df[col] = citibike_df[col].astype(int)
citibike_df.head(10)
Explanation: Putting it together
The following cell will combine both dataframes for trips started and ended at the stations. Then, all NaN values will be replaced with a 0 since this indicates that no trips started or ended at that particular station. Lastly, the dataset will be cleaned up by renaming the columns and converting the values to integers.
End of explanation
import datetime
from google.cloud import bigquery
from sklearn import preprocessing
Explanation: You are done with feature engineering for the Citibike Dataset! Now, you can move on to the external datasets you ingested in BigQuery to obtain more features, starting with Gas Prices.
Gas Prices Dataset
Now, perform feature engineering on the Gas Prices dataset. This includes cleaning the data, normalizing the price values, and transforming the data to match the Citibike dataset.
Load the data
Import libraries
Running the following cell will import the libraries needed to preprocess the external datasets.
Datetime: to manipulate the date column
BigQuery: to retrieve the datasets from BigQuery
scikit-learn: to normalize the numerical column values
End of explanation
LOCATION = 'US'
DATASET_NAME = 'new_york_citibike_trips'
# Create the BigQuery client
bigquery_client = bigquery.Client(location=LOCATION)
# Define the query
table = f'{PROJECT_ID}.{DATASET_NAME}.gas_prices'
query = f' SELECT Date as date, New_York_City_Average_USD_per_Gal as nyc_gas_price FROM {table}'
# Run the query
query_job = bigquery_client.query(
query,
location=LOCATION
)
# Convert to a dataframe
gas_df = query_job.to_dataframe()
gas_df.head(10)
Explanation: Load the data from BigQuery into a dataframe
Run the following cell to load the Gas Prices dataset from BigQuery into a dataframe. You will define a query that selects the columns needed from the gas prices dataset, run the query using the BigQuery client, and then convert it to a Pandas DataFrame.
If you named your dataset something other than new_york_citibike_trips, be sure to update the DATASET_NAME variable.
End of explanation
# Extract gas prices column as a numpy array
gas_values = gas_df[['nyc_gas_price']].values
# Create scaler from sklearn
min_max_scaler = preprocessing.MinMaxScaler()
# Fit values to the scaler and replace column with normalized values
gas_values_scaled = min_max_scaler.fit_transform(gas_values)
gas_df['nyc_gas_price'] = gas_values_scaled
gas_df.head(10)
Explanation: Normalize values
The gas price values range from around 2 USD to 5 USD. It is important to normalize these values and scale them to be between 0 and 1 so that all the values within our dataset are weighted consistently. Running the following cell will create a scaler using the MinMaxScaler from scikit-learn and then fit the gas prices to the scaler.
End of explanation
def refactor_date(date):
'''Refactor the date strings so they match the Citibike dataset'''
parts = date.split('/')
return f'{parts[2]}-{parts[0]}-{parts[1]}'
gas_df['date'] = gas_df['date'].apply(lambda d: refactor_date(d))
gas_df.head(10)
Explanation: Copy prices for the week
The Citibike dataset contains pricing values for each day, however, the Gas Prices dataset contains one value per week. To get daily pricing values, you can assign a week's pricing value to each day of that particular week.
First, run the following cell to refactor the date so it matches the format of a datetime object.
End of explanation
# Define list to hold new rows
new_rows = []
def copy_values_for_week(row):
'''Copies gas price of one day for the entire week '''
today = datetime.datetime.strptime(row['date'], '%Y-%m-%d')
# Loop through the next six days
for day in range(1, 7):
# Create and a new row for the next day
new_day = datetime.datetime.strftime(today + datetime.timedelta(days=day), '%Y-%m-%d')
new_row = {'date': new_day, 'nyc_gas_price': row['nyc_gas_price']}
new_rows.append(new_row)
# Apply copy function to dataframe
gas_df.apply(copy_values_for_week, axis=1)
# Add new rows to dataframe
gas_df = gas_df.append(new_rows)
gas_df
Explanation: Now, copy the gas price of one day for the whole week by adding new rows to the dataframe.
The following cell does this by applying a function to each row in the dataframe that:
+ Converts each date to a datetime object
+ Loops through the next six days to create new rows
+ Appends the new rows to a list
End of explanation
# Define the query
table = f'{PROJECT_ID}.{DATASET_NAME}.usholidays'
query = f' SELECT Date as date, Holiday as holiday FROM {table}'
# Run the query
query_job = bigquery_client.query(
query,
location=LOCATION,
)
# Convert to a dataframe
holiday_df = query_job.to_dataframe()
holiday_df
Explanation: You have now finished transforming the Gas Prices dataset! Now you can move on to the next external dataset: US Holidays.
US Holidays Dataset
Load the data
Run the following cell to load the US Holidays dataset from BigQuery into a dataframe. Similarly to loading the Gas Prices dataset, this query selects the columns needed, runs the query using the BigQuery client, and converts the job to a dataframe.
End of explanation
holiday_df['holiday'] = holiday_df['holiday'].apply(lambda h: 1)
holiday_df.head(10)
Explanation: Transform the holiday column
The purpose of the holiday feature column is to represent a binary value for whether there is a holiday on a specific day or not, rather than the type of holiday. Since this dataset contains only days with holidays, run the following cell to convert the holiday values to 1 (referring to True). Later, when combining the datasets, you will add values of 0 (referring to False) to dates that are present in the other datasets but not this one.
End of explanation
# Initialize combined weather dataframe
weather_df = pd.DataFrame()
years = ['2013', '2014', '2015', '2016', '2017', '2018']
for year in years:
# Define a query
query = f''' SELECT
date,
IF(MAX(haswx) = 'True', 'True', 'False') AS impactful,
MAX(prcp) AS prcp,
MAX(tmin) AS min_temp,
MAX(tmax) AS max_temp
FROM (
SELECT
wx.date,
IF (SUBSTR(wx.element, 0, 2) = 'WT', 'True', NULL) AS haswx,
IF (wx.element = 'PRCP', wx.value/10, NULL) AS prcp,
IF (wx.element = 'TMIN', wx.value/10, NULL) AS tmin,
IF (wx.element = 'TMAX', wx.value/10, NULL) AS tmax
FROM
`bigquery-public-data.ghcn_d.ghcnd_{year}` AS wx
WHERE
id = 'USW00094728')
GROUP BY
date
ORDER BY
date'''
# Run the query
query_job = bigquery_client.query(
query,
location=LOCATION
)
# Convert to a dataframe
curr_df = query_job.to_dataframe()
# Concatenate with combined dataframe
weather_df = pd.concat([weather_df, curr_df])
weather_df.head(10)
Explanation: You have now finished transforming the US Holidays dataset! Now you can move on to the next external dataset: Weather.
Weather Dataset
Load the data
Run the following cell to load the Weather dataset from BigQuery into a dataframe. For each year needed, you will:
Define a query that selects the required columns (whether there was impactful weather that day, precipitation (mm), minimum temperature, maximum temperature)
Run the query using the BigQuery client
Convert the job to a dataframe
Concatenate it with the combined dataframe
End of explanation
cols_to_normalize = ['prcp', 'min_temp', 'max_temp']
for col_name in cols_to_normalize:
# Extract values
temp_values = weather_df[[col_name]].values
# Fit values to the scaler and replace column with normalized values
temp_values_scaled = min_max_scaler.fit_transform(temp_values)
weather_df[col_name] = temp_values_scaled
weather_df.head(10)
Explanation: Normalize Values
Similarly to the gas price values, the precipitation and temperature values must be normalized so all the values within the dataset are weighted consistently.
End of explanation
weather_df['date'] = weather_df['date'].apply(lambda d: datetime.datetime.strftime(d, '%Y-%m-%d'))
weather_df.head(10)
Explanation: Convert column data types
Date column
Run the following cell to change the datatype of the date column from a Datetime object to a string, so that it can be properly combined with the other datasets.
End of explanation
weather_df['impactful'] = weather_df['impactful'].apply(lambda impact: 1 if impact == "True" else 0)
weather_df.head(10)
Explanation: Impactful column
Run the following cell to encode the True and False column values to 0 or 1 so that they can be correctly interpreted by the machine learning model.
End of explanation
# Merge both gas dataset with citibike dataset
final_df = pd.merge(gas_df, citibike_df, on="date")
# Merge combined dataset with holiday dataset
final_df = pd.merge(holiday_df, final_df, how="right", on="date").fillna(0)
# Merge combined dataset with weather dataset
final_df = pd.merge(weather_df, final_df, on="date")
final_df
Explanation: Combine the datasets
Now that all of the datasets have been transformed, combine them to create one table. The Pandas merge function can only combine two datasets at a time, so each dataset will be merged separately.
Citibike Trips and Gas Prices: merge on the date column by specifying on="date" to create a combined dataframe
Combined dataframe and US Holidays: merge both datasets on the date column, keep the dates of the combined dataframe by specifying how='right', and fill the empty rows with False
Combined dataframe and Weather: merge on the date column by specifying on="date" to create the final dataframe
End of explanation
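The how="right" merge plus fillna(0) is what turns the sparse holidays table into a dense daily flag; a tiny invented example of that behaviour:
# Toy illustration of the right-join + fillna(0) pattern used for the holiday flag.
days = pd.DataFrame({'date': ['2016-07-03', '2016-07-04'], 'trips': [5, 7]})
hols = pd.DataFrame({'date': ['2016-07-04'], 'holiday': [1]})
print(pd.merge(hols, days, how='right', on='date').fillna(0))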
# Define the name and year, month, and day columns
date_columns = final_df['date'].str.split('-', expand=True)
date_names = ['year', 'month', 'day']
# Add the columns at the start of the dataset
for i in range(3):
final_df.insert(0, date_names[i], date_columns[i])
final_df[date_names[i]] = final_df[date_names[i]].astype('int32')
# Remove the date column from the dataframe
final_df = final_df.drop('date', axis=1)
final_df.head(10)
Explanation: Improve the date feature
Now that all the datasets have been combined, you can separate the date column into more features such as the year, month, and day. Then, the date column can be dropped.
End of explanation
def find_weekday(df):
''' Creates a datetime object and returns the day of the week '''
date = datetime.datetime(int(df['year']), int(df['month']), int(df['day']))
return date.weekday()
# Apply the find_weekday() function to every row of the dataset
weekday_col = final_df.apply(find_weekday, axis=1)
# Insert the weekday column at the start
final_df.insert(0, 'weekday', weekday_col)
final_df
Explanation: The following cell will extract the day of the week from the date information using the Datetime python library.
End of explanation
# Get bucket using storage client
bucket = storage_client.get_bucket(BUCKET_NAME)
# Upload the final dataframe as a csv file to the bucket
bucket.blob('feature_engineering/final_data.csv').upload_from_string(final_df.to_csv(), 'text/csv')
Explanation: Upload the data to a GCS bucket
Now that you have finished feature engineering on all of the datasets, you will need to upload the data to a bucket so that it can be accessed later when training a model. Run the following cell to upload the final dataframe to the GCS bucket you specified earlier.
End of explanation
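A short usage note: once the CSV is in the bucket, a later training notebook can read it straight back from GCS (this assumes the gcsfs package is available to pandas):
# Reading the saved features back later (requires gcsfs for the gs:// path).
final_df = pd.read_csv(f'gs://{BUCKET_NAME}/feature_engineering/final_data.csv', index_col=0)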
###################################################################
###USED FOR INTERNAL TESTING TEARDOWN - USERS MAY SKIP THIS CELL###
###################################################################
def delete_blob_in_gcs(blob_name):
'''Delete a blob in GCS'''
blob = bucket.blob(blob_name)
blob.delete()
# Delete blob from GCS
delete_blob_in_gcs('clean_data/citibike.csv.gz')
delete_blob_in_gcs('holidays.csv')
delete_blob_in_gcs('gasprices.csv')
# Delete dataset from BigQuery
bigquery_client.delete_dataset(
f'{PROJECT_ID}.{DATASET_NAME}', delete_contents=True, not_found_ok=True
)
Explanation: You have now finished feature engineering on all of the datasets! Model training is next.
End of explanation |
398 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Business-Intelligence-Laboratory-2021---Pandas" data-toc-modified-id="Business-Intelligence-Laboratory-2021---Pandas-1">Business Intelligence Laboratory 2021 - Pandas</a></span></li><li><span><a href="#REPLACE-THIS-WITH-YOUR-NAME-(NEPTUN)" data-toc-modified-id="REPLACE-THIS-WITH-YOUR-NAME-(NEPTUN)-2">REPLACE THIS WITH YOUR NAME (NEPTUN)</a></span><ul class="toc-item"><li><span><a href="#REPLACE-with-today's-date]" data-toc-modified-id="REPLACE-with-today's-date]-2.1">REPLACE with today's date]</a></span></li><li><span><a href="#General-information" data-toc-modified-id="General-information-2.2">General information</a></span><ul class="toc-item"><li><span><a href="#Submission" data-toc-modified-id="Submission-2.2.1">Submission</a></span></li><li><span><a href="#Tips" data-toc-modified-id="Tips-2.2.2">Tips</a></span></li><li><span><a href="#Feedback" data-toc-modified-id="Feedback-2.2.3">Feedback</a></span></li></ul></li><li><span><a href="#Code-quality" data-toc-modified-id="Code-quality-2.3">Code quality</a></span></li><li><span><a href="#Figure-quality" data-toc-modified-id="Figure-quality-2.4">Figure quality</a></span></li></ul></li><li><span><a href="#Data-preparation" data-toc-modified-id="Data-preparation-3">Data preparation</a></span></li><li><span><a href="#Exploring-the-dataset" data-toc-modified-id="Exploring-the-dataset-4">Exploring the dataset</a></span></li><li><span><a href="#Basic-queries" data-toc-modified-id="Basic-queries-5">Basic queries</a></span><ul class="toc-item"><li><span><a href="#Which-movies-were-released-in-1956?" data-toc-modified-id="Which-movies-were-released-in-1956?-5.1">Which movies were released in 1956?</a></span></li><li><span><a href="#How-many-movies-were-released-in-the-80s?" data-toc-modified-id="How-many-movies-were-released-in-the-80s?-5.2">How many movies were released in the 80s?</a></span></li><li><span><a href="#When-were-the-Die-Hard-movies-released?" data-toc-modified-id="When-were-the-Die-Hard-movies-released?-5.3">When were the Die Hard movies released?</a></span></li><li><span><a href="#How-many-movies-are-both-action-and-romance?-What-about-action-or-romance?" data-toc-modified-id="How-many-movies-are-both-action-and-romance?-What-about-action-or-romance?-5.4">How many movies are both action and romance? What about action or romance?</a></span></li></ul></li><li><span><a href="#Task-1
Step1: Data preparation
Downloading the dataset
Step2: Loading the dataset
pd.read_table loads a tabular dataset. The full function signature is
Step3: Some improvements
Step4: We will use the release year of the movies frequently. Let's extract the release year of a movie into a separate column
Step5: The most common years are
Step6: video_release_date is always NaT (not a time), let's drop it
Step7: Exploring the dataset
describe generates descriptive statistics.
Step8: Only numeric columns are included by default. A single column (pd.Series) has a describe function too
Step9: Numeric statistics are available as separate functions too
Step10: Basic queries
Which movies were released in 1956?
Step11: How many movies were released in the 80s?
Let's print 5 examples too.
Step12: When were the Die Hard movies released?
Step13: Die Hard 4 and 5 are missing. This is because the dataset only contains movies released between
Step14: and Die Hard 4 and 5 were released in 2007 and 2013 respectively.
How many movies are both action and romance? What about action or romance?
Make sure you parenthesize the conditions
Step15: Task 1
Step16: Q1.2 What is the oldest movie?
Step17: Q1.3 What is the frequency of each genre? In other words how many movies are tagged as action, drama etc. (2 points)
The list of genres is given below.
Step18: Q1.4 How many genres does each movie have? (3 points)
You need a similar summation as the previous one but for each row. Add the count as a new column to the movies Dataframe.
Step19: Q1.5 Which movies have the most genres? (3 points)
There is more than one answer, so idxmax will not work this time. You should solve this task in two steps. First compute the maximum, then find the rows that match that number.
Step20: Q1.6* Extract the list of genres as a comma separated string. (4 points)
If a movie is tagged drama, thriller and romance, the genre string should be 'drama, thriller, romance'.
Since we need to work on multiple fields on each row, we need to use apply. The usage of apply is given. Your task is to implement get_genres. get_genres takes a row as its input and returns the formatted string. You can access each column as row[column_name].
Step21: Task 2
Step22: Let's plot it on a bar chart. We create the figure and axis objects beforehand. This allows adjusting the figure size and applying other changes to the figure.
Step23: Let's zoom into the 90s
Step24: Q2.1 Write a function that takes a genre and groups movies from the 90s of that genre by year. (3 points)
Step25: Q2.2 Plot the number of adventure movies from the 90s on a bar chart. Use your groupby_genre function. (2 points)
Step26: Task 3
Step27: Q3.2 We're building a traditional lexicon of the titles. What is the distribution of initial letters (i.e. how many titles start with S?)? Plot it on a bar chart.
Step 1. Compute frequencies. (3 points)
Step28: Step 2. Plot it on a bar chart in descending order. (3 points)
The most common letter should be the first bar.
Make the figure wider and fix the axis labels
Step29: Q3.3 Plot the distribution of release day (day of month) on a pie chart.
Step 1. groupby (2 points)
Step30: Step 2. pie chart. Add percent values. (3 points)
You should see that the 1st day of the month was by far the most common release day. The reason for this is most likely the lack of a specified day in the original release date (May 1996 instead of May 13, 1996).
Step31: Task 4
Step32: The timestamp column is a Unix timestamp, let's convert it to pd.DateTime
Step33: Merging it with movies
Step34: Q4.1 Load the users table from the file ml-100k/u.user. (3 points)
u.user has the following columns
Step35: Q4.2 Merge the users table with ratings. Do not discard any columns. (3 points)
Step36: Q4.3 Which 5 movies received the most ratings and how many times were they rated? (2 points)
Step37: Q4.4 How strict are people by occupation? Compute the average of ratings by occupation. Plot it on a bar chart in descending order.
Step 1. Compute the averages by occupation. (2 points)
Step38: Step 2. Plot it on a bar chart. (2 points)
Extra point
Step39: Q4.5 How likely are different age groups to rate movies? Compute the number of ratings by age grouped into 10-19, 20-29, etc. Plot it on a bar chart.
Step 1. Number of ratings by age group (3 points)
You can do this without pd.cut or pd.qcut. Think about how we handled decades earlier.
Step40: Step 2. Plot it on a bar chart. (2 points)
Step41: Task 5
Step42: Step 2. Plot it on a bar chart in descending order by score. Set the limits of the y-axis to (2.5, 4). (2 points)
Step43: Q5.2 Plot the average ratings by occupation and gender on a multiple bar plot. (4 points)
Tip
Step44: Q5.3 What hour of the day do different occupations rate? (3 points)
Create a function that computes the number of ratings per hour for a single occupation.
Step45: Q5.4 Plot the rating hours of marketing employees and programmers on two pie charts. (4 points)
A two-subplot figure is created. ax is an array of the two subplots, use ax[0] for marketing employees and ax[1] for programmers. Set the titles of the subplots accordingly.
Step46: Q5.5 Do older people prefer movies with longer titles? Compute the average title length by age group (0-10, 10-20).
Step1. compute mean length (4 points)
Tip
Step47: Step 2. Plot it on a bar chart. Choose a reasonable range for the y-axis. (2 points)
Step48: Q5.6 What are the highest rated movies among the movies that were rated at least 50 times? (5 points)
Return a Series of the top 10 such movies with their rating. | Python Code:
import pandas as pd
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
sns.set_context('notebook')
pd.options.display.max_colwidth = 100
Explanation: <h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Business-Intelligence-Laboratory-2021---Pandas" data-toc-modified-id="Business-Intelligence-Laboratory-2021---Pandas-1">Business Intelligence Laboratory 2021 - Pandas</a></span></li><li><span><a href="#REPLACE-THIS-WITH-YOUR-NAME-(NEPTUN)" data-toc-modified-id="REPLACE-THIS-WITH-YOUR-NAME-(NEPTUN)-2">REPLACE THIS WITH YOUR NAME (NEPTUN)</a></span><ul class="toc-item"><li><span><a href="#REPLACE-with-today's-date]" data-toc-modified-id="REPLACE-with-today's-date]-2.1">REPLACE with today's date]</a></span></li><li><span><a href="#General-information" data-toc-modified-id="General-information-2.2">General information</a></span><ul class="toc-item"><li><span><a href="#Submission" data-toc-modified-id="Submission-2.2.1">Submission</a></span></li><li><span><a href="#Tips" data-toc-modified-id="Tips-2.2.2">Tips</a></span></li><li><span><a href="#Feedback" data-toc-modified-id="Feedback-2.2.3">Feedback</a></span></li></ul></li><li><span><a href="#Code-quality" data-toc-modified-id="Code-quality-2.3">Code quality</a></span></li><li><span><a href="#Figure-quality" data-toc-modified-id="Figure-quality-2.4">Figure quality</a></span></li></ul></li><li><span><a href="#Data-preparation" data-toc-modified-id="Data-preparation-3">Data preparation</a></span></li><li><span><a href="#Exploring-the-dataset" data-toc-modified-id="Exploring-the-dataset-4">Exploring the dataset</a></span></li><li><span><a href="#Basic-queries" data-toc-modified-id="Basic-queries-5">Basic queries</a></span><ul class="toc-item"><li><span><a href="#Which-movies-were-released-in-1956?" data-toc-modified-id="Which-movies-were-released-in-1956?-5.1">Which movies were released in 1956?</a></span></li><li><span><a href="#How-many-movies-were-released-in-the-80s?" data-toc-modified-id="How-many-movies-were-released-in-the-80s?-5.2">How many movies were released in the 80s?</a></span></li><li><span><a href="#When-were-the-Die-Hard-movies-released?" data-toc-modified-id="When-were-the-Die-Hard-movies-released?-5.3">When were the Die Hard movies released?</a></span></li><li><span><a href="#How-many-movies-are-both-action-and-romance?-What-about-action-or-romance?" data-toc-modified-id="How-many-movies-are-both-action-and-romance?-What-about-action-or-romance?-5.4">How many movies are both action and romance? What about action or romance?</a></span></li></ul></li><li><span><a href="#Task-1:-Simple-queries" data-toc-modified-id="Task-1:-Simple-queries-6">Task 1: Simple queries</a></span><ul class="toc-item"><li><span><a href="#Q1.1-What-is-the-oldest-movie?" data-toc-modified-id="Q1.1-What-is-the-oldest-movie?-6.1">Q1.1 What is the oldest movie?</a></span></li><li><span><a href="#Q1.2-What-is-the-oldest-movie?" data-toc-modified-id="Q1.2-What-is-the-oldest-movie?-6.2">Q1.2 What is the oldest movie?</a></span></li><li><span><a href="#Q1.3-What-is-the-frequency-of-each-genre?-In-other-words-how-many-movies-are-tagged-as-action,-drama-etc.-(2-points)" data-toc-modified-id="Q1.3-What-is-the-frequency-of-each-genre?-In-other-words-how-many-movies-are-tagged-as-action,-drama-etc.-(2-points)-6.3">Q1.3 What is the frequency of each genre? In other words how many movies are tagged as action, drama etc. (2 points)</a></span></li><li><span><a href="#Q1.4-How-many-genres-does-each-movie-have?-(3-points)" data-toc-modified-id="Q1.4-How-many-genres-does-each-movie-have?-(3-points)-6.4">Q1.4 How many genres does each movie have? 
(3 points)</a></span></li><li><span><a href="#Q1.5-Which-movies-have-the-most-genres?-(3-points)" data-toc-modified-id="Q1.5-Which-movies-have-the-most-genres?-(3-points)-6.5">Q1.5 Which movies have the most genres? (3 points)</a></span></li><li><span><a href="#Q1.6*-Extract-the-list-of-genres-as-a-comma-separated-string.-(4-points)" data-toc-modified-id="Q1.6*-Extract-the-list-of-genres-as-a-comma-separated-string.-(4-points)-6.6">Q1.6* Extract the list of genres as a comma separated string. (4 points)</a></span></li></ul></li><li><span><a href="#Task-2:-Groupby-and-visualization" data-toc-modified-id="Task-2:-Groupby-and-visualization-7">Task 2: Groupby and visualization</a></span><ul class="toc-item"><li><span><a href="#Q2.1-Write-a-function-that-takes-a-genre-and-groups-movies-from-the-90s-of-that-genre-by-year.--(3-points)" data-toc-modified-id="Q2.1-Write-a-function-that-takes-a-genre-and-groups-movies-from-the-90s-of-that-genre-by-year.--(3-points)-7.1">Q2.1 Write a function that takes a genre and groups movies from the 90s of that genre by year. (3 points)</a></span></li><li><span><a href="#Q2.2-Plot-the-number-of-adventure-movies-from-the-90s-on-a-bar-chart.-Use-your-groupby_genre-function.-(2-points)" data-toc-modified-id="Q2.2-Plot-the-number-of-adventure-movies-from-the-90s-on-a-bar-chart.-Use-your-groupby_genre-function.-(2-points)-7.2">Q2.2 Plot the number of adventure movies from the 90s on a <em>bar</em> chart. Use your <code>groupby_genre</code> function. (2 points)</a></span></li></ul></li><li><span><a href="#Task-3:-String-and-date-manipulation" data-toc-modified-id="Task-3:-String-and-date-manipulation-8">Task 3: String and date manipulation</a></span><ul class="toc-item"><li><span><a href="#Q3.1-How-many-movies-have-title-longer-than-40-characters?-(3-points)" data-toc-modified-id="Q3.1-How-many-movies-have-title-longer-than-40-characters?-(3-points)-8.1">Q3.1 How many movies have title longer than 40 characters? (3 points)</a></span></li><li><span><a href="#Q3.2-We're-building-a-traditional-lexicon-of-the-titles.-What-is-the-distribution-of-initial-letters-(i.e.-how-many-titles-start-with-S?)?-Plot-it-on-a-bar-chart." data-toc-modified-id="Q3.2-We're-building-a-traditional-lexicon-of-the-titles.-What-is-the-distribution-of-initial-letters-(i.e.-how-many-titles-start-with-S?)?-Plot-it-on-a-bar-chart.-8.2">Q3.2 We're building a traditional lexicon of the titles. What is the distribution of initial letters (i.e. how many titles start with S?)? Plot it on a bar chart.</a></span><ul class="toc-item"><li><span><a href="#Step-1.-Compute-frequencies.-(3-points)" data-toc-modified-id="Step-1.-Compute-frequencies.-(3-points)-8.2.1">Step 1. Compute frequencies. (3 points)</a></span></li><li><span><a href="#Step-2.-Plot-it-on-a-bar-chart-in-descending-order.-(3-points)" data-toc-modified-id="Step-2.-Plot-it-on-a-bar-chart-in-descending-order.-(3-points)-8.2.2">Step 2. Plot it on a bar chart in descending order. (3 points)</a></span></li></ul></li><li><span><a href="#Q3.3-Plot-the-distribution-of-release-day-(day-of-month)-on-a-pie-chart." data-toc-modified-id="Q3.3-Plot-the-distribution-of-release-day-(day-of-month)-on-a-pie-chart.-8.3">Q3.3 Plot the distribution of release day (day of month) on a pie chart.</a></span><ul class="toc-item"><li><span><a href="#Step-1.-groupby-(2-points)" data-toc-modified-id="Step-1.-groupby-(2-points)-8.3.1">Step 1. 
groupby (2 points)</a></span></li></ul></li><li><span><a href="#Step-2.-pie-chart.-Add-percent-values.-(3-points)" data-toc-modified-id="Step-2.-pie-chart.-Add-percent-values.-(3-points)-8.4">Step 2. pie chart. Add percent values. (3 points)</a></span></li></ul></li><li><span><a href="#Task-4:-Handling-multiple-dataframes" data-toc-modified-id="Task-4:-Handling-multiple-dataframes-9">Task 4: Handling multiple dataframes</a></span><ul class="toc-item"><li><span><a href="#Q4.1-Load-the-users-table-from-the-file-ml-100k/u.user.-(3-points)" data-toc-modified-id="Q4.1-Load-the-users-table-from-the-file-ml-100k/u.user.-(3-points)-9.1">Q4.1 Load the users table from the file <code>ml-100k/u.user</code>. (3 points)</a></span></li><li><span><a href="#Q4.2-Merge-the-users-table-with-ratings.-Do-not-discard-any-columns.-(3-points)" data-toc-modified-id="Q4.2-Merge-the-users-table-with-ratings.-Do-not-discard-any-columns.-(3-points)-9.2">Q4.2 Merge the <code>users</code> table with <code>ratings</code>. Do not discard any columns. (3 points)</a></span></li><li><span><a href="#Q4.3-Which-5-movies-received-the-most-ratings-and-how-many-times-were-they-rated?-(2-points)" data-toc-modified-id="Q4.3-Which-5-movies-received-the-most-ratings-and-how-many-times-were-they-rated?-(2-points)-9.3">Q4.3 Which 5 movies received the most ratings and how many times were they rated? (2 points)</a></span></li><li><span><a href="#Q4.4-How-strict-are-people-by-occupation?-Compute-the-average-of-ratings-by-occupation.-Plot-it-on-a-bar-chart-in-descending-order." data-toc-modified-id="Q4.4-How-strict-are-people-by-occupation?-Compute-the-average-of-ratings-by-occupation.-Plot-it-on-a-bar-chart-in-descending-order.-9.4">Q4.4 How strict are people by occupation? Compute the average of ratings by occupation. Plot it on a bar chart in descending order.</a></span><ul class="toc-item"><li><span><a href="#Step-1.-Compute-the-averages-by-occupation.-(2-points)" data-toc-modified-id="Step-1.-Compute-the-averages-by-occupation.-(2-points)-9.4.1">Step 1. Compute the averages by occupation. (2 points)</a></span></li><li><span><a href="#Step-2.-Plot-it-on-a-bar-chart.-(2-points)" data-toc-modified-id="Step-2.-Plot-it-on-a-bar-chart.-(2-points)-9.4.2">Step 2. Plot it on a bar chart. (2 points)</a></span></li></ul></li><li><span><a href="#Q4.5-How-likely-are-different-age-groups-to-rate-movies?-Compute-the-number-of-ratings-by-age-grouped-into-10-19,-20-29,-etc.-Plot-it-on-a-bar-chart." data-toc-modified-id="Q4.5-How-likely-are-different-age-groups-to-rate-movies?-Compute-the-number-of-ratings-by-age-grouped-into-10-19,-20-29,-etc.-Plot-it-on-a-bar-chart.-9.5">Q4.5 How likely are different age groups to rate movies? Compute the number of ratings by age grouped into 10-19, 20-29, etc. Plot it on a bar chart.</a></span><ul class="toc-item"><li><span><a href="#Step-1.-Number-of-ratings-by-age-group-(3-points)" data-toc-modified-id="Step-1.-Number-of-ratings-by-age-group-(3-points)-9.5.1">Step 1. Number of ratings by age group (3 points)</a></span></li><li><span><a href="#Step-2.-Plot-it-on-a-bar-chart.-(2-points)" data-toc-modified-id="Step-2.-Plot-it-on-a-bar-chart.-(2-points)-9.5.2">Step 2. Plot it on a bar chart. (2 points)</a></span></li></ul></li></ul></li><li><span><a href="#Task-5:-Advanced-tasks" data-toc-modified-id="Task-5:-Advanced-tasks-10">Task 5: Advanced tasks</a></span><ul class="toc-item"><li><span><a href="#Q5.1-What-is-the-mean-of-ratings-by-genre?" 
data-toc-modified-id="Q5.1-What-is-the-mean-of-ratings-by-genre?-10.1">Q5.1 What is the mean of ratings by genre?</a></span><ul class="toc-item"><li><span><a href="#Step-1.-Compute-the-mean-scores.-(5-points)" data-toc-modified-id="Step-1.-Compute-the-mean-scores.-(5-points)-10.1.1">Step 1. Compute the mean scores. (5 points)</a></span></li><li><span><a href="#Step-2.-Plot-it-on-a-bar-chart-in-descending-order-by-score.-Set-the-limits-of-the-y-axis-to-(2.5,-4).-(2-points)" data-toc-modified-id="Step-2.-Plot-it-on-a-bar-chart-in-descending-order-by-score.-Set-the-limits-of-the-y-axis-to-(2.5,-4).-(2-points)-10.1.2">Step 2. Plot it on a bar chart in descending order by score. Set the limits of the y-axis to (2.5, 4). (2 points)</a></span></li></ul></li><li><span><a href="#Q5.2-Plot-the-average-ratings-by-occupation-and-gender-on-a-multiple-bar-plot.-(4-points)" data-toc-modified-id="Q5.2-Plot-the-average-ratings-by-occupation-and-gender-on-a-multiple-bar-plot.-(4-points)-10.2">Q5.2 Plot the average ratings by occupation <em>and</em> gender on a multiple bar plot. (4 points)</a></span></li><li><span><a href="#Q5.3-What-hour-of-the-day-do-different-occupations-rate?-(3-points)" data-toc-modified-id="Q5.3-What-hour-of-the-day-do-different-occupations-rate?-(3-points)-10.3">Q5.3 What hour of the day do different occupations rate? (3 points)</a></span></li><li><span><a href="#Q5.4-Plot-the-rating-hours-of-marketing-employees-and-programmers-on-two-pie-charts.-(4-points)" data-toc-modified-id="Q5.4-Plot-the-rating-hours-of-marketing-employees-and-programmers-on-two-pie-charts.-(4-points)-10.4">Q5.4 Plot the rating hours of marketing employees and programmers on two pie charts. (4 points)</a></span></li><li><span><a href="#Q5.5-Do-older-people-prefer-movies-with-longer-titles?-Compute-the-average-title-length-by-age-group-(0-10,-10-20)." data-toc-modified-id="Q5.5-Do-older-people-prefer-movies-with-longer-titles?-Compute-the-average-title-length-by-age-group-(0-10,-10-20).-10.5">Q5.5 Do older people prefer movies with longer titles? Compute the average title length by age group (0-10, 10-20).</a></span><ul class="toc-item"><li><span><a href="#Step1.-compute-mean-length-(4-points)" data-toc-modified-id="Step1.-compute-mean-length-(4-points)-10.5.1">Step1. compute mean length (4 points)</a></span></li><li><span><a href="#Step-2.-Plot-it-on-a-bar-chart.-Choose-a-reasonable-range-for-the-y-axis.-(2-points)" data-toc-modified-id="Step-2.-Plot-it-on-a-bar-chart.-Choose-a-reasonable-range-for-the-y-axis.-(2-points)-10.5.2">Step 2. Plot it on a bar chart. Choose a reasonable range for the y-axis. (2 points)</a></span></li></ul></li><li><span><a href="#Q5.6-What-are-the-highest-rated-movies-among-the-movies-that-were-rated-at-least-50-times?-(5-points)" data-toc-modified-id="Q5.6-What-are-the-highest-rated-movies-among-the-movies-that-were-rated-at-least-50-times?-(5-points)-10.6">Q5.6 What are the highest rated movies among the movies that were rated at least 50 times? (5 points)</a></span></li></ul></li></ul></div>
Business Intelligence Laboratory 2021 - Pandas
REPLACE THIS WITH YOUR NAME (NEPTUN)
[REPLACE with today's date]
General information
The goal of this notebook is to give an introduction to the pandas library, a popular data manipulation and analysis tool for Python.
Before completing this notebook, you should read the introductory material which is available in your Github Classroom repository or here.
Problems are numbered from Q1 to Q5 with many subproblems such as Q1.1. The scores range from 2 to 5 based on the difficulty of the problem. The maximum score for each Task is:
| Task | Score |
| ---- | ----|
| 1 | TODO |
| 2 | TODO |
| 3 | TODO |
| 4 | TODO |
| 5 | TODO |
| code style | 5 |
| figure quality | 5 |
Grades are determined using this table:
| Score | Grade |
| ---- | ----|
| TODO | 5 |
| TODO | 4 |
| TODO | 3 |
| TODO | 2 |
| TODO | 1 |
Task 1 to 4 are considered core exercises and completing all of them correctly results in a 5.
Task 5 contains advanced exercises for higher scores.
You can make up for mistakes in Task 1-4.
Your answer should go in place of YOUR CODE HERE. Please remove raise NotImplementedError.
Most of the tasks are automatically graded with nbgrader.
There are many visible and hidden tests.
Visible tests are available in this version, hidden tests are not.
This means that passing all visible tests does not ensure that your answer is correct.
Not passing the visible tests means that your answer is incorrect.
Do not delete or copy cells and do not edit the metadata of the cells.
You may add cells but they will not be graded.
Submission
You only need to submit this notebook through Github Classroom.
Please do not add any additional files such as the dataset.
Make sure that you commit and push the last version of your notebook.
VERY IMPORTANT Run Kernel->Restart & Run All and make sure that it finishes without errors.
If you skip exercises, you need to manually run the remaining cells.
Skipping exercises won't affect the autograder.
If you skip an exercise, please leave the solution cell as it is and do not remove the exception.
Tips
You generally don't need to leave any DataFrames printed as cell outputs. You can do it for debug purposes and it won't affect the autograder but please don't leave long tables in the output. Use .head() instead.
Be concise. All exercises can be solved with fewer than 5 lines of code and most are one-liners.
All exercises can be solved without for loops. Using a for loop for Q1.6 is acceptable.
Avoid overriding Python built-in functions with your own variables (max = 2).
If you mess up, you can always do one of the following
1. Kernel -> Restart & Run All - this will run all cells from top to bottom until an exception is thrown
1. Kernel -> Restart, Cell -> Run All Above - this will run all cells from top to bottom until the current cell is reached or an exception is thrown
If your notebook runs for longer than a minute, one or more of your solutions is very inefficient.
Feedback
Please fill out this short survey after you completed the problems.
Code quality
You can get 5 extra points for code quality.
Figure quality
You can get 5 extra points for the quality of your figures. Good figures have labeled axes with meaningful names, reasonable figure size and reasonable axes limits.
Extra attention to details also helps.
End of explanation
import os
data_dir = os.getenv('MOVIELENS')
if data_dir is None:
data_dir = ''
ml_path = os.path.join(data_dir, 'ml.zip')
if os.path.exists(ml_path):
print('File already exists, skipping download step.')
else:
print('Downloading the Movielens dataset')
import urllib
u = urllib.request.URLopener()
u.retrieve('http://files.grouplens.org/datasets/movielens/ml-100k.zip', ml_path)
unzip_path = os.path.join(data_dir, 'ml-100k')
if os.path.exists(unzip_path):
print('Dataset already unpacked, skipping unpacking step.')
else:
print('Unziping the dataset.')
from zipfile import ZipFile
with ZipFile(ml_path) as myzip:
myzip.extractall(data_dir)
data_dir = unzip_path
Explanation: Data preparation
Downloading the dataset:
End of explanation
# movies = pd.read_table('ml-100k/u.item') # it raises a UnicodeDecodeError because its encoding is not UTF-8
movies = pd.read_table(os.path.join(data_dir, 'u.item'), encoding='latin1')
movies.head()
Explanation: Loading the dataset
pd.read_table loads a tabular dataset. The full function signature is:
~~~
pandas.read_table(filepath_or_buffer: Union[str, pathlib.Path, IO[~AnyStr]], sep='\t', delimiter=None, header='infer', names=None, index_col=None, usecols=None, squeeze=False, prefix=None, mangle_dupe_cols=True, dtype=None, engine=None, converters=None, true_values=None, false_values=None, skipinitialspace=False, skiprows=None, skipfooter=0, nrows=None, na_values=None, keep_default_na=True, na_filter=True, verbose=False, skip_blank_lines=True, parse_dates=False, infer_datetime_format=False, keep_date_col=False, date_parser=None, dayfirst=False, cache_dates=True, iterator=False, chunksize=None, compression='infer', thousands=None, decimal: str = '.', lineterminator=None, quotechar='"', quoting=0, doublequote=True, escapechar=None, comment=None, encoding=None, dialect=None, error_bad_lines=True, warn_bad_lines=True, delim_whitespace=False, low_memory=True, memory_map=False, float_precision=None)
~~~
let's try it with defaults
End of explanation
column_names = [
'movie_id', 'title', 'release_date', 'video_release_date', 'imdb_url', 'unknown', 'action', 'adventure', 'animation',
'children', 'comedy', 'crime', 'documentary', 'drama', 'fantasy', 'film_noir', 'horror', 'musical', 'mystery',
'romance', 'sci_fi', 'thriller', 'war', 'western']
movies = pd.read_table(
os.path.join(data_dir, 'u.item'), sep='|',
names=column_names, encoding='latin1', index_col='movie_id',
parse_dates=['release_date', 'video_release_date']
)
movies.head()
Explanation: Some improvements:
Use a different separator. | instead of \t.
The first line of the file is used as the header. The real names of the columns are listed in the README; they can be specified with the names parameter.
read_table added an index (0..N-1), but the dataset already has an index, let's use that one (index_col='movie_id')
two columns, release_date and video_release_date, are dates; pandas can parse them and store them with its own datetime type.
End of explanation
movies['year'] = movies.release_date.dt.year
Explanation: We will use the release year of the movies frequently. Let's extract the release year of a movie into a separate column:
End of explanation
movies.year.value_counts().head()
Explanation: The most common years are:
End of explanation
movies.video_release_date.isnull().value_counts()
movies = movies.drop('video_release_date', axis=1)
Explanation: video_release_date is always NaT (not a time), let's drop it
End of explanation
movies.describe()
Explanation: Exploring the dataset
describe generates descriptive statistics.
End of explanation
movies.release_date.describe(datetime_is_numeric=True)
Explanation: Only numeric columns are included by default. A single column (pd.Series) has a describe function too
End of explanation
movies.mean(numeric_only=True)
Explanation: Numeric statistics are available as separate functions too:
count: number of non-NA cells. NA is NOT the same as 0
mean: average
std: standard deviation
var: variance
min, max
etc.
numeric_only=True excludes date and string columns.
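For example (a quick illustration, not an exercise), the same statistics can be requested one column at a time:
~~~
movies['year'].count()  # number of non-NA release years
movies['year'].mean()   # average release year
movies['year'].std()    # standard deviation of the release year
~~~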
End of explanation
movies[movies.release_date.dt.year==1956]
Explanation: Basic queries
Which movies were released in 1956?
End of explanation
d = movies[(movies.release_date.dt.year >= 1980) & (movies.release_date.dt.year < 1990)]
print(f"{len(d)} movies were released in the 80s.")
print("\nA few examples:")
print("\n".join(d.sample(5).title))
Explanation: How many movies were released in the 80s?
Let's print 5 examples too.
End of explanation
movies[movies.title.str.contains('Die Hard')]
Explanation: When were the Die Hard movies released?
End of explanation
movies.release_date.min(), movies.release_date.max()
Explanation: Die Hard 4 and 5 are missing. This is because the dataset only contains movies released between:
End of explanation
print("Action and romance:", len(movies[(movies.action==1) & (movies.romance==1)]))
print("Action or romance:", len(movies[(movies.action==1) | (movies.romance==1)]))
Explanation: and Die Hard 4 and 5 were released in 2007 and 2013 respectively.
How many movies are both action and romance? What about action or romance?
Make sure you parenthesize the conditions
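A short note on why the parentheses are required (comments only, no new functionality):
~~~
# & binds tighter than ==, so without parentheses
#   movies.action == 1 & movies.romance == 1
# is parsed as the chained comparison
#   movies.action == (1 & movies.romance) == 1
# which raises "The truth value of a Series is ambiguous".
~~~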
End of explanation
# children_drama = ...
# YOUR CODE HERE
raise NotImplementedError()
assert isinstance(children_drama, pd.DataFrame)
assert len(children_drama) == 19
assert 'Bogus (1996)' in children_drama.title.values
assert 'Sliding Doors (1998)' not in children_drama.title.values
Explanation: Task 1: Simple queries
Q1.1 Which movies are tagged as both children and drama?
End of explanation
# oldest_movie_title = ...
# YOUR CODE HERE
raise NotImplementedError()
assert oldest_movie_title == 'Nosferatu (Nosferatu, eine Symphonie des Grauens) (1922)'
Explanation: Q1.2 What is the oldest movie?
End of explanation
genres = ['action', 'adventure', 'animation', 'children', 'comedy', 'crime',
'documentary', 'drama', 'fantasy', 'film_noir', 'horror', 'musical',
'mystery', 'romance', 'sci_fi', 'thriller', 'war', 'western']
# YOUR CODE HERE
raise NotImplementedError()
assert len(genre_frequency) == len(genres)
for genre in genres:
assert genre in genre_frequency.index
assert genre_frequency.loc['musical'] == 56
assert genre_frequency.loc['thriller'] == 251
Explanation: Q1.3 What is the frequency of each genre? In other words how many movies are tagged as action, drama etc. (2 points)
The list of genres is given below.
End of explanation
# movies['genre_count'] = ...
# YOUR CODE HERE
raise NotImplementedError()
assert 'genre_count' in movies.columns
assert movies['genre_count'].max() == 6
assert movies['genre_count'].min() == 0
Explanation: Q1.4 How many genres does each movie have? (3 points)
You need a similar summation as the previous one but for each row. Add the count as a new column to the movies Dataframe.
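As a generic reminder of column-wise versus row-wise sums (a toy frame, not the solution itself):
~~~
toy = pd.DataFrame({'a': [1, 0], 'b': [1, 1]})
toy.sum()        # per column: a -> 1, b -> 2
toy.sum(axis=1)  # per row:    0 -> 2, 1 -> 1
~~~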
End of explanation
# ...
# movies_with_most_genres = ...
# YOUR CODE HERE
raise NotImplementedError()
assert len(movies_with_most_genres) == 3
# 6 is the maximum genre count, all movies have 6 genres
assert movies_with_most_genres['genre_count'].unique() == [6]
assert 'Empire Strikes Back, The (1980)' in movies_with_most_genres['title'].values
Explanation: Q1.5 Which movies have the most genres? (3 points)
There is more than one answer, so idxmax will not work this time. You should solve this task in two steps. First compute the maximum, then find the rows that match that number.
End of explanation
def get_genres(row):
# YOUR CODE HERE
raise NotImplementedError()
movies['genres'] = movies[genres].apply(get_genres, axis=1)
assert isinstance(movies['genres'].iloc[0], str)
empire_genres = movies[movies['title']=='Empire Strikes Back, The (1980)']['genres'].iloc[0]
em_genres = set()
for genre in empire_genres.split(","):
em_genres.add(genre.strip())
assert em_genres == {'action', 'adventure', 'drama', 'romance', 'sci_fi', 'war'}
Explanation: Q1.6* Extract the list of genres as a comma separated string. (4 points)
If a movie is tagged drama, thriller and romance, the genre string should be 'drama, thriller, romance'.
Since we need to work on multiple fields on each row, we need to use apply. The usage of apply is given. Your task is to implement get_genres. get_genres takes a row as its input and returns the formatted string. You can access each column as row[column_name].
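As a reminder of how row-wise apply works (an unrelated toy example, not the solution):
~~~
# Each row is passed to the function as a Series, so fields are read as row[column_name]
movies.apply(lambda row: row['title'].upper(), axis=1).head()
~~~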
End of explanation
movies.groupby(movies['year'] // 10 * 10).size()
Explanation: Task 2: Groupby and visualization
How many movies are released each decade? For this we will groupby on a condition:
End of explanation
fig, ax = plt.subplots(1, figsize=(10, 6))
movies.groupby(movies['year'] // 10 * 10).size().plot(kind='bar', ax=ax)
ax.set_ylabel("Movie count")
ax.set_title("Movies per decade")
ax.grid(axis='y')
Explanation: Let's plot it on a bar chart. We create the figure and axis objects beforehand. This allows adjusting the figure size and applying other changes to the figure.
End of explanation
fig, ax = plt.subplots(1, figsize=(10, 6))
m = movies[movies.year>=1990]
m.groupby('year').size().plot(kind='bar', ax=ax)
Explanation: Let's zoom into the 90s:
Most movies were released in the late 80s and 90s, so let's zoom in. Let's also change the figure size.
We create the plot object with one subplot, we then specify which axis pandas should use for plotting (ax=ax).
End of explanation
def groupby_genre(df, genre):
# YOUR CODE HERE
raise NotImplementedError()
crime = groupby_genre(movies, 'crime')
assert 1993 in crime.groups
assert 1989 not in crime.groups
assert type(crime) == pd.core.groupby.DataFrameGroupBy
assert len(crime) == 8 # 1990, 1992-98
assert crime.size().loc[1996] == 21
Explanation: Q2.1 Write a function that takes a genre and groups movies from the 90s of that genre by year. (3 points)
End of explanation
# YOUR CODE HERE
raise NotImplementedError()
Explanation: Q2.2 Plot the number of adventure movies from the 90s on a bar chart. Use your groupby_genre function. (2 points)
End of explanation
def long_titles(df):
# YOUR CODE HERE
raise NotImplementedError()
title_cnt = long_titles(movies)
assert type(title_cnt) == int
Explanation: Task 3: String and date manipulation
Q3.1 How many movies have a title longer than 40 characters? (3 points)
End of explanation
def compute_initial_letter_frequencies(df):
# YOUR CODE HERE
raise NotImplementedError()
initial = compute_initial_letter_frequencies(movies)
assert type(initial) == pd.Series
# frequency counts should be >= 1
assert initial.min() >= 1
# the largest one cannot be larger than the full dataframe
assert initial.max() <= len(movies)
Explanation: Q3.2 We're building a traditional lexicon of the titles. What is the distribution of initial letters (i.e. how many titles start with S?)? Plot it on a bar chart.
Step 1. Compute frequencies. (3 points)
End of explanation
# YOUR CODE HERE
raise NotImplementedError()
Explanation: Step 2. Plot it on a bar chart in descending order. (3 points)
The most common letter should be the first bar.
Make the figure wider and fix the axis labels
End of explanation
def groupby_release_day(df):
# YOUR CODE HERE
raise NotImplementedError()
by_day = groupby_release_day(movies)
assert type(by_day) == pd.core.groupby.DataFrameGroupBy
# the longest month is 31 days
assert len(by_day) < 32
# shouldn't group on day of week
assert len(by_day) > 7
Explanation: Q3.3 Plot the distribution of release day (day of month) on a pie chart.
Step 1. groupby (2 points)
End of explanation
# YOUR CODE HERE
raise NotImplementedError()
Explanation: Step 2. pie chart. Add percent values. (3 points)
You should see that the 1st day of the month was by far the most common release day. The reason for this is most likely the lack of a specified day in the original release date (May 1996 instead of May 13, 1996).
End of explanation
cols = ['user', 'movie_id', 'rating', 'timestamp']
ratings = pd.read_table(os.path.join(data_dir, "u.data"), names=cols)
ratings.head()
Explanation: Task 4: Handling multiple dataframes
The main table of this dataset is u.data with 100000 ratings. We can load it the following way:
End of explanation
ratings['timestamp'] = pd.to_datetime(ratings.timestamp, unit='s')
ratings.head()
Explanation: The timestamp column is a Unix timestamp, let's convert it to pd.DateTime:
End of explanation
ratings = pd.merge(ratings, movies, left_on='movie_id', right_index=True)
Explanation: Merging it with movies:
End of explanation
# raw_users = ...
# YOUR CODE HERE
raise NotImplementedError()
assert type(users) == pd.DataFrame
# there is no user with index=0
assert 0 not in users.index
assert users.shape == (943, 4)
assert list(users.columns) == ['age', 'gender', 'occupation', 'zip']
Explanation: Q4.1 Load the users table from the file ml-100k/u.user. (3 points)
u.user has the following columns: user_id, age, gender, occupation, zip. Use user_id as the index.
End of explanation
# ratings = ratings.merge...
# YOUR CODE HERE
raise NotImplementedError()
assert type(ratings) == pd.DataFrame
# all movies have ratings (nunique returns the number of unique elements)
assert ratings['movie_id'].nunique() == 1682
Explanation: Q4.2 Merge the users table with ratings. Do not discard any columns. (3 points)
End of explanation
# most_rated = ...
# YOUR CODE HERE
raise NotImplementedError()
assert isinstance(most_rated, pd.Series)
assert len(most_rated) == 5
assert 'Liar Liar (1997)' in most_rated
assert most_rated.loc['Fargo (1996)'] > 500
Explanation: Q4.3 Which 5 movies received the most ratings and how many times were they rated? (2 points)
End of explanation
# mean_by_occupation = ...
# YOUR CODE HERE
raise NotImplementedError()
assert isinstance(mean_by_occupation, pd.Series)
# ratings are between 1 and 5
assert mean_by_occupation.min() > 1
assert mean_by_occupation.max() < 5
Explanation: Q4.4 How strict are people by occupation? Compute the average of ratings by occupation. Plot it on a bar chart in descending order.
Step 1. Compute the averages by occupation. (2 points)
End of explanation
# YOUR CODE HERE
raise NotImplementedError()
Explanation: Step 2. Plot it on a bar chart. (2 points)
Extra point: make the bar chart wider and restrict the y-axis to (2.5, 4).
End of explanation
def count_ratings_by_age_group(ratings):
# YOUR CODE HERE
raise NotImplementedError()
rating_by_age_group = count_ratings_by_age_group(ratings)
assert isinstance(rating_by_age_group, pd.Series)
assert 20 in rating_by_age_group
assert 25 not in rating_by_age_group
Explanation: Q4.5 How likely are different age groups to rate movies? Compute the number of ratings by age grouped into 10-19, 20-29, etc. Plot it on a bar chart.
Step 1. Number of ratings by age group (3 points)
You can do this without pd.cut or pd.qcut. Think about how we handled decades earlier.
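For instance (an unrelated toy Series, not the solution itself), integer division buckets values into decades:
~~~
pd.Series([23, 37, 41]) // 10 * 10   # -> 20, 30, 40
~~~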
End of explanation
# YOUR CODE HERE
raise NotImplementedError()
Explanation: Step 2. Plot it on a bar chart. (2 points)
End of explanation
genres = ['unknown', 'action', 'adventure', 'animation',
'children', 'comedy', 'crime', 'documentary', 'drama', 'fantasy',
'film_noir', 'horror', 'musical', 'mystery', 'romance', 'sci_fi',
'thriller', 'war', 'western']
def compute_mean_rating_by_genre(ratings):
# YOUR CODE HERE
raise NotImplementedError()
genre_rating = compute_mean_rating_by_genre(ratings)
assert len(genre_rating) == len(genres)
# all means are between 3 and 4
assert genre_rating.min() > 3.0
assert genre_rating.max() < 4.0
# film noir is rated highest
assert genre_rating.idxmax() == 'film_noir'
for g in genres:
assert g in genre_rating.index
assert abs(genre_rating.loc['adventure'] - 3.503528) < 1e-3
Explanation: Task 5: Advanced tasks
Q5.1 What is the mean of ratings by genre?
If a movie has multiple genres, include it in every genre.
Step 1. Compute the mean scores. (5 points)
There are many ways to solve this problem. Try to do it without explicit for loops.
End of explanation
# YOUR CODE HERE
raise NotImplementedError()
Explanation: Step 2. Plot it on a bar chart in descending order by score. Set the limits of the y-axis to (2.5, 4). (2 points)
End of explanation
# YOUR CODE HERE
raise NotImplementedError()
Explanation: Q5.2 Plot the average ratings by occupation and gender on a multiple bar plot. (4 points)
Tip: there is an example of a multiple bar plot here
Tip 2: there are many ways to solve this problem, one is a one-liner using DataFrame.unstack. It's a little longer if you make the figure nicer.
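To see what unstack does on a toy frame (illustration only, unrelated to the movie data):
~~~
toy = pd.DataFrame({'grp': ['x', 'x', 'y'], 'sub': ['p', 'q', 'p'], 'val': [1, 2, 3]})
toy.groupby(['grp', 'sub'])['val'].mean().unstack()   # 'sub' values become the columns
~~~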
End of explanation
def count_rating_by_hour_occupation(ratings, occupation):
# YOUR CODE HERE
raise NotImplementedError()
marketing = count_rating_by_hour_occupation(ratings, "marketing")
assert isinstance(marketing, pd.Series)
# there are only 24 hours
assert len(marketing) < 25
Explanation: Q5.3 What hour of the day do different occupations rate? (3 points)
Create a function that computes the number of ratings per hour for a single occupation.
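Recall from the earlier conversion that timestamp is a datetime column; its hour is available via the .dt accessor (shown as a reminder, not the full solution):
~~~
ratings['timestamp'].dt.hour.head()   # values in 0-23
~~~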
End of explanation
fig, ax = plt.subplots(1, 2, figsize=(12, 6))
# YOUR CODE HERE
raise NotImplementedError()
Explanation: Q5.4 Plot the rating hours of marketing employees and programmers on two pie charts. (4 points)
A two-subplot figure is created. ax is an array of the two subplots, use ax[0] for marketing employees and ax[1] for programmers. Set the titles of the subplots accordingly.
End of explanation
def get_mean_length_by_age_group(ratings):
# YOUR CODE HERE
raise NotImplementedError()
title_len_by_age = get_mean_length_by_age_group(ratings)
assert isinstance(title_len_by_age, pd.Series)
assert len(title_len_by_age) == 8
# titles are long
assert title_len_by_age.min() >= 20
Explanation: Q5.5 Do older people prefer movies with longer titles? Compute the average title length by age group (0-10, 10-20).
Step1. compute mean length (4 points)
Tip: You should probably create a copy of some of the columns.
End of explanation
# YOUR CODE HERE
raise NotImplementedError()
Explanation: Step 2. Plot it on a bar chart. Choose a reasonable range for the y-axis. (2 points)
End of explanation
# highest_rated = ...
# YOUR CODE HERE
raise NotImplementedError()
assert len(highest_rated) == 10
assert abs(highest_rated.max() - 4.491071) < 1e-3
assert 'Star Wars (1977)' in highest_rated.index
Explanation: Q5.6 What are the highest rated movies among the movies that were rated at least 50 times? (5 points)
Return a Series of the top 10 such movies with their rating.
End of explanation |
399 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2021 Google LLC
Licensed under the Apache License, Version 2.0 (the "License")
Step1: Retrain EfficientDet for the Edge TPU with TensorFlow Lite Model Maker
In this tutorial, we'll retrain the EfficientDet-Lite object detection model (derived from EfficientDet) using the TensorFlow Lite Model Maker library, and then compile it to run on the Coral Edge TPU. All in about 30 minutes.
By default, we'll retrain the model using a publicly available dataset of salad photos, teaching the model to recognize a salad and some of the ingredients. But we've also provided code so you can upload your own training dataset in the Pascal VOC XML format.
Here's an example of the salad training results
Step2: Load the training data
To use the default salad training dataset, just run all the code below as-is.
But if you want to train with your own image dataset, follow these steps
Step3: Load the salads CSV dataset
Model Maker requires that we load our dataset using the DataLoader API. So in this case, we'll load it from a CSV file that defines 175 images for training, 25 images for validation, and 25 images for testing.
Step4: If you want to load your own dataset as a CSV file, you can learn more about the format in Formatting a training data CSV. You can load your CSV either from Cloud Storage (as shown above) or from a local path.
DataLoader can also load your dataset in other formats, such as from a set of TFRecord files or from a local directory using the Pascal VOC format (shown below for a custom dataset).
(Optional) Load your own Pascal VOC dataset
To use your custom dataset, you need to modify a few variables here, such as your ZIP filename, your label map, and the path to your images/annotations
Step6: Now you're ready to train the model with your custom dataset. But before you run the notebook, you should also skip to the Export to TensorFlow Lite section and change the TFLITE_FILENAME and LABELS_FILENAME for your exported files.
Then run the whole notebook by clicking Runtime > Run all.
Step7: Select the model spec
Model Maker supports the EfficientDet-Lite family of object detection models that are compatible with the Edge TPU. (EfficientDet-Lite is derived from EfficientDet, which offers state-of-the-art accuracy in a small model size). There are several model sizes you can choose from
Step8: The EfficientDetLite0Spec constructor also supports several arguments that specify training options, such as the max number of detections (default is 25 for the TF Lite model) and whether to use Cloud TPUs for training. You can also use the constructor to specify the number of training epochs and the batch size, but you can also specify those in the next step.
Create and train the model
Now we need to create our model according to the model spec, load our dataset into the model, specify training parameters, and begin training.
Using Model Maker, we accomplished all of that with create()
Step9: Evaluate the model
Now we'll use the test dataset to evaluate how well the model performs with data it has never seen before.
The evaluate() method provides output in the style of COCO evaluation metrics
Step10: Because the default batch size for EfficientDetLite models is 64, this needs only 1 step to go through all 25 images in the salad test set. You can also specify the batch_size argument when you call evaluate().
Export to TensorFlow Lite
Next, we'll export the model to the TensorFlow Lite format. By default, the export() method performs full integer post-training quantization, which is exactly what we need for compatibility with the Edge TPU. (Model Maker uses the same dataset we gave to our model spec as a representative dataset, which is required for full-int quantization.)
We just need to specify the export directory and format. By default, it exports to TF Lite, but we also want a labels file, so we declare both
Step11: Evaluate the TF Lite model
Exporting the model to TensorFlow Lite can affect the model accuracy, due to the reduced numerical precision from quantization and because the original TensorFlow model uses per-class non-max suppression (NMS) for post-processing, while the TF Lite model uses global NMS, which is faster but less accurate.
Therefore you should always evaluate the exported TF Lite model and be sure it still meets your requirements
Step12: Try the TFLite model
Just to be sure of things, let's run the model ourselves with an image from the test set.
Step14: To simplify our code, we'll use the PyCoral API
Step15: Compile for the Edge TPU
First we need to download the Edge TPU Compiler
Step16: Before compiling the .tflite file for the Edge TPU, it's important to consider whether your model will fit into the Edge TPU memory.
The Edge TPU has approximately 8 MB of SRAM for caching model parameters, so any model close to or over 8 MB will not fit onto the Edge TPU memory. That means the inference times are longer, because some model parameters must be fetched from the host system memory.
One way to eliminate the extra latency is to use model pipelining, which splits the model into segments that can run on separate Edge TPUs in series. This can significantly reduce the latency for big models.
The following table provides recommendations for the number of Edge TPUs to use with each EfficientDet-Lite model.
| Model architecture | Minimum TPUs | Recommended TPUs |
|--------------------|-------|-------|
| EfficientDet-Lite0 | 1 | 1 |
| EfficientDet-Lite1 | 1 | 1 |
| EfficientDet-Lite2 | 1 | 2 |
| EfficientDet-Lite3 | 2 | 2 |
| EfficientDet-Lite4 | 2 | 3 |
If you need extra Edge TPUs for your model, then update NUMBER_OF_TPUS here
Step17: Beware when using multiple segments | Python Code:
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2021 Google LLC
Licensed under the Apache License, Version 2.0 (the "License")
End of explanation
!pip install -q tflite-model-maker
import numpy as np
import os
from tflite_model_maker.config import ExportFormat
from tflite_model_maker import model_spec
from tflite_model_maker import object_detector
import tensorflow as tf
assert tf.__version__.startswith('2')
tf.get_logger().setLevel('ERROR')
from absl import logging
logging.set_verbosity(logging.ERROR)
Explanation: Retrain EfficientDet for the Edge TPU with TensorFlow Lite Model Maker
In this tutorial, we'll retrain the EfficientDet-Lite object detection model (derived from EfficientDet) using the TensorFlow Lite Model Maker library, and then compile it to run on the Coral Edge TPU. All in about 30 minutes.
By default, we'll retrain the model using a publicly available dataset of salad photos, teaching the model to recognize a salad and some of the ingredients. But we've also provided code so you can upload your own training dataset in the Pascal VOC XML format.
Here's an example of the salad training results:
<img src="https://storage.googleapis.com/site_and_emails_static_assets/Images/efficientdet-salads.png?" width="400" hspace="0">
<a href="https://colab.research.google.com/github/google-coral/tutorials/blob/master/retrain_efficientdet_model_maker_tf2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab"></a>
<a href="https://github.com/google-coral/tutorials/blob/master/retrain_efficientdet_model_maker_tf2.ipynb" target="_parent"><img src="https://img.shields.io/static/v1?logo=GitHub&label=&color=333333&style=flat&message=View%20on%20GitHub" alt="View in GitHub"></a>
If you want to run the notebook with the salad dataset, you can run the whole thing now by clicking Runtime > Run all in the Colab toolbar. But if you want to use your own dataset, then continue down to Load the training data and follow the instructions there.
Note: If using a custom dataset, beware that if your dataset includes more than 20 classes, you'll probably have slower inference speeds compared to if you have fewer classes. This is due to an aspect of the EfficientDet architecture in which a certain layer cannot compile for the Edge TPU when it carries more than 20 classes.
Import the required packages
End of explanation
use_custom_dataset = False #@param ["False", "True"] {type:"raw"}
dataset_is_split = False #@param ["False", "True"] {type:"raw"}
Explanation: Load the training data
To use the default salad training dataset, just run all the code below as-is.
But if you want to train with your own image dataset, follow these steps:
Be sure your dataset is annotated in Pascal VOC XML (various tools can help create VOC annotations, such as LabelImg). Then create a ZIP file with all your JPG images and XML files (JPG and XML files can all be in one directory or in separate directories).
Click the Files tab in the left panel and just drag-drop your ZIP file there to upload it.
Use the following drop-down option to set use_custom_dataset to True.
If your dataset is already split into separate directories for training, validation, and testing, also set dataset_is_split to True. (If your dataset is not split, leave it False and we'll split it below.)
Then skip to Load your own Pascal VOC dataset and follow the rest of the instructions there.
End of explanation
if not use_custom_dataset:
train_data, validation_data, test_data = object_detector.DataLoader.from_csv('gs://cloud-ml-data/img/openimage/csv/salads_ml_use.csv')
Explanation: Load the salads CSV dataset
Model Maker requires that we load our dataset using the DataLoader API. So in this case, we'll load it from a CSV file that defines 175 images for training, 25 images for validation, and 25 images for testing.
End of explanation
if use_custom_dataset:
# The ZIP file you uploaded:
!unzip dataset.zip
# Your labels map as a dictionary (zero is reserved):
label_map = {1: 'apple', 2: 'banana'}
if dataset_is_split:
# If your dataset is already split, specify each path:
train_images_dir = 'dataset/train/images'
train_annotations_dir = 'dataset/train/annotations'
val_images_dir = 'dataset/validation/images'
val_annotations_dir = 'dataset/validation/annotations'
test_images_dir = 'dataset/test/images'
test_annotations_dir = 'dataset/test/annotations'
else:
# If it's NOT split yet, specify the path to all images and annotations
images_in = 'dataset/images'
annotations_in = 'dataset/annotations'
Explanation: If you want to load your own dataset as a CSV file, you can learn more about the format in Formatting a training data CSV. You can load your CSV either from Cloud Storage (as shown above) or from a local path.
DataLoader can also load your dataset in other formats, such as from a set of TFRecord files or from a local directory using the Pascal VOC format (shown below for a custom dataset).
(Optional) Load your own Pascal VOC dataset
To use your custom dataset, you need to modify a few variables here, such as your ZIP filename, your label map, and the path to your images/annotations:
End of explanation
#@markdown Be sure you run this cell. It's hiding the `split_dataset()` function used in the next code block.
import os
import random
import shutil
def split_dataset(images_path, annotations_path, val_split, test_split, out_path):
Splits a directory of sorted images/annotations into training, validation, and test sets.
Args:
images_path: Path to the directory with your images (JPGs).
annotations_path: Path to a directory with your VOC XML annotation files,
with filenames corresponding to image filenames. This may be the same path
used for images_path.
val_split: Fraction of data to reserve for validation (float between 0 and 1).
test_split: Fraction of data to reserve for test (float between 0 and 1).
Returns:
The paths for the split images/annotations (train_dir, val_dir, test_dir)
_, dirs, _ = next(os.walk(images_path))
train_dir = os.path.join(out_path, 'train')
val_dir = os.path.join(out_path, 'validation')
test_dir = os.path.join(out_path, 'test')
IMAGES_TRAIN_DIR = os.path.join(train_dir, 'images')
IMAGES_VAL_DIR = os.path.join(val_dir, 'images')
IMAGES_TEST_DIR = os.path.join(test_dir, 'images')
os.makedirs(IMAGES_TRAIN_DIR, exist_ok=True)
os.makedirs(IMAGES_VAL_DIR, exist_ok=True)
os.makedirs(IMAGES_TEST_DIR, exist_ok=True)
ANNOT_TRAIN_DIR = os.path.join(train_dir, 'annotations')
ANNOT_VAL_DIR = os.path.join(val_dir, 'annotations')
ANNOT_TEST_DIR = os.path.join(test_dir, 'annotations')
os.makedirs(ANNOT_TRAIN_DIR, exist_ok=True)
os.makedirs(ANNOT_VAL_DIR, exist_ok=True)
os.makedirs(ANNOT_TEST_DIR, exist_ok=True)
# Get all filenames for this dir, filtered by filetype
filenames = os.listdir(os.path.join(images_path))
filenames = [os.path.join(images_path, f) for f in filenames if (f.endswith('.jpg'))]
# Shuffle the files, deterministically
filenames.sort()
random.seed(42)
random.shuffle(filenames)
# Get exact number of images for validation and test; the rest is for training
val_count = int(len(filenames) * val_split)
test_count = int(len(filenames) * test_split)
for i, file in enumerate(filenames):
source_dir, filename = os.path.split(file)
annot_file = os.path.join(annotations_path, filename.replace("jpg", "xml"))
if i < val_count:
shutil.copy(file, IMAGES_VAL_DIR)
shutil.copy(annot_file, ANNOT_VAL_DIR)
elif i < val_count + test_count:
shutil.copy(file, IMAGES_TEST_DIR)
shutil.copy(annot_file, ANNOT_TEST_DIR)
else:
shutil.copy(file, IMAGES_TRAIN_DIR)
shutil.copy(annot_file, ANNOT_TRAIN_DIR)
return (train_dir, val_dir, test_dir)
# We need to instantiate a separate DataLoader for each split dataset
if use_custom_dataset:
if dataset_is_split:
train_data = object_detector.DataLoader.from_pascal_voc(
train_images_dir, train_annotations_dir, label_map=label_map)
validation_data = object_detector.DataLoader.from_pascal_voc(
val_images_dir, val_annotations_dir, label_map=label_map)
test_data = object_detector.DataLoader.from_pascal_voc(
test_images_dir, test_annotations_dir, label_map=label_map)
else:
train_dir, val_dir, test_dir = split_dataset(images_in, annotations_in,
val_split=0.2, test_split=0.2,
out_path='split-dataset')
train_data = object_detector.DataLoader.from_pascal_voc(
os.path.join(train_dir, 'images'),
os.path.join(train_dir, 'annotations'), label_map=label_map)
validation_data = object_detector.DataLoader.from_pascal_voc(
os.path.join(val_dir, 'images'),
os.path.join(val_dir, 'annotations'), label_map=label_map)
test_data = object_detector.DataLoader.from_pascal_voc(
os.path.join(test_dir, 'images'),
os.path.join(test_dir, 'annotations'), label_map=label_map)
print(f'train count: {len(train_data)}')
print(f'validation count: {len(validation_data)}')
print(f'test count: {len(test_data)}')
Explanation: Now you're ready to train the model with your custom dataset. But before you run the notebook, you should also skip to the Export to TensorFlow Lite section and change the TFLITE_FILENAME and LABELS_FILENAME for your exported files.
Then run the whole notebook by clicking Runtime > Run all.
End of explanation
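For example, with the hypothetical apple/banana dataset above, the exported filenames might look like this (a sketch — any names work, as long as you set them in the Export to TensorFlow Lite section below):
# Hypothetical filenames for the apple/banana example; set these in the Export to TensorFlow Lite section.
# TFLITE_FILENAME = 'efficientdet-lite-fruit.tflite'
# LABELS_FILENAME = 'fruit-labels.txt'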
spec = object_detector.EfficientDetLite0Spec()
Explanation: Select the model spec
Model Maker supports the EfficientDet-Lite family of object detection models that are compatible with the Edge TPU. (EfficientDet-Lite is derived from EfficientDet, which offers state-of-the-art accuracy in a small model size). There are several model sizes you can choose from:
| Model architecture | Size(MB)* | Latency(ms)** | Average Precision*** |
|--------------------|-----------|---------------|-----------------------|
| EfficientDet-Lite0 | 5.7 | 37.4 | 30.4% |
| EfficientDet-Lite1 | 7.6 | 56.3 | 34.3% |
| EfficientDet-Lite2 | 10.2 | 104.6 | 36.0% |
| EfficientDet-Lite3 | 14.4 | 107.6 | 39.4% |

*File size of the compiled Edge TPU models. **Latency measured on a desktop CPU with a Coral USB Accelerator. ***Average Precision is the mAP (mean Average Precision) on the COCO 2017 validation dataset.
Beware that the Lite2 and Lite3 models do not fit into the Edge TPU's onboard memory, so you'll see even greater latency when using those, due to the cost of fetching data from the host system memory. This extra latency may be acceptable for your application, but if it isn't and you require the precision of the larger models, you can pipeline the model across multiple Edge TPUs (more about this when we compile the model below).
For this tutorial, we'll use Lite0:
End of explanation
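If you want to trade speed for accuracy, the other model sizes follow the same naming pattern. A sketch, assuming the Lite1–Lite3 spec classes are available in your tflite_model_maker version:
# A sketch: swap in a larger EfficientDet-Lite variant (assumes these spec classes exist in your
# tflite_model_maker version, mirroring the EfficientDetLite0Spec used above).
# spec = object_detector.EfficientDetLite1Spec()
# spec = object_detector.EfficientDetLite2Spec()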
model = object_detector.create(train_data=train_data,
model_spec=spec,
validation_data=validation_data,
epochs=50,
batch_size=10,
train_whole_model=True)
Explanation: The EfficientDetLite0Spec constructor also supports several arguments that specify training options, such as the max number of detections (default is 25 for the TF Lite model) and whether to use Cloud TPUs for training. You can also use the constructor to specify the number of training epochs and the batch size, but you can also specify those in the next step.
Create and train the model
Now we need to create our model according to the model spec, load our dataset into the model, specify training parameters, and begin training.
Using Model Maker, we can accomplish all of that with create():
End of explanation
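A hedged sketch of those spec-level options — the keyword names below (tflite_max_detections, strategy, tpu) are assumptions about the constructor's signature, so verify them against your installed Model Maker version:
# A hedged sketch of spec-level training options; the keyword arguments are assumptions.
# spec = object_detector.EfficientDetLite0Spec(
#     tflite_max_detections=25,      # max detections in the exported TF Lite model (assumed default: 25)
#     strategy='tpu',                # train on a Cloud TPU (assumed option)
#     tpu='grpc://10.240.1.2:8470')  # hypothetical Cloud TPU address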
model.evaluate(test_data)
Explanation: Evaluate the model
Now we'll use the test dataset to evaluate how well the model performs with data it has never seen before.
The evaluate() method provides output in the style of COCO evaluation metrics:
End of explanation
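As a small sketch, assuming evaluate() returns a dict of COCO-style metrics (keys such as 'AP' and 'AP50') and accepts an explicit batch_size:
# A hedged sketch: capture the COCO-style metrics dict and override the evaluation batch size.
# metrics = model.evaluate(test_data, batch_size=25)
# print('mAP: %.3f  AP50: %.3f' % (metrics['AP'], metrics['AP50']))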
TFLITE_FILENAME = 'efficientdet-lite-salad.tflite'
LABELS_FILENAME = 'salad-labels.txt'
model.export(export_dir='.', tflite_filename=TFLITE_FILENAME, label_filename=LABELS_FILENAME,
export_format=[ExportFormat.TFLITE, ExportFormat.LABEL])
Explanation: Because the default batch size for EfficientDetLite models is 64, this needs only 1 step to go through all 25 images in the salad test set. You can also specify the batch_size argument when you call evaluate().
Export to TensorFlow Lite
Next, we'll export the model to the TensorFlow Lite format. By default, the export() method performs full integer post-training quantization, which is exactly what we need for compatibility with the Edge TPU. (Model Maker uses the same dataset we gave to our model spec as a representative dataset, which is required for full-int quantization.)
We just need to specify the export directory and format. By default, it exports to TF Lite, but we also want a labels file, so we declare both:
End of explanation
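The same export() call can also produce other artifacts in one pass — for example a SavedModel — assuming ExportFormat.SAVED_MODEL is available in your Model Maker version:
# A hedged sketch: additionally export a SavedModel alongside the TF Lite file and labels.
# model.export(export_dir='.',
#              export_format=[ExportFormat.TFLITE, ExportFormat.LABEL, ExportFormat.SAVED_MODEL])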
model.evaluate_tflite(TFLITE_FILENAME, test_data)
Explanation: Evaluate the TF Lite model
Exporting the model to TensorFlow Lite can affect the model accuracy, due to the reduced numerical precision from quantization and because the original TensorFlow model uses per-class non-max suppression (NMS) for post-processing, while the TF Lite model uses global NMS, which is faster but less accurate.
Therefore you should always evaluate the exported TF Lite model and be sure it still meets your requirements:
End of explanation
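A minimal sketch of such a check, assuming evaluate() and evaluate_tflite() return dicts with matching COCO-metric keys:
# A hedged sketch: compare the float model and the exported TF Lite model metric by metric.
# float_metrics = model.evaluate(test_data)
# tflite_metrics = model.evaluate_tflite(TFLITE_FILENAME, test_data)
# for key, value in float_metrics.items():
#     print(f'{key}: float={value:.3f}  tflite={tflite_metrics.get(key, float("nan")):.3f}')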
import random
# If you're using a custom dataset, we take a random image from the test set:
if use_custom_dataset:
images_path = test_images_dir if dataset_is_split else os.path.join(test_dir, "images")
filenames = os.listdir(os.path.join(images_path))
random_index = random.randint(0,len(filenames)-1)
INPUT_IMAGE = os.path.join(images_path, filenames[random_index])
else:
# Download a test salad image
INPUT_IMAGE = 'salad-test.jpg'
DOWNLOAD_URL = "https://storage.googleapis.com/cloud-ml-data/img/openimage/3/2520/3916261642_0a504acd60_o.jpg"
!wget -q -O $INPUT_IMAGE $DOWNLOAD_URL
Explanation: Try the TFLite model
Just to be sure of things, let's run the model ourselves with an image from the test set.
End of explanation
! python3 -m pip install --extra-index-url https://google-coral.github.io/py-repo/ pycoral
import numpy as np
from PIL import Image
from PIL import ImageDraw
from PIL import ImageFont
import tflite_runtime.interpreter as tflite
from pycoral.adapters import common
from pycoral.adapters import detect
from pycoral.utils.dataset import read_label_file
def draw_objects(draw, objs, scale_factor, labels):
  """Draws the bounding box and label for each object."""
COLORS = np.random.randint(0, 255, size=(len(labels), 3), dtype=np.uint8)
for obj in objs:
bbox = obj.bbox
color = tuple(int(c) for c in COLORS[obj.id])
draw.rectangle([(bbox.xmin * scale_factor, bbox.ymin * scale_factor),
(bbox.xmax * scale_factor, bbox.ymax * scale_factor)],
outline=color, width=3)
font = ImageFont.truetype("LiberationSans-Regular.ttf", size=15)
draw.text((bbox.xmin * scale_factor + 4, bbox.ymin * scale_factor + 4),
'%s\n%.2f' % (labels.get(obj.id, obj.id), obj.score),
fill=color, font=font)
# Load the TF Lite model
labels = read_label_file(LABELS_FILENAME)
interpreter = tflite.Interpreter(TFLITE_FILENAME)
interpreter.allocate_tensors()
# Resize the image for input
image = Image.open(INPUT_IMAGE)
_, scale = common.set_resized_input(
interpreter, image.size, lambda size: image.resize(size, Image.ANTIALIAS))
# Run inference
interpreter.invoke()
objs = detect.get_objects(interpreter, score_threshold=0.4, image_scale=scale)
# Resize again to a reasonable size for display
display_width = 500
scale_factor = display_width / image.width
height_ratio = image.height / image.width
image = image.resize((display_width, int(display_width * height_ratio)))
draw_objects(ImageDraw.Draw(image), objs, scale_factor, labels)
image
Explanation: To simplify our code, we'll use the PyCoral API:
End of explanation
! curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
! echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | sudo tee /etc/apt/sources.list.d/coral-edgetpu.list
! sudo apt-get update
! sudo apt-get install edgetpu-compiler
Explanation: Compile for the Edge TPU
First we need to download the Edge TPU Compiler:
End of explanation
NUMBER_OF_TPUS = 1
!edgetpu_compiler $TFLITE_FILENAME -d --num_segments=$NUMBER_OF_TPUS
Explanation: Before compiling the .tflite file for the Edge TPU, it's important to consider whether your model will fit into the Edge TPU memory.
The Edge TPU has approximately 8 MB of SRAM for caching model parameters, so any model close to or over 8 MB will not fit into the Edge TPU memory. That means inference times are longer, because some model parameters must be fetched from the host system memory.
One way to eliminate the extra latency is to use model pipelining, which splits the model into segments that can run on separate Edge TPUs in series. This can significantly reduce the latency for big models.
The following table provides recommendations for the number of Edge TPUs to use with each EfficientDet-Lite model.
| Model architecture | Minimum TPUs | Recommended TPUs |
|--------------------|-------|-------|
| EfficientDet-Lite0 | 1 | 1 |
| EfficientDet-Lite1 | 1 | 1 |
| EfficientDet-Lite2 | 1 | 2 |
| EfficientDet-Lite3 | 2 | 2 |
| EfficientDet-Lite4 | 2 | 3 |
If you need extra Edge TPUs for your model, then update NUMBER_OF_TPUS here:
End of explanation
from google.colab import files
files.download(TFLITE_FILENAME)
files.download(TFLITE_FILENAME.replace('.tflite', '_edgetpu.tflite'))
files.download(LABELS_FILENAME)
Explanation: Beware when using multiple segments: The Edge TPU Compiler divides the model such that all segments have roughly equal amounts of parameter data, but that does not mean all segments have the same latency. Especially when dividing an SSD model such as EfficientDet, this results in a latency imbalance between segments, because SSD models have a large post-processing op that actually executes on the CPU, not on the Edge TPU. So although segmenting your model this way is better than running the whole model on just one Edge TPU, we recommend that you segment the EfficientDet-Lite model using our profiling-based partitioner tool, which measures each segment's latency on the Edge TPU and then iteratively adjusts the segmentation sizes to provide balanced latency between all segments.
Download the files
End of explanation |
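Once the compiled model and labels file are on a device with an Edge TPU, inference looks much like the CPU test above, except the interpreter is created through the Edge TPU runtime. A minimal sketch using PyCoral (run it on the device, not in Colab; the filename matches the compiled file downloaded above):
# A hedged sketch of on-device inference with the compiled model.
# from pycoral.utils.edgetpu import make_interpreter
# from pycoral.adapters import common, detect
#
# interpreter = make_interpreter('efficientdet-lite-salad_edgetpu.tflite')  # compiled file from above
# interpreter.allocate_tensors()
# # Then resize the input, run interpreter.invoke(), and read detections exactly as in the CPU test:
# # common.set_resized_input(...), detect.get_objects(interpreter, score_threshold=0.4, image_scale=scale)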